From the course: Designing Database Solutions for SQL Server 2016

Translate business needs to structures

- [Instructor] So your organization has determined that a database is required to support a particular business need. Where do you start? The answer is with a lot of planning and the development of something called a logical database model. Preparing to build a database begins with fully understanding the business requirements for the system and the capabilities it's expected to provide. For instance, when a customer places an order for a product, the database will first need to verify stock on hand, deduct the order quantity from the current inventory, check pricing and shipping costs, generate an invoice, collect billing and shipping details from the customer, prepare the shipping manifest, and finally, send the order to the warehouse for fulfillment. The process of identifying all of those moving parts of the system comes in the form of a logical model of the business's needs. With a logical model, you're concerned with the tasks that the database must perform, not how the database will perform them. Information on how the business operates, the types of data that are generated, and the rules that govern the validity of the data need to be organized and understood. The end result of this research will typically be something called an entity relationship diagram, or ERD. The ERD describes all the data that is collected and how those pieces of data relate to one another. Another component, the process diagram, will then take that information and further describe how the information moves throughout the system. With a comprehensive list of requirements in hand, you can get to work modeling how the system will operate. If you have any prior experience developing databases, then it's almost certain that you've come across the concept of data normalization. Normalization rules outline how data should be structured to maximize storage and retrieval efficiency. 
The normalization process breaks data down into discrete components and ensures that data tables include proper key values to aid in data retrieval. But sometimes, what's efficient for a computer is at odds with what's convenient for a human user. If a database user needs to regularly piece together data columns from several related data tables based on internally managed primary and foreign key pairings, then the process of getting answers can be a little bit cumbersome. SQL Server provides a number of options to help bridge this gap between efficiency and convenience that should be considered when developing a database's structure, all of which will still allow you to follow database normalization best practices. The first option is to leverage indexed views in the database's design. In a typical view object, SQL Server stores the instructions for how to piece together various data tables based on key value pairs. Users can query these views as they would any normal data table, and they effectively ease the experience of pulling information out of normalized tables. Behind the scenes, though, SQL Server will still need to perform multiple table lookups, and the query performs no differently than if the user had issued the more complex SQL statement that the view encompasses. Instead, with an indexed view, SQL Server maintains a duplicate copy of the denormalized data within the view object itself. This is a process called materialization. Materialization does create a performance hit when data is written to the underlying tables, since the indexed view must be updated as well. However, when querying information out of an indexed view, SQL Server doesn't need to return to the tables, and results can be retrieved more quickly. View objects can also be created across partitioned tables. With partitioned tables, the records are divided horizontally based on some criteria. 
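To make the indexed view idea concrete, here's a minimal T-SQL sketch. The table, column, and view names are hypothetical; the essential requirements are that the view is created with SCHEMABINDING using two-part object names, and that a unique clustered index is then built on it, which is what materializes the joined rows.

```sql
-- Hypothetical dbo.Orders and dbo.Customers tables; names are illustrative.
-- SCHEMABINDING and two-part object names are required for indexed views.
CREATE VIEW dbo.vOrderSummary
WITH SCHEMABINDING
AS
SELECT o.OrderID, o.OrderDate, o.TotalDue, c.CustomerName
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID;
GO

-- Creating a unique clustered index materializes the view: the joined
-- rows are stored with the view, so reads no longer revisit the tables.
CREATE UNIQUE CLUSTERED INDEX IX_vOrderSummary
    ON dbo.vOrderSummary (OrderID);
```

Once the clustered index exists, queries against dbo.vOrderSummary can read the stored results directly, at the cost of extra maintenance work whenever dbo.Orders or dbo.Customers is modified.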
For instance, each region of a company might store its own records on local servers in table partitions, but the home office can view all of these distributed partitions together as if they were a single table. Queries against a distributed partitioned view can then be targeted to specific regions, without needing to pull records from the entire company. Another option to increase retrieval convenience is to create additional indexes directly on the tables that only include a subset of the available data. Indexes, by definition, include copies of the data values that they index. If queries performed on the database typically only target a small subset of records, for instance, the vast majority of queries to an orders table are for orders placed within the last three days, then the database administrator should take advantage of a filtered index built for that use case. A filtered index can be regenerated nightly to only include orders placed in the last 72 hours. Any queries within that timeframe can leverage the index, and queries for orders outside of that timeframe will fall back to the full record set in the original table. SQL Server also supports building additional indexes on non-key columns. A clustered index is usually created on a table's primary key by default, but this isn't a requirement. In some situations, it might make more sense to create the clustered index on a non-key column, if that column is used in table joins and queries more often. Or, additional non-clustered indexes can be added to any columns that appear regularly in queries and joins. SQL Server provides a number of resources to identify where these indexes would best be deployed, and we'll look at some of them later in the course. So with a little insight into the capabilities at the disposal of a SQL Server database administrator, a database's structure can be developed that meets the needs of the business while providing end users with convenient access to the information being stored.
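As a rough sketch of the regional scenario, a partitioned view is simply a UNION ALL over member tables whose CHECK constraints keep each region's rows separate; the table and column names here are hypothetical. In a distributed partitioned view, some member tables would live on linked servers and be referenced by four-part names.

```sql
-- Hypothetical regional member tables. A CHECK constraint on each one
-- (e.g. RegionID = 1 in Orders_East, RegionID = 2 in Orders_West) lets
-- SQL Server route a query that filters on RegionID to just the
-- relevant member table instead of scanning every region.
CREATE VIEW dbo.vAllOrders
AS
SELECT OrderID, RegionID, OrderDate, TotalDue FROM dbo.Orders_East
UNION ALL
SELECT OrderID, RegionID, OrderDate, TotalDue FROM dbo.Orders_West;
```

A query such as `SELECT * FROM dbo.vAllOrders WHERE RegionID = 1` can then be satisfied from a single member table, which is what makes targeting specific regions possible.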
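The filtered index scenario for recent orders might look something like this sketch, again with hypothetical table and column names. One detail worth noting: the WHERE clause of a filtered index must use a fixed, deterministic value rather than a function like GETDATE(), which is one reason the index would be dropped and re-created nightly with a new cutoff date.

```sql
-- Hypothetical filtered index covering only recent orders. A nightly
-- job would drop and re-create it with an updated cutoff date, since
-- the filter predicate cannot reference GETDATE().
CREATE NONCLUSTERED INDEX IX_Orders_Recent
    ON dbo.Orders (OrderDate)
    INCLUDE (CustomerID, TotalDue)
    WHERE OrderDate >= '20160601';
```

Queries that filter on OrderDate within the covered window can use this small index; queries for older orders fall back to the table's full record set.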
