18 Aug, 2024

Basic Index Maintenance in SQL Server

Indexes are crucial in SQL Server for optimizing query performance. They work like a book’s index, enabling SQL Server to locate data quickly without scanning the entire table. However, indexes can become fragmented over time, leading to performance degradation. This article outlines basic index maintenance tasks in SQL Server to keep your indexes efficient and effective.

Understanding Index Fragmentation

Before diving into index maintenance, it’s important to understand fragmentation. Fragmentation occurs when the logical order of pages in an index does not match the physical order. This misalignment can slow down query performance because SQL Server needs to read more pages than necessary.

There are two main types of fragmentation:

  1. Internal Fragmentation: This happens when pages contain unused space due to page splits, where SQL Server divides a page into two because of data insertion.
  2. External Fragmentation: Occurs when the logical order of index pages does not match their physical order on disk.

Checking Index Fragmentation

Here’s a query to check the fragmentation:
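A typical version of this query uses the sys.dm_db_index_physical_stats dynamic management function (shown here in 'LIMITED' scan mode, the cheapest option; adjust the filters to suit your environment):

```sql
-- Average fragmentation for every index in the current database
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
INNER JOIN sys.indexes AS i
    ON ips.object_id = i.object_id
   AND ips.index_id = i.index_id
WHERE ips.index_id > 0  -- exclude heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;
```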

This query returns the average fragmentation percentage of each index in the database. High fragmentation levels (generally above 30%) indicate that the index may need maintenance.

Reorganizing Indexes

Index Reorganization is a process that defragments an index by reordering the leaf-level pages to match the logical order. It’s a lightweight operation that can be performed while the database remains online and available to users. For more details, you can refer to Microsoft’s documentation on ALTER INDEX…REORGANIZE.

To reorganize an index, use the following command:
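A minimal sketch (the index and table names here are placeholders):

```sql
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;
```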

Or to reorganize all indexes on a table:
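Using the same placeholder table name:

```sql
ALTER INDEX ALL ON dbo.Orders REORGANIZE;
```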

Reorganizing is recommended when fragmentation is between 5% and 30%.

Rebuilding Indexes

Index Rebuilding is a more intensive operation that drops and recreates the index. It removes fragmentation by rebuilding the entire index structure from scratch. This operation can be performed online or offline, but it requires more resources than reorganization. More information on this can be found in the Microsoft documentation on ALTER INDEX…REBUILD.

To rebuild an index, use the following command:
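Again with placeholder names; note that the ONLINE option requires an edition that supports online index operations (e.g. Enterprise):

```sql
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;

-- Online rebuild, where the edition supports it:
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD WITH (ONLINE = ON);
```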

Or to rebuild all indexes on a table:
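With the same placeholder table:

```sql
ALTER INDEX ALL ON dbo.Orders REBUILD;
```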

Rebuilding indexes is typically recommended when fragmentation exceeds 30%.

Automating Index Maintenance

Regular index maintenance is crucial, and automating this process can save time and ensure consistent performance. You can use SQL Server Agent to schedule regular index maintenance tasks.

Here’s a simple SQL script that can be scheduled:
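One possible version of such a script, using a cursor over sys.dm_db_index_physical_stats (the 5%/30% thresholds and the 1,000-page minimum are illustrative and should be tuned for your workload):

```sql
-- Reorganize indexes fragmented 5-30%, rebuild those above 30%
DECLARE @sql nvarchar(max);

DECLARE index_cursor CURSOR FOR
    SELECT 'ALTER INDEX ' + QUOTENAME(i.name)
         + ' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name)
         + CASE WHEN ips.avg_fragmentation_in_percent > 30
                THEN ' REBUILD;'
                ELSE ' REORGANIZE;'
           END
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    INNER JOIN sys.indexes AS i
        ON ips.object_id = i.object_id
       AND ips.index_id = i.index_id
    INNER JOIN sys.objects AS o
        ON ips.object_id = o.object_id
    WHERE ips.avg_fragmentation_in_percent >= 5
      AND ips.page_count > 1000       -- skip tiny indexes; maintenance gains little
      AND i.name IS NOT NULL          -- skip heaps
      AND o.is_ms_shipped = 0;

OPEN index_cursor;
FETCH NEXT FROM index_cursor INTO @sql;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @sql;
    FETCH NEXT FROM index_cursor INTO @sql;
END;
CLOSE index_cursor;
DEALLOCATE index_cursor;
```

A script like this can be wrapped in a SQL Server Agent job step and scheduled during a quiet period.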

This script will check the fragmentation level of each index and decide whether to reorganize or rebuild based on the fragmentation percentage.

Conclusion

Proper index maintenance is essential for optimal database performance. Regularly check your indexes for fragmentation and use the appropriate maintenance strategy—reorganize or rebuild—based on the level of fragmentation. Automating these tasks ensures your SQL Server database continues to perform efficiently without manual intervention. For more advanced techniques and options, you can refer to Microsoft’s documentation on index maintenance.

Do you need an external pair of eyes to review your SQL Server architecture?

Maybe your platform development and growth have been organic rather than going through a full enterprise architecture design. You have a good dev team but no DBA input. Improvements are driven by seeing real-world ‘failures’ rather than gaming them out at the dev/QA stage.

I can offer a full platform evaluation, covering questions such as:

• What is your usage (required IOPS, memory, CPU, and space)?

• Real world behaviour, usage patterns, etc.

• Routine maintenance review

• Backups / failover review

• UAT environment review

• Monitoring / metrics review

Some of the questions I can investigate:

• Are you wrongly sized?

• Are you on the right platform?

• Are you making best use of what you have?

• What is the best model for your business?

• What level of DBA input do you need?

I can provide all of the above via my two-day “Health Check”. Click here for more info

14 Aug, 2024

What are the ‘master’, ‘msdb’, ‘model’ and ‘tempdb’ databases in SQL Server, and do I need to worry about them?

In SQL Server, four essential system databases—master, msdb, model, and tempdb—are foundational to the server’s functionality. Here’s what they do and whether you need to be concerned about them:

1. Master Database

The master database is the most critical database in SQL Server. It stores all the system-level information such as server configuration, logins, and metadata for all other databases on the server. If the master database is corrupted or lost, the SQL Server might not start. Yes, you need to worry about this database. Regular backups and careful management are crucial to maintaining the integrity of your SQL Server instance.

2. msdb Database

The msdb database supports SQL Server Agent, which manages scheduled tasks like backups, job scheduling, and alerts. It also stores information for Database Mail, SSIS packages, and other SQL Server features. While the msdb database is less critical than the master database, you should still monitor and back it up regularly to ensure that scheduled jobs and automation processes run smoothly.
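As a sketch, a simple scheduled backup of both system databases might look like this (the backup path is a placeholder):

```sql
BACKUP DATABASE master TO DISK = N'D:\Backups\master.bak' WITH INIT, CHECKSUM;
BACKUP DATABASE msdb   TO DISK = N'D:\Backups\msdb.bak'   WITH INIT, CHECKSUM;
```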

3. Model Database

The model database serves as a template for creating new databases. Whenever you create a new database, it inherits settings, schemas, and objects from the model database. If you need all new databases to start with specific settings or structures, you would modify the model database. You generally don’t need to worry about this database unless you have specific requirements for new databases.
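For example, to have every new database inherit a standard table, you could create it in model. This is a hypothetical illustration; the table and its columns are placeholders:

```sql
USE model;
GO

-- Every database created after this point will contain dbo.AuditLog
CREATE TABLE dbo.AuditLog (
    LogID   int IDENTITY(1, 1) PRIMARY KEY,
    LogTime datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
    Message nvarchar(4000) NULL
);
```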

4. tempdb Database

The tempdb database is a temporary storage area used by SQL Server for operations like sorting, joining large datasets, and storing temporary tables. It’s recreated every time the server restarts, which means it only contains temporary data. However, you do need to monitor tempdb, as it can grow quickly and affect performance if not managed properly. Keeping an eye on its size and usage is important, especially under heavy workloads.
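One way to keep an eye on tempdb is the sys.dm_db_file_space_usage DMV, which breaks page usage down by consumer (pages are 8 KB each, hence the arithmetic):

```sql
SELECT SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
       SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
       SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
       SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_space_mb
FROM tempdb.sys.dm_db_file_space_usage;
```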

Should You Worry About Them?

  • Master: Absolutely, because it’s vital to the server’s operation.
  • msdb: Yes, particularly if your server relies on scheduled tasks and automation.
  • Model: Not usually, unless you need specific defaults for new databases.
  • tempdb: Yes, especially in high-traffic environments, to avoid performance issues.

In addition to understanding the roles of these system databases, it’s also important to consider their sizing and storage needs. The master and msdb databases are typically small, as they primarily store configuration data, system-level information, and job schedules. Their importance warrants storing them on reliable, fast storage, though they don’t usually require a dedicated SSD unless you are running a very large and complex SQL Server instance.

The model database is also small, since it only serves as a template, so it doesn’t require much storage space and can comfortably reside on the same disk as the master and msdb databases.

The tempdb database, however, can grow significantly depending on the workload, especially in environments with heavy transactional processing, complex queries, or large data sorting operations. Because tempdb is frequently used and can become a bottleneck, placing it on a high-performance SSD is often recommended. For high-performance environments, tempdb should ideally be on its own SSD to ensure that its heavy read/write operations do not interfere with the performance of other databases.

In summary, while the master, msdb, and model databases do not typically require separate SSD storage, tempdb greatly benefits from being placed on a dedicated SSD, especially in high-traffic environments, to ensure optimal performance and avoid potential slowdowns.

While the master and tempdb databases require the most careful attention, the msdb and model databases should also be regularly monitored and maintained to ensure the smooth operation of your SQL Server environment.

Do you need an external pair of eyes to review your SQL Server architecture?


I can provide all of the above via my two-day “SQL Server Health Check” for just £950 plus VAT. Click here for more info

12 Aug, 2024

Building a Data Warehouse

Why use us for Data Warehousing and Data Engineering?

We have implemented many data warehouses in SQL Server, using SSIS and SQL stored procedures for ETL / ELT.

We’re highly experienced and proficient in T-SQL, stored procedures, triggers, SQL troubleshooting, and using SSIS to create fast, accurate and secure data pipelines.

More recently, we have been involved in Microsoft Power BI and DAX to create compelling reports and dashboards for all sizes of business. As part of this, we have used SSRS to create paginated detail reports.

Other roles include presenting, providing training, and passing on SQL Server / Power BI and DAX skills to other team members.

Our Data Warehousing Experience

For ETL/ELT, we have written many, many SSIS and SQL scripts, covering advanced data transformations, database change tracking, slowly changing dimensions (SCDs), custom scripts in VB and C#, connections to disparate OLE DB and ODBC data sources, error tracking, and deployment.

What are the Challenges when designing and implementing a Data Warehouse?

Designing a data warehouse requires a solid understanding of the organization’s data requirements, as well as the data sources that are available. The following are the key considerations in designing a data warehouse:

Data Sources
Determine the types of data sources that need to be included in the data warehouse, including transactional systems, legacy systems, and cloud-based systems. The data sources need to be integrated into the data warehouse to ensure that the data is consistent, accurate, and up-to-date.

Data Modeling
Define the data structure for the data warehouse, including the relationships between tables, data elements, and attributes. The data model should also take into account the organization’s data requirements, including the types of reports and analysis that will be performed.

ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform)
Develop an ETL or ELT process to extract data from the sources, transform the data into a format that is suitable for the data warehouse, and load the data into the data warehouse. This process should be automated and regularly scheduled to ensure that the data in the data warehouse is up-to-date.

Data Quality
Ensure the data quality of the data in the data warehouse by implementing data validation and data cleansing rules. This helps to ensure that the data is accurate and consistent, and minimizes the risk of incorrect data being used for analysis and reporting.

Performance
Ensure that the data warehouse is optimized for performance, including the indexing of data, the use of summary tables, and the use of materialized views. This helps to ensure that the data can be retrieved quickly and efficiently for analysis and reporting.

Security
Implement security measures to ensure that the data in the data warehouse is protected from unauthorized access, modification, and deletion. This includes access controls, data encryption, and audit trails.

Scalability
Ensure that the data warehouse can scale to accommodate growing data volumes and increased demand for analysis and reporting. This includes the use of distributed systems, data partitioning, and load balancing.

Maintenance
Develop a maintenance plan for the data warehouse, including regular backups, data archiving, and disaster recovery planning. This helps to ensure that the data warehouse is available and reliable for analysis and reporting.

10 Aug, 2024

Recovering an MSDB database

Recovering an MSDB database in SQL Server marked as ‘suspect’.

Recently, after a server reboot, we got a message that MSDB was in a suspect state. Panic ensued. We had to repair it! We thought it would be fairly straightforward to fix, but it was actually more complex than we expected.

Attempt 1: Restore from a backup.

Well, our first thought was to simply restore the database from last night’s backup. Right? Wrong! We logged a ticket with the server team, asking for the backup to be made available. However, it turned out that there was no backup for this server (it’s only a dev server), so we needed another solution. Even though it was only a dev server, it contained many months of work, with new tables, stored procedures, and triggers existing only within it and nowhere else.

Attempt 2: Copy another SQL Server MSDB backup.

Our next thought was to restore an MSDB backup from a similar SQL Server instance with the same version and build number. This solution would, of course, result in some data loss. You also have to restore from T-SQL rather than from the SQL Server Management Studio GUI!

Unfortunately, we couldn’t find a backup that matched the dev server’s version and build number, so this solution was not possible either. We needed to try something else!

Attempt 3: Use a SQL Server database template.

Fortunately, there is another solution. When you install SQL Server, the install process creates a template for each of the system databases. For our version of SQL Server (2022), the templates are stored in C:\Program Files\Microsoft SQL Server\MSSQL16.MSSQLSERVER\MSSQL\Binn\Templates.

We were able to shut down the instance of SQL Server, copy the template into place, and restart. The SQL Server instance started without any warnings or errors.
At this point, we still had to recreate things like jobs and mail configuration, as we didn’t have any backups of those as SQL scripts.
In conclusion, I guess the point of all this is that you need backups and a proper disaster recovery procedure, even on a dev server, because you never know when they might just save your skin.

7 Aug, 2024

The Importance of Comments, Neatness, and Indentation in Code


Writing computer code is not just about making a program run; it’s about creating a clear, maintainable, and collaborative foundation for ongoing development. Three crucial elements in achieving this are comments, neatness, and indentation. These aspects are vital not only for the individual programmer but also for the cohesiveness of a development team.

Comments: Communicating Intent

Comments are the annotations in the code that explain what specific sections do and why certain decisions were made. They serve several important purposes:
1. Clarification: Comments help clarify complex logic, making it easier for others (or even the original coder at a later date) to understand the code quickly.
2. Documentation: Comments act as an internal documentation system, outlining how functions and modules interact. This documentation facilitates the easier onboarding of new team members.
3. Debugging: During debugging, comments can help identify the purpose of each code segment. This makes it easier to locate and fix issues.

Without comments, code can become a tangled mess, making it difficult to maintain and expand.

Neatness: Enhancing Readability

Neat code is organized code. Maintaining neatness means ensuring that the code is presented in a logical and structured manner. This includes:
1. Consistent Naming Conventions: Variables, functions, and classes should follow a naming convention that is agreed upon by the team. This enhances predictability and readability.
2. Logical Structure: Code should be grouped logically. Related functions and variables should be placed together to create a clear narrative.
3. Minimal Redundancy: Avoiding unnecessary repetition and keeping the code DRY (Don’t Repeat Yourself) helps in maintaining neatness.

Neat code is not only easier to read but also easier to debug and extend.

Indentation: Creating Visual Hierarchy

Indentation is the practice of using spaces or tabs to visually separate code blocks, such as loops, conditionals, and functions. Proper indentation:
1. Defines Scope: It clearly shows the structure of the code and the relationship between different blocks, which is essential for understanding the flow and logic.
2. Prevents Errors: Many programming languages rely on indentation to determine the grouping of statements, thus preventing syntax errors.
3. Improves Readability: Well-indented code is easier to scan and understand, making it more accessible to all team members.

Team Standardization: A Unified Approach

For comments, neatness, and indentation to be effective, the entire team must adhere to the same standards. This unified approach ensures:
1. Consistency: Consistent coding practices make it easier for team members to read and understand each other’s code.
2. Efficiency: A standardized codebase reduces the time spent on deciphering different coding styles. This allows more focus on development and problem-solving.
3. Maintainability: Uniform code is easier to maintain, test, and refactor, ensuring the long-term health of the project.

To achieve this, teams should establish and document a coding standard, which includes guidelines for comments, naming conventions, code structure, and indentation. Regular code reviews and automated tools like linters can help enforce these standards.

In conclusion, comments, neatness, and indentation are fundamental to writing high-quality code. They foster clear communication, enhance readability, and ensure the maintainability of the codebase. By adopting a standardized approach, development teams can work more effectively. This ensures that everyone is on the same page and contributing to a cohesive and robust software product.

5 Aug, 2024

The Great Coding Comma Debate

In the world of coding, seemingly minor stylistic choices can spark passionate debates (almost fights, if alcohol gets involved!) among developers.

One such debate centers around the placement of commas in multi-line lists or SQL Statements. Should commas be placed at the end of lines, or should they be at the start of the following line? While this may seem trivial to outsiders, the placement of commas can affect readability, version control, and overall code aesthetics.

Commas at the End of Lines
The traditional and most widely used approach is placing commas at the end of lines. This style is intuitive and mirrors the natural structure of sentences in written language, where commas typically follow words and phrases.

Example:
const user = {
  firstName: 'John',
  lastName: 'Doe',
  age: 30,
  email: 'john.doe@example.com'
};

Advantages:
Readability: Mirrors natural language syntax, making it easier for newcomers to understand.
Widespread Adoption: Most codebases and style guides adopt this convention, leading to consistency across many projects.
Tooling Support: Many code editors and linters are configured to handle this style seamlessly.

Disadvantages:
Error-Prone: It’s easy to forget the comma on the last item, leading to syntax errors.
Difficult Diffs: Adding a new item requires modifying the previous line, which can complicate version control diffs.

Commas at the Start of Lines
An alternative approach is placing commas at the start of lines. This style is less common but has its proponents who argue it offers distinct advantages.

Example:
const user = {
  firstName: 'John'
  , lastName: 'Doe'
  , age: 30
  , email: 'john.doe@example.com'
};

Advantages:
Clear Diffs: Adding or removing lines does not affect previous lines, resulting in cleaner version control diffs.

Visual Clarity: The start of each line indicates a continuation of the list, which some developers find enhances readability.

Disadvantages:
Unusual Syntax: This style is unconventional and can be jarring for developers accustomed to the traditional format.

The Bottom Line: Consistency is Key
While the debate over comma placement can be heated, the most important factor in any codebase is consistency. Whether a team opts for commas at the end of lines or at the start, adhering to a single style guide ensures that the code remains readable and maintainable for everyone involved. Consistent formatting reduces cognitive load, minimizes merge conflicts, and improves collaboration.

In conclusion, while personal and team preferences will influence the choice of comma placement, the ultimate goal is to maintain a uniform style throughout the project. As long as the entire team agrees on and adheres to a consistent convention, the specific placement of commas becomes a minor detail in the broader scope of effective and efficient programming.

23 Jul, 2024

Why the SQL TOP Statement Does Not Work in Oracle and What to Use Instead


Structured Query Language (SQL) is a fundamental tool for managing and querying relational databases. However, different database management systems (DBMS) have unique implementations and syntax variations. One such difference is the use of the TOP clause, which is prevalent in SQL Server but absent in Oracle Database. This article will explain why the TOP statement is not supported in Oracle and what alternatives can be used to achieve similar functionality.

The SQL TOP Statement: An Overview
In SQL Server, the TOP clause is used to limit the number of rows returned by a query. This is particularly useful for handling large datasets when only a subset of data is needed. The basic syntax for the TOP clause in SQL Server is:

SELECT TOP n * FROM table_name;

This command retrieves the first n rows from the specified table. The TOP clause is straightforward and convenient in SQL Server. However, this specific syntax is not recognized by Oracle Database.

Why Oracle Does Not Support the SQL TOP Statement
Oracle Database follows a different set of SQL standards and conventions. There are several reasons for the absence of the TOP clause in Oracle:


Adherence to SQL Standards: Oracle adheres closely to ANSI and ISO SQL standards, which do not include the TOP clause. The TOP clause is a proprietary feature introduced by Microsoft in SQL Server, and Oracle has chosen not to adopt this non-standard extension.

Alternative Mechanisms: Oracle offers other methods to achieve similar results as the TOP clause. These methods include the use of the ROWNUM pseudo-column, the FETCH FIRST clause (introduced in Oracle 12c), and the ROW_NUMBER() analytic function, each providing more standardized and flexible ways to limit query results.

Alternatives to the TOP Clause in Oracle
Using ROWNUM

The ROWNUM pseudo-column is an effective way to limit the number of rows returned by a query in Oracle. It assigns a unique number to each row in the result set, starting with 1. The syntax for using ROWNUM is:

SELECT * FROM table_name WHERE ROWNUM <= n;

This query retrieves the first n rows from the table. However, it is important to note that ROWNUM is assigned before the ORDER BY clause is applied, which can affect the result if sorting is needed.
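One common workaround is to apply the ORDER BY in a subquery and filter on ROWNUM outside it (the table and column names here are placeholders):

```sql
-- Top 10 highest-paid employees: sort first, then limit
SELECT *
FROM (
    SELECT * FROM employees ORDER BY salary DESC
)
WHERE ROWNUM <= 10;
```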

Using FETCH FIRST (Oracle 12c and Later)

Oracle 12c introduced the FETCH FIRST clause, which provides a more intuitive and standard-compliant way to limit rows. The syntax is:

SELECT * FROM table_name ORDER BY column_name FETCH FIRST n ROWS ONLY;

This query returns the first n rows based on the specified order. The FETCH FIRST clause is straightforward and aligns with SQL standards, making it a preferred method for limiting rows in newer Oracle versions.

Using ROW_NUMBER() Analytic Function

The ROW_NUMBER() analytic function is another powerful tool for limiting rows in Oracle. It assigns a unique number to each row within a result set, allowing for precise control over row selection. The syntax is:

SELECT *
FROM (
    SELECT column_name, ROW_NUMBER() OVER (ORDER BY column_name) AS rn
    FROM table_name
)
WHERE rn <= n;

This query retrieves the first n rows after ordering the results by the specified column. The ROW_NUMBER() function offers great flexibility, especially for complex queries involving multiple columns and partitions.

Conclusion – Why the TOP statement is not supported in Oracle
The absence of the TOP clause in Oracle is due to its commitment to SQL standards and its design philosophy. However, Oracle provides robust alternatives to achieve similar functionality. The ROWNUM pseudo-column, the FETCH FIRST clause, and the ROW_NUMBER() analytic function each offer unique advantages, allowing users to efficiently limit the number of rows returned by their queries. By understanding and utilizing these alternatives, Oracle users can effectively manage large datasets and tailor their queries to meet specific needs.

23 Jul, 2022

Do I Need a Data Warehouse for Power BI ?

Last Updated : 23rd October 2024

If you have been using Power BI for a while now, you may have heard that you should have a data warehouse. But do you really need one? In this article, we’ll discuss the arguments for and against.

What’s a Data Warehouse?

A data warehouse (DW) is a database which has been designed to hold the data required for reporting. It contains tables which have been designed optimally for use for tools such as Power BI. A DW is populated usually by scheduled feeds from source systems using ETL (Extract, Transform and Load) processes.

Using Power BI without a data warehouse

Power BI is a great tool. You can use it without any data warehouse at all. Just connect Power BI to some source databases, plus maybe some CSVs, Excel sources, and a few online services, and away you go. You can connect the data sources together in Power Query and the Power BI table designer, and come up with some great-looking reports in almost no time.

Advantages of using Power BI without a DW

This approach is fast and enables you to create reports and dashboards quickly. You can easily add new data sources and create reports with them within a few hours. All of the data modelling can be done in Power Query by less senior team members.

Stakeholders and end-users will love the way you’ll be able to get reports together quickly. You’ll be a data hero!

However, there is a price to be paid…

Disadvantages of using Power BI without a DW

Having no data warehouse with Power BI can lead to different teams and individuals creating their own data models, sometimes from the same or similar data. This can mean duplicated datasets and measures across the business, and different versions of the truth.

Data governance can be ignored or sidelined, giving poor data quality and slow / inaccurate reports.

The data model can become complex very quickly. Without proper data warehouse design, the model can become ambiguous, slow, bloated and confusing. Links between tables can become something of a hack-a-thon, leading to very complex DAX code requiring multiple CROSSFILTER and TREATAS statements just to make measures work.

When multiple source systems are being brought together, it can be difficult to join primary and foreign keys. This can lead to (another) messy hack in Power Query or DAX, just to make valid data joins.

Historical change tracking is very difficult, unless the source systems have the correct table structure to facilitate it. For instance, tracking customer address changes over time is often not readily available from the source system and is therefore lost in Power BI.
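By contrast, a warehouse dimension designed for history, such as a Type 2 slowly changing dimension, keeps a row per version of each record. A hypothetical sketch (all names are placeholders):

```sql
CREATE TABLE dbo.DimCustomer (
    CustomerKey int IDENTITY(1, 1) PRIMARY KEY,  -- surrogate key
    CustomerID  int NOT NULL,                    -- business key from the source system
    Address     nvarchar(200) NOT NULL,
    ValidFrom   date NOT NULL,
    ValidTo     date NULL,                       -- NULL marks the current row
    IsCurrent   bit NOT NULL DEFAULT 1
);
```

When a customer’s address changes, the ETL process closes the current row (setting ValidTo and IsCurrent = 0) and inserts a new one, preserving the full history.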

Using Power BI with a data warehouse

Advantages of using Power BI with a DW

The data warehouse can be designed using industry best-practice, to maximize data accuracy, performance, security and availability.

There will be one version of the truth for all data within the data warehouse.

The data warehouse can be re-used outside of Power BI and the immediate reporting environment.

Advanced techniques such as Slowly Changing Dimensions can be used, for instance to track historical customer data.

Once in place, it will be possible to create reports very quickly and easily.

Disadvantages of using Power BI with a DW

Building a data warehouse can take several months. It is a skilled process requiring data engineers and data analysts, with all of the specific skills and tools that they require to do the job. This costs money and time and will require resources such as Microsoft Azure and on-site connectors.

So, do you need a Data Warehouse?

Well, it really depends on your own circumstances.

Sometimes, just using Power BI with Power Query can be sufficient. If you want to create some compelling reports quickly from one source system, together with a few CSV or Excel files, then that may be all you need.

Larger organizations with multiple source systems or more complex reporting requirements would benefit from a properly designed, fast, accurate data warehouse. It will contain well-defined ETL processes, proper data governance, and a data model based, to some degree at least, on Kimball.


1 Jul, 2022

Investigating SQL Server performance issues

Had an interesting morning troubleshooting SQL Server performance issues for a client.

They had SSMS version 12, so we used the ‘Activity Monitor’ tool. To access the tool, right click on the server name in SQL Server Management Studio and you’ll see ‘Activity Monitor’.

Looking at the long-running queries tab for about 15 minutes, we identified five queries which were each taking between 40 and 50 seconds.

SQL Server Activity Monitor

On further inspection, some of the queries were joining tables of 2.5 million rows. We therefore added some clustered indexes to try to speed up the queries. This reduced the query time to 15 seconds, but in the end we decided to ask the software vendor to rewrite the query, or break it into several queries, to improve performance further.

17 Mar, 2014

Script to get row counts for tables in a database


This is a great little piece of SQL which does something really useful. It displays the tables in a database together with the row counts for each, in descending order of size. This is useful if you are performing some database maintenance and want to know which tables are using the most space.

Note: is_ms_shipped = 0 is used so that system tables are not included.

SELECT objs.name,
       part_stats.row_count
FROM sys.indexes AS indxs
INNER JOIN sys.dm_db_partition_stats AS part_stats
    ON indxs.object_id = part_stats.object_id
   AND indxs.index_id = part_stats.index_id
INNER JOIN sys.objects AS objs
    ON indxs.object_id = objs.object_id
WHERE indxs.index_id < 2
  AND objs.is_ms_shipped = 0
ORDER BY part_stats.row_count DESC;