1 Oct, 2024

Seven Reasons Why SSRS is Still a Good Reporting Tool


I was surprised (and pleased) to recently visit two client sites and discover that both were still heavy users of SSRS. Despite the rise of newer reporting tools, it’s clear that SSRS continues to be a go-to solution for many businesses. Its flexibility, reliability, and integration with other systems make it a strong contender, even in 2024. Here are seven reasons why SSRS remains a valuable tool for reporting.

1. SSRS remains highly customizable, allowing detailed control over report design. You can even create specialized documents like letters, forms, and invoices with ease, tailoring them to specific business needs.

2. It’s widely adopted and integrates seamlessly with other Microsoft products, ensuring continuity.

3. Its scalability supports everything from small businesses to enterprise-level reporting, making it versatile across industries.

4. SSRS provides built-in security features, including integration with Active Directory (AD), ensuring secure authentication, data confidentiality, and compliance with corporate and regulatory standards.

5. Despite newer tools, SSRS offers a reliable, server-based reporting environment with the ability to automate complex data delivery workflows across various platforms.

6. It’s free to use with SQL Server, making it a cost-effective option for businesses of any size.

7. Reports can easily be automated and distributed via email, streamlining communication and reducing manual reporting efforts.

4 Sep, 2024

Important Update for Power BI Users


Microsoft is phasing out Power BI Premium per capacity SKUs (P-SKUs) as it shifts towards Microsoft Fabric. Beginning January 1, 2025, customers currently using P-SKUs will need to transition to Fabric capacity, which offers greater flexibility, pay-as-you-go options, and exclusive access to Azure features.


Here are the key points to consider:

  • No immediate changes are required until your next renewal period.
  • Fabric capacity offers enhanced benefits, including smaller compute SKUs and MACC compatibility.
  • Power BI Pro and PPU services will not be impacted.

Get ready for 2025 – explore more about this transition and how it will affect your Power BI experience! See the link below for more information.

Update coming to Power BI Premium licensing

18 Aug, 2024

Basic Index Maintenance in SQL Server

Indexes are crucial in SQL Server for optimizing query performance. They work like a book’s index, enabling SQL Server to locate data quickly without scanning the entire table. However, indexes can become fragmented or outdated over time, leading to performance degradation. This article outlines basic index maintenance tasks in SQL Server to ensure your indexes remain efficient and effective.

Understanding Index Fragmentation

Before diving into index maintenance, it’s important to understand fragmentation. Fragmentation occurs when the logical order of pages in an index does not match the physical order. This misalignment can slow down query performance because SQL Server needs to read more pages than necessary.

There are two main types of fragmentation:

  1. Internal Fragmentation: Pages contain unused space, typically caused by page splits, where SQL Server divides a full page into two to accommodate newly inserted data.
  2. External Fragmentation: The logical order of index pages does not match their physical order on disk.

Checking Index Fragmentation

Here’s a query to check the fragmentation:
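A typical version of that query uses the sys.dm_db_index_physical_stats DMV joined to sys.indexes (a sketch; the LIMITED scan mode and the heap filter are common choices, not requirements):

```sql
-- Report fragmentation for all indexes in the current database
SELECT
    OBJECT_NAME(ips.object_id)          AS TableName,
    i.name                              AS IndexName,
    ips.avg_fragmentation_in_percent,
    ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON ips.object_id = i.object_id
   AND ips.index_id  = i.index_id
WHERE ips.index_id > 0                  -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;
```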

This query returns the average fragmentation percentage of each index in the database. High fragmentation levels (generally above 30%) indicate that the index may need maintenance.

Reorganizing Indexes

Index Reorganization is a process that defragments an index by reordering the leaf-level pages to match the logical order. It’s a light, online operation that can be performed while the database is online and available to users. For more details, you can refer to Microsoft’s documentation on ALTER INDEX…REORGANIZE.

To reorganize an index, use the following command:
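For example (the index and table names here are placeholders):

```sql
-- Reorganize a single index, online by design
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;
```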

Or to reorganize all indexes on a table:
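Again with placeholder names:

```sql
-- Reorganize every index on the table in one statement
ALTER INDEX ALL ON dbo.Orders REORGANIZE;
```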

Reorganizing is recommended when fragmentation is between 5% and 30%.

Rebuilding Indexes

Index Rebuilding is a more intensive operation that drops and recreates the index. It removes fragmentation by rebuilding the entire index structure from scratch. This operation can be performed online or offline, but it requires more resources than reorganization. More information on this can be found in the Microsoft documentation on ALTER INDEX…REBUILD.

To rebuild an index, use the following command:
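For example (names are placeholders; the ONLINE option requires an edition that supports online index operations, such as Enterprise):

```sql
-- Rebuild a single index from scratch
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;

-- Or, where supported, keep the table available during the rebuild:
-- ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD WITH (ONLINE = ON);
```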

Or to rebuild all indexes on a table:
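With placeholder names:

```sql
-- Rebuild every index on the table
ALTER INDEX ALL ON dbo.Orders REBUILD;
```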

Rebuilding indexes is typically recommended when fragmentation exceeds 30%.

Automating Index Maintenance

Regular index maintenance is crucial, and automating this process can save time and ensure consistent performance. You can use SQL Server Agent to schedule regular index maintenance tasks.

Here’s a simple SQL script that can be scheduled:
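One way to write such a script is a cursor over the fragmentation DMV that builds and executes the appropriate ALTER INDEX statement (a sketch: the 5%/30% thresholds follow the guidance above, and the page_count filter that skips very small indexes is a common convention, not a requirement):

```sql
-- For each fragmented index: REORGANIZE between 5% and 30%, REBUILD above 30%
DECLARE @TableName sysname, @IndexName sysname, @Frag float, @Sql nvarchar(max);

DECLARE frag_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT QUOTENAME(s.name) + N'.' + QUOTENAME(o.name),
           QUOTENAME(i.name),
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.objects  AS o ON ips.object_id = o.object_id
    JOIN sys.schemas  AS s ON o.schema_id   = s.schema_id
    JOIN sys.indexes  AS i ON ips.object_id = i.object_id
                          AND ips.index_id  = i.index_id
    WHERE ips.index_id > 0                         -- skip heaps
      AND ips.page_count > 100                     -- ignore tiny indexes
      AND ips.avg_fragmentation_in_percent > 5;

OPEN frag_cur;
FETCH NEXT FROM frag_cur INTO @TableName, @IndexName, @Frag;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @Sql = N'ALTER INDEX ' + @IndexName + N' ON ' + @TableName
             + CASE WHEN @Frag > 30 THEN N' REBUILD;' ELSE N' REORGANIZE;' END;
    EXEC sp_executesql @Sql;
    FETCH NEXT FROM frag_cur INTO @TableName, @IndexName, @Frag;
END

CLOSE frag_cur;
DEALLOCATE frag_cur;
```

Scheduled via a SQL Server Agent job, this keeps maintenance consistent without manual intervention.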

This script will check the fragmentation level of each index and decide whether to reorganize or rebuild based on the fragmentation percentage.

Conclusion

Proper index maintenance is essential for optimal database performance. Regularly check your indexes for fragmentation and use the appropriate maintenance strategy—reorganize or rebuild—based on the level of fragmentation. Automating these tasks ensures your SQL Server database continues to perform efficiently without manual intervention. For more advanced techniques and options, you can refer to Microsoft’s documentation on index maintenance.

Do you need an external pair of eyes to review your SQL Server architecture?

Maybe your platform’s development and growth have been organic rather than following a full enterprise architecture design. You have a good dev team but no DBA input, so improvements are driven by real-world failures rather than being gamed out at the dev/QA stage.

I can offer a full platform evaluation, covering questions such as:

• What is your usage (potential required IOPS, memory, CPU, and space requirements)?

• Real-world behaviour and usage patterns

• Routine maintenance review

• Backups / failover review

• UAT environment review

• Monitoring / metrics review

Some of the questions I can investigate:

• Are you wrongly sized?

• Are you on the right platform?

• Are you making best use of what you have?

• What is the best model for your business?

• What level of DBA input do you need?

I can provide all of the above via my two-day “Health Check”. Click here for more info

12 Aug, 2024

Building a Data Warehouse

Why use us for Data Warehousing and Data Engineering?

We have implemented many data warehouses in SQL Server, using SSIS and SQL stored procedures for ETL / ELT.

We’re highly experienced and proficient in T-SQL, stored procedures, triggers, SQL troubleshooting, and using SSIS to create fast, accurate and secure data pipelines.

More recently, we have been involved in Microsoft Power BI and DAX to create compelling reports and dashboards for all sizes of business. As part of this, we have used SSRS to create paginated detail reports.

Other roles include presenting, providing training, and passing on SQL Server / Power BI and DAX skills to other team members.

Our Data Warehousing Experience

For ETL/ELT, we have written many, many SSIS packages and SQL scripts, covering advanced data transformations, database change tracking, slowly changing dimensions (SCDs), custom scripts in VB and C#, connections to disparate OLE DB and ODBC data sources, error tracking, and deployment.

What are the Challenges when designing and implementing a Data Warehouse?

Designing a data warehouse requires a solid understanding of the organization’s data requirements, as well as the data sources that are available. The following are the key considerations in designing a data warehouse:

Data Sources
Determine the types of data sources that need to be included in the data warehouse, including transactional systems, legacy systems, and cloud-based systems. The data sources need to be integrated into the data warehouse to ensure that the data is consistent, accurate, and up-to-date.

Data Modeling
Define the data structure for the data warehouse, including the relationships between tables, data elements, and attributes. The data model should also take into account the organization’s data requirements, including the types of reports and analysis that will be performed.

ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform)
Develop an ETL or ELT process to extract data from the sources, transform the data into a format that is suitable for the data warehouse, and load the data into the data warehouse. This process should be automated and regularly scheduled to ensure that the data in the data warehouse is up-to-date.
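As a small illustration of the regularly scheduled load step described above, an incremental (watermark-based) load in T-SQL might look like this (all table and column names are hypothetical):

```sql
-- Hypothetical incremental load: copy only rows changed since the last run,
-- then advance the watermark so the next run picks up where this one stopped.
DECLARE @LastLoad datetime2 =
    (SELECT MAX(WatermarkValue)
     FROM dw.EtlWatermark
     WHERE SourceTable = 'Sales');

INSERT INTO dw.FactSales (SaleId, CustomerId, Amount, SaleDate)
SELECT s.SaleId, s.CustomerId, s.Amount, s.SaleDate
FROM staging.Sales AS s
WHERE s.ModifiedDate > @LastLoad;

UPDATE dw.EtlWatermark
SET WatermarkValue = SYSUTCDATETIME()
WHERE SourceTable = 'Sales';
```

In practice a step like this would be wrapped in an SSIS package or stored procedure and scheduled, with error handling around the insert.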

Data Quality
Ensure the data quality of the data in the data warehouse by implementing data validation and data cleansing rules. This helps to ensure that the data is accurate and consistent, and minimizes the risk of incorrect data being used for analysis and reporting.

Performance
Ensure that the data warehouse is optimized for performance, including the indexing of data, the use of summary tables, and the use of materialized views. This helps to ensure that the data can be retrieved quickly and efficiently for analysis and reporting.

Security
Implement security measures to ensure that the data in the data warehouse is protected from unauthorized access, modification, and deletion. This includes access controls, data encryption, and audit trails.

Scalability
Ensure that the data warehouse can scale to accommodate growing data volumes and increased demand for analysis and reporting. This includes the use of distributed systems, data partitioning, and load balancing.

Maintenance
Develop a maintenance plan for the data warehouse, including regular backups, data archiving, and disaster recovery planning. This helps to ensure that the data warehouse is available and reliable for analysis and reporting.

23 Jul, 2024

Why the SQL TOP Statement Does Not Work in Oracle and What to Use Instead


Structured Query Language (SQL) is a fundamental tool for managing and querying relational databases. However, different database management systems (DBMS) have unique implementations and syntax variations. One such difference is the use of the TOP clause, which is prevalent in SQL Server but absent in Oracle Database. This article will explain why the TOP statement is not supported in Oracle and what alternatives can be used to achieve similar functionality.

The SQL TOP Statement: An Overview
In SQL Server, the TOP clause is used to limit the number of rows returned by a query. This is particularly useful for handling large datasets when only a subset of data is needed. The basic syntax for the TOP clause in SQL Server is:

SELECT TOP n * FROM table_name;

This command retrieves the first n rows from the specified table. The TOP clause is straightforward and convenient in SQL Server. However, this specific syntax is not recognized by Oracle Database.

Why Oracle Does Not Support the SQL TOP Statement
Oracle Database follows a different set of SQL standards and conventions. There are several reasons for the absence of the TOP clause in Oracle:


Adherence to SQL Standards: Oracle adheres closely to ANSI and ISO SQL standards, which do not include the TOP clause. The TOP clause is a proprietary feature introduced by Microsoft in SQL Server, and Oracle has chosen not to adopt this non-standard extension.

Alternative Mechanisms: Oracle offers other methods to achieve similar results as the TOP clause. These methods include the use of the ROWNUM pseudo-column, the FETCH FIRST clause (introduced in Oracle 12c), and the ROW_NUMBER() analytic function, each providing more standardized and flexible ways to limit query results.

Alternatives to the TOP Clause in Oracle
Using ROWNUM

The ROWNUM pseudo-column is an effective way to limit the number of rows returned by a query in Oracle. It assigns a unique number to each row in the result set, starting with 1. The syntax for using ROWNUM is:

SELECT * FROM table_name WHERE ROWNUM <= n;

This query retrieves the first n rows from the table. However, it is important to note that ROWNUM is assigned before the ORDER BY clause is applied, which can affect the result if sorting is needed.
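To see why this matters, compare the two forms below: the usual remedy is to sort in a subquery first and only then apply ROWNUM (the table and column names here are hypothetical):

```sql
-- Wrong: ROWNUM filters rows BEFORE the ORDER BY runs, so this returns
-- an arbitrary 5 rows and then sorts just those 5.
-- SELECT * FROM employees WHERE ROWNUM <= 5 ORDER BY salary DESC;

-- Correct: sort first in a subquery, then filter with ROWNUM
SELECT *
FROM (SELECT * FROM employees ORDER BY salary DESC)
WHERE ROWNUM <= 5;
```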

Using FETCH FIRST (Oracle 12c and Later)

Oracle 12c introduced the FETCH FIRST clause, which provides a more intuitive and standard-compliant way to limit rows. The syntax is:

SELECT * FROM table_name ORDER BY column_name FETCH FIRST n ROWS ONLY;

This query returns the first n rows based on the specified order. The FETCH FIRST clause is straightforward and aligns with SQL standards, making it a preferred method for limiting rows in newer Oracle versions.

Using ROW_NUMBER() Analytic Function

The ROW_NUMBER() analytic function is another powerful tool for limiting rows in Oracle. It assigns a unique number to each row within a result set, allowing for precise control over row selection. The syntax is:

SELECT *
FROM (
    SELECT column_name,
           ROW_NUMBER() OVER (ORDER BY column_name) AS rn
    FROM table_name
)
WHERE rn <= n;

This query retrieves the first n rows after ordering the results by the specified column. The ROW_NUMBER() function offers great flexibility, especially for complex queries involving multiple columns and partitions.

Conclusion – Why the TOP statement is not supported in Oracle
The absence of the TOP clause in Oracle is due to its commitment to SQL standards and its design philosophy. However, Oracle provides robust alternatives to achieve similar functionality. The ROWNUM pseudo-column, the FETCH FIRST clause, and the ROW_NUMBER() analytic function each offer unique advantages, allowing users to efficiently limit the number of rows returned by their queries. By understanding and utilizing these alternatives, Oracle users can effectively manage large datasets and tailor their queries to meet specific needs.