Performance problem upgrading to Server 2012

When one of our customers recently upgraded a file server from Windows Server 2008 to Server 2012, their customers complained of significantly increased end-of-day processing times—up to three times slower on Server 2012 than on the previous system.

The system used Terminal Services to allow remote administration and to run tasks such as day-end processing; there were no general interactive users. Running the same software on a Windows 8 client system resulted in better performance, all other things (disk, network, CPU, etc.) being equal, and a brand new Server 2012 R2 system with just the basic GUI role performed twice as fast as the Server 2008 system. As roles were added, however, performance dropped, and the customer traced the slowdown to the RDS role.

Since Server 2008 R2 was introduced, RDS (Remote Desktop Services, formerly known as Terminal Services) has included a feature called Fairshare CPU Scheduling. With this feature, if more than one user is logged on to a system, processing time for a given user is limited based on the number of sessions and their loads (see Remote Desktop Session Host on Microsoft TechNet). The feature is enabled by default and can cause performance problems when more than one user is logged on. With Server 2012, two more "fairshare" options were added: Disk Fairshare and Network Fairshare (see What's New in Remote Desktop Services in Windows Server 2012 on TechNet). These are also enabled by default, and they can come into play even when only one user is logged on. They proved to be the cause of our customer's slowdown: they limited I/O for the single logged-on user (or scheduled task), and the day-end processing was heavily I/O bound. We were able to remove this bottleneck by either disabling the RDS role or turning off the fairshare options.

In summary, if a system is used for file sharing services only (no interactive users), use remote administrative mode and disable the RDS role. If the RDS role must be enabled, consider turning off Disk Fairshare if the server runs disk-intensive software (such as database or other I/O-intensive programs), and turn off Network Fairshare if the server hosts services (such as Microsoft SQL Server or xfServer) whose client access would otherwise be throttled. For information on turning off Disk Fairshare and Network Fairshare, see the Win32_TerminalServiceSetting class on Microsoft Developer Network (MSDN).
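If you would rather script the change than go through WMI, the fairshare switches are commonly described as EnableFairShare registry values under the TSFairShare service key. The following is a minimal sketch under that assumption; the key paths and value name are not confirmed here, so verify them against the Microsoft documentation for your server version before relying on them, and test on a non-production system first.

# Minimal sketch (assumed key locations): disable RDS Disk and Network Fairshare by
# setting EnableFairShare to 0 under the TSFairShare service key. Requires
# administrative privileges; a reboot may be needed for the change to take effect.
import winreg

FAIRSHARE_SUBKEYS = [
    r"SYSTEM\CurrentControlSet\Services\TSFairShare\Disk",   # Disk Fairshare (assumed path)
    r"SYSTEM\CurrentControlSet\Services\TSFairShare\NetFS",  # Network Fairshare (assumed path)
]

for subkey in FAIRSHARE_SUBKEYS:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "EnableFairShare", 0, winreg.REG_DWORD, 0)  # 0 = disabled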

More articles related to this issue:

· Resource Sharing in Windows Remote Desktop Services
· Roles, Role Services, and Features

Posted on February 28, 2014 at 12:06 pm

Are you ready for the demise of Windows XP?

As Microsoft has stated and many other companies have echoed (see links below), the end of life for Windows XP is April 8, 2014. Yep, in just 7 months, Microsoft, antivirus vendors, and many software vendors (including Synergex) will no longer provide support or security patches for Windows XP and its version of Internet Explorer. The end of life for Windows Server 2003 will follow soon after.

Why is this so important? And, if what you’re using isn’t broken, why fix it?

Let's consider, for example, a doctor's, dentist's, or optician's office running Windows XP and almost certainly Internet-connected – in fact, probably using an Internet-based application. All it takes is an infected web site, a Google search gone astray, or a mistyped URL, and the PC is infected – putting all of the office's confidential medical data at risk. Plus, most offices allow their workers to browse the Internet at some point in the day – to catch up on e-mail and IM, conduct searches, surf eBay, etc. If the office is still running XP after April 2014, it is almost certain to be open to infection by a rootkit or other malicious software, because malware authors will take advantage of every vulnerability Microsoft no longer fixes. Add the fact that the antivirus vendors will also reduce or eliminate support, and you have a mass botnet-style infection waiting to happen. Once a system gets a rootkit, it's nigh on impossible to remove without wiping the machine clean. To further complicate things, it usually takes a boot-time scan from a different device to detect many of these infections.

Further, while Windows XP had an impressive 13-year run, it is far less secure than later operating systems, and in many cases the hardware it runs on is also at the end of its life.

If you or your customers are running Windows XP or Server 2003, it's time to upgrade to a more modern, more secure operating system like Windows 7. At least then you can rest assured that Microsoft's monthly Patch Tuesday security fixes, in conjunction with your antivirus vendor's updates, will help protect sensitive information and keep the business running.

https://community.mcafee.com/community/security/blog/2013/06/07/xp-end-of-life–08-april-2014

 http://blogs.windows.com/windows/b/springboard/archive/2013/04/08/365-days-remaining-until-xp-end-of-support-the-countdown-begins.aspx

http://blogs.technet.com/b/mspfe/archive/2013/04/29/windows-server-2003-rapidly-approaches-end-of-life-watch-out-for-performance-bottlenecks.aspx

http://www.microsoft.com/en-us/windows/endofsupport.aspx

 

 

Posted on August 28, 2013 at 10:28 am

Live from Bell Harbor

Today I’m excited to be blogging from the TV studio at the Bell Harbor Convention Center in Seattle for the live Visual Studio 2012 launch.

Since the Build conference last September, Synergex has been working closely with the Microsoft development teams to ensure that Synergy/DE works seamlessly with the exciting new Microsoft technologies being released this fall–Visual Studio 2012 and Windows 8–and to support Synergy for Windows Store applications on both ARM and Intel processors. Our team has made several visits up to Redmond to work directly with Microsoft engineers to enhance Visual Studio, Windows 8, and Synergy/DE.

You can download Synergy/DE 10.0.3 today to start using the latest Visual Studio 2012 features, including the new async and await functionality demonstrated by Microsoft at the Visual Studio launch event.

I'm also incredibly pleased to talk about our new KitaroDB NoSQL database for Windows Store applications, which we are releasing today. Built on our solid, high-performance Synergy DBMS product, KitaroDB is the first on-disk NoSQL database that runs in the Windows 8 sandbox on x86, x64, and ARM processors.

We have a Netflix sample application that uses KitaroDB, and in the next few weeks we will be launching a great new Windows Store application that takes advantage of KitaroDB for its local persistent storage.

See www.kitarodb.com for more details.

 

Posted on September 12, 2012 at 11:43 am

Synergy/DE to sim-ship with Visual Studio 11 Beta

We have been working hard at Synergex since Microsoft's BUILD conference last fall to ensure that Synergy works well with the soon-to-be-released Visual Studio 11 and .NET Framework 4.5 beta.

I am pleased to tell you that we will sim-ship with Microsoft on their announced February 29 beta release date, with a released version of Synergy/DE 9.5.3a so that those of you who are interested can test-drive the new beta. Synergex customers can expect to see performance improvements in editing – especially when using large Synergy Language source files – among other improvements.

You can find more information on these Microsoft blogs:

Visual Studio Blog
Jason Zander's Weblog

Posted on February 27, 2012 at 10:43 am

Mapped drives and Synergy

Some recent posts on our synergy-l listserv made me realize there are still some misperceptions about Microsoft network shares (mapped drives), so I thought I would address those here.

Microsoft designed network shares for single-user access to shared resources such as Word documents. The locking and caching algorithms they use assume that a local cache is desirable, since multiple users are unlikely to do concurrent updates (though the algorithms try to cope with this situation). Of course, the use of network shares has grown well beyond single-user systems. Many of our customers have used them and are still using them, and many of these customers have unfortunately encountered performance and file corruption issues. Most of these issues are associated with concurrent updates and cache flushing, and using mapped drives (as opposed to UNC paths) seems to exacerbate the problem. With older Windows versions–prior to Vista and Server 2008–you can mitigate file corruption issues by disabling oplocks on the server (which disables the local caching). (Syncksys, a utility we used to check settings on these older Windows systems, always checked for this.) Unfortunately, you can't disable oplocks with SMB2 redirectors on the newer Windows systems.
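On those older systems, the server-side switch is the EnableOplocks value under the LanmanServer parameters key. Here is a minimal sketch, assuming that value name and location; it affects SMB1 only, requires administrative privileges, and should be checked against the Microsoft documentation for your Windows version.

# Minimal sketch (SMB1-era setting): disable opportunistic locking on an older file
# server (pre-Vista/Server 2008). This has no effect on SMB2 and later redirectors.
import winreg

LANMAN_PARAMETERS = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, LANMAN_PARAMETERS, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "EnableOplocks", 0, winreg.REG_DWORD, 0)  # 0 = oplocks disabled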

Because of the number of issues our customers have encountered, Synergex can provide only very limited support for Synergy database access through network shares. (See the Synergex KnowledgeBase article listed below.) We have traced these problems to errors in the Microsoft SMB (mrxsmb10.sys, mrxsmb20.sys, mrxsmb.sys) and redirector (rdbss.sys, srv.sys) subsystems. We find that these problems get worse with network shares over a WAN and with multi-user access. It is fair to say that Microsoft has fixed many problems with Windows XP and Server 2003 over the years, but the problems have resurfaced with newer Windows operating systems. And now many organizations are using Windows Vista and Windows 7 machines as clients (alongside their Windows XP clients) to Server 2003 or Server 2008 servers, introducing mapped drive subsystems that have regressed in functionality and performance.

Here is a great link to a Microsoft article listing the latest hotfixes available for the network share subsystem: http://support.microsoft.com/kb/2473205 (for Windows Vista onwards)

http://blogs.technet.com/b/yongrhee/archive/2011/06/12/list-of-post-sp3-related-hotfixes-for-windows-xp-sp3.aspx (Windows XP)

Synergex has published the following KnowledgeBase article on the subject of network shares: https://resourcecenter.synergex.com/devres/kb-details.aspx?id=1811

This same information (from the KB article) was also published as an article in the 26 August 2010 edition of Synergy-E-News: http://www.synergex.com/ecards/20100826.html#mapped

We recently (and mistakenly) used a mapped drive internally for a project, and the ISAM files for the project were continually corrupted. (We have logged a premier support call with Microsoft for this issue.) We switched to the recommended solution, xfServer, which not only eliminated the corruption but also improved performance.

File corruption issues aside, in most cases xfServer will significantly outperform a mapped drive in commercial situations with multi-user access when it is set up correctly: use SCSCOMPR and SCSPREFETCH, and open files for input when just reading data sequentially. The known cases where xfServer is slower than a mapped drive are when a file is opened for update by a single user or with exclusive access, or for output when the file is not large and/or the records are small (or ISAM compression makes them small), allowing the redirector to cache the data blocks locally. If oplocks are turned off on Server 2003, as recommended, this caching is disabled and performance degrades, though reliability increases. We are investigating an xfServer performance improvement that would provide comparable or better performance than a mapped drive in additional scenarios by allowing users to enable caching of stores and writes for a file opened for output, append, or exclusive access.
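As an illustration of the setup referred to above: SCSCOMPR and SCSPREFETCH are environment variables read by the xfServer client, so one way to enable them for a particular run is to define them in the environment before starting the Synergy program. The sketch below is an illustration only; the runtime command and program name are placeholders, and you should check the Synergy/DE documentation for the exact values these variables expect.

# Illustration only: launch a Synergy program with xfServer compression and prefetch
# enabled by defining SCSCOMPR and SCSPREFETCH in the child process environment.
# The runtime command and program name below are placeholders.
import os
import subprocess

env = dict(os.environ)
env["SCSCOMPR"] = "1"     # enable record compression on the wire (value convention assumed)
env["SCSPREFETCH"] = "1"  # enable read-ahead for sequential input (value convention assumed)

subprocess.run(["dbr", "dayend.dbr"], env=env, check=True)  # placeholder program name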

We have provided a test utility in CodeExchange, IsamBenchMark, to help you test performance with your own physical files and network.  Using this utility, we saw the results described below.

The following tests were run on a Windows 7 machine connected to another Windows 7 machine over a 1 Gbit network. Each machine had 6 GB of memory and a 3 GHz Core 2 CPU. The Windows operating system cache on the server machine was flushed using the Sysinternals CacheSet utility before each run. Neither machine had an antivirus program running or any other software that accessed the network, and both machines were connected to the same physical switch.

Two files were used: one with eight keys and 128-byte records and the other with eight keys and 512-byte records. Each file was filled with 100,000 records during the test, and the files were created without ISAM file compression. We used Task Manager’s networking tab to generate the diagrams below (though we’ve added red vertical lines in one diagram).
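If you want a rough feel for your own network before running the real utility, the shape of the test is simple: time a store phase, a random read phase, and a sequential read phase against the same file on a local disk, a mapped drive, and a UNC path. The sketch below is only an illustration of such a harness, not the IsamBenchMark utility itself; it uses a flat fixed-length record file rather than a keyed ISAM file, and the path is a placeholder.

# Illustrative harness only (not IsamBenchMark): time store, random read, and
# sequential read phases against a fixed-length record file. Point RECORD_FILE at a
# local path, a mapped drive, or a UNC path to compare them.
import random
import time

RECORD_FILE = r"\\server\share\bench.dat"  # placeholder path
RECORD_SIZE = 512
RECORD_COUNT = 100_000

def timed(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.2f} s")

def store():
    with open(RECORD_FILE, "wb") as f:
        for i in range(RECORD_COUNT):
            f.write(str(i).encode().ljust(RECORD_SIZE, b"."))

def random_read():
    with open(RECORD_FILE, "rb") as f:
        for _ in range(RECORD_COUNT):
            f.seek(random.randrange(RECORD_COUNT) * RECORD_SIZE)
            f.read(RECORD_SIZE)

def sequential_read():
    with open(RECORD_FILE, "rb") as f:
        while f.read(RECORD_SIZE):
            pass

timed("store", store)
timed("random read", random_read)
timed("sequential read", sequential_read)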

Diagram 1 shows network overhead for our first test, which used xfServer to access the file with 512-byte records. The file was first accessed without compression and then with compression (i.e., with SCSCOMPR set).

[Diagram 1: network utilization for xfServer access to the 512-byte-record file, first without and then with SCSCOMPR]

Using 4-5% of a gigabit link is equivalent to using 40-50% of a 100 Mbit Ethernet link, so the performance gained by setting compression for xfServer would be even greater on a slower Ethernet or a WAN.

Diagram 2 shows network overhead for our second test, which accessed the 512-byte-record file in three ways: via a mapped drive, via xfServer without compression, and via xfServer with compression.

For each method, the test program stored 100,000 records, re-opened the file and performed 100,000 random reads, then re-opened it again and performed 100,000 sequential reads. Note the change in scale from the previous diagram, and note that the overhead of the stores is high (with the flush-on-close peaking on the mapped drive). The next segment is the random reads, and the final peak is the sequential reads. Also notice that the SCSCOMPR and SCSPREFETCH setup makes the sequential reads almost as fast as local disk access and far faster than the mapped drive.

[Diagram 2: network utilization for the store, random read, and sequential read phases over a mapped drive, xfServer without compression, and xfServer with compression]

If this test had been run over a slower network, the stores would have been slower on a mapped drive than with xfServer, and the SCSCOMPR form would have outperformed all other methods, given records with some degree of compressibility. If you have an ISAM file with ISAM compression, isutl -v can give you an idea of how much data compression can help with xfServer.

Diagram 3 shows network overhead when accessing the file with 128-byte records in the same three ways (mapped drive, xfServer without compression, and then xfServer with compression) and with the same program.

[Diagram 3: network utilization for the same three access methods against the 128-byte-record file]

Diagram 4 shows the network overhead for… Well, there is no diagram 4. Our final test used two systems, each with the remote 512-byte-record file open on the network share: one simply had the file open, and the other ran the test. This setup used 50 MB of bandwidth constantly for several minutes, so the diagram would run off the page and show only a pegged green line. Instead, here's a table that summarizes our findings. The last row documents the results for the multi-user mapped drive setup and illustrates how much worse things get when using a mapped drive in a multi-user environment: bandwidth is quickly overloaded, and it becomes difficult and time-consuming for the network to accommodate large groups of packets. As pointed out earlier, these tests used a physical switch; xfServer becomes even more important when the network involves a hub.

 

                           Store (seconds)   Random read (seconds)   Reads (seconds)
Mapped drive, 1 user       21                7                       4
xfServer, no SCSCOMPR      24                24                      2
xfServer, SCSCOMPR         21                20                      0.90
Mapped drive, 2 users      882               193                     32

One final thing to be gleaned from this is that programs that transfer large amounts of data across a network, such as month-end or day-end processing or reports, can quickly overload that network. Do as much work as you can on the server, whether by using xfServerPlus, by using the new Synergy Language Select class, or by running the program on the server itself.

If your xfServer system is not performing in line with the results described above, I encourage you to contact our Developer Support team so we can assist you in optimizing your system. For questions or more information on this topic, contact Synergy/DE Developer Support or your account manager.

On this topic, a customer recently reported an ELOAD/system error 64, "The specified network name is no longer available." When they changed the UNC path to use the machine's IP address rather than its name, the problem appeared to be resolved. This suggests that slow or failing DNS lookups were causing the error: the failures appeared to disconnect client machines on the network, and the redirector's recovery mechanism after such a failure also depends on DNS working. Compounding the problem, Windows caches negative (failed) DNS lookups, so one failure can cause further issues.
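A quick way to separate name resolution problems from basic connectivity problems is to time a lookup of the server's name and a direct TCP connection to the file sharing port by IP address. A minimal diagnostic sketch, with placeholder names:

# Minimal diagnostic sketch (placeholder names): compare DNS resolution time for the
# file server's name with a direct TCP connection to the SMB port (445) by IP address.
import socket
import time

SERVER_NAME = "fileserver"   # placeholder
SERVER_IP = "192.168.1.10"   # placeholder

start = time.perf_counter()
try:
    address = socket.gethostbyname(SERVER_NAME)
    print(f"DNS lookup of {SERVER_NAME} -> {address} in {time.perf_counter() - start:.3f} s")
except socket.gaierror as exc:
    print(f"DNS lookup of {SERVER_NAME} failed after {time.perf_counter() - start:.3f} s: {exc}")

start = time.perf_counter()
with socket.create_connection((SERVER_IP, 445), timeout=5):
    print(f"TCP connect to {SERVER_IP}:445 in {time.perf_counter() - start:.3f} s")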

 

Posted on January 31, 2011 at 4:27 am

Preventing performance issues related to antivirus software

We get quite a number of support calls about performance or system-down issues related to installed security suites, mostly involving antivirus software. In most cases the culprit ends up being incorrect configuration of the antivirus software.

Let’s first consider what antivirus software has to do and how it ships by default.

In today's cat-and-mouse game, the security software vendors are trying to keep up with all of the malware generators that pop up daily. A typical antivirus signature file contains over 80 MB of compressed signatures, and the major players, like Trend Micro, McAfee, Symantec, VIPRE, and Kaspersky, provide multiple signature updates daily. The problem then is deciding what to scan and when to scan—you obviously don't want to miss an infected file that's downloaded between updates to the scan databases, but you also don't want to bog down your system unnecessarily. By default, most security products scan all files once daily and use real-time scanning to scan infectable files on both read and write. Some even default to continuously scanning all files. Though each vendor has different terminology for "scan on read" and "scan on write" (in fact, some confuse read with write and write with read), "scan on read" effectively means scan every time a file is opened, and "scan on write" effectively means scan only when a file opened for write is closed. Some vendors even have a flag to scan all files on close. And some products, like VIPRE, have no concept of scanning on write only.

Now that we know how these products handle file access, let’s consider some scenarios on live systems.

Scenario 1 – When “scan all files” is set

In this scenario, every file may be scanned for a virus on open and on close, regardless of writability. Consider scanning a .vhd file for a virtual image, or a Synergy DBMS file, every time a user opens or closes it. (Both file types are usually opened for write.) The same would even apply to every file accessed in your SQL Server and Oracle databases, and to all of your Synergy .dbr and .elb files. The implications for your system performance are obvious.

Scenario 2 – Scan only infectable files

In this scenario, infectable files may be scanned on open and on close. By default in most vendors' products, this includes Synergy .ism files as well as .vhd files. This scenario also has a significant impact on system performance because of the overhead of scanning large files.

Scenario 3 – Scan only infectable files on Write

In this case, .exe and .dll files are scanned only when updated, but a .vhd file and a Synergy .ism file would also be scanned on close because they are usually opened for write. This technique might be good for a general-purpose file server hosting Word documents, for example, but not for a data server.

As you can see, without some degree of tuning, virus scanning products can have disastrous effects on system performance.  (You can use the Sysinternals Process Monitor to see the overhead your virus scanning tool is causing.)

For obvious reasons, scanning of files takes place at a high priority in the kernel mode of the operating system. This usually impacts both system time and user processing time. Additionally, many vendors now use the Vista filter manager, and I previously blogged about the performance penalties of such hooking on Vista and Server 2008. Luckily, the overhead is significantly reduced in Server 2008 R2 and Windows 7.

In our recent internal use of Microsoft's SharePoint server, we were seeing dramatic performance problems when installing and uninstalling software, and even when the IIS SharePoint services (which are .NET-based) were loading and jitting. When we correctly disabled the "scan on open for read" options, performance improved significantly. We also tried the VIPRE product, and this improved performance even further – but for a very specific reason. VIPRE, as stated previously, scans all files on open and close, and it gains its performance edge because it recognizes signed, read-only EXE/DLL files and caches them if they have not changed, so a re-scan is not required. This is what gives it a seemingly large performance gain. However, once you throw in files that are not signed, its scanning requires significantly more resources, because you can't disable the "scan on read" functionality (which the product needs, for example, to handle files being moved around by tools such as Diskeeper). Additionally, VIPRE also scans (but does not report issues with) otherwise excluded files, so the overhead is pretty much permanent for unversioned files like Synergy DBMS files.

The key is this: after you have a clean full scan of a system, set scanning to write only, scan only infectable files, and make sure that the file extensions of your databases and VHD files are excluded from scanning. And because VIPRE cannot be limited to scanning on write only, we do not recommend it for use with Synergy/DE installations.

(Of course, I'm providing this information for informational purposes only, and it is up to each company to set its own security policies.)

Posted on August 10, 2009 at 7:19 pm

Preventing the Spread of Security Infections in Places You Might Not Think About

Several weeks ago we had a new Ikon color printer installed. It has a separate Kodak PC running the printer drivers and color matching software. I noticed that it was Internet connected and that software updates were not being applied.

When we contacted the manufacturer, we were told the PC was an embedded XP device and did not need XP SP3 or the security patches. We immediately disabled the Internet connection (embedded XP devices are susceptible to viruses too)—but that's not really good enough. To date the manufacturer still has not authorized XP SP3 or the regular monthly security patches, yet all printed documents go through this machine, and users can go to the console and copy documents from a USB drive or from internal network locations. Once it is infected with a virus, a worm, or even a botnet, we're SOL, because the manufacturer of the device doesn't support installing antivirus software, and any such change would require an engineer to reload the system from scratch.

The problems are not just with Microsoft. Adobe has had to patch its Flash Player and Reader already this year, and another Reader patch is due. How many of us keep the Adobe Reader and Flash players up to date?

Why is this such a big issue? Well, the problem is that these embedded XP systems can get infected. One example is the Conficker worm. In most cases Conficker is benign until it is woken up by its creators. Users don't even know they have it, may not even have Internet access (or may not know that they do), and/or may have been infected internally. The only way to detect these kinds of issues, other than with a virus scanner, is to look at network traffic as the malware "phones home." I think an article from the San Jose Mercury News illustrates the problem well. Even if a patch is available to keep a machine from being infected, what if every patch and/or daily antivirus update required a 90-day approval process?

My recommendation is to work with the manufacturers of all embedded XP devices connected to your network to get XP SP3 and the regular updates applied, and to ensure that Internet Explorer is disabled in such a way that the machine's users cannot re-enable it. Also be sure to keep Adobe Reader, Flash Player, and similar products up to date.

Posted on May 6, 2009 at 10:20 pm

The Vista performance saga – final chapter

In January we finally determined why file I/O on Vista and Server 2008 disks is slower than on Windows 2003. In a previous blog post I stated that

“The performance problem on disks that have been hooked by applications that use the new Vista/Server 2008 filter manager infrastructure can cause CPU overheads of at least 40% on all I/O operations, including cached I/O and locks, reducing throughput.”

So what applications use the new filter manager? Well, UAC uses it on system disks via the uafv.sys file system redirector, and many current antivirus applications use it on every disk where they are set to perform real-time scanning.

In Vista, the initial cost of registering any application to use the filter manager on a volume is high, and it rises even higher for every operation type hooked. The UAC file system redirector ensures that writes to Windows-protected directories like \windows\system32 and \program files\ are redirected to a per-user path that the user does have access to. If you use Yahoo Messenger on a Vista system, you will see this in action, because it always assumes it can write to Program Files. The uafv.sys redirector hooks every file I/O operation on the system disk because it tries to cache these redirected operations so that it never has to create and write the temporary redirected file to disk; however, this is what causes the performance issue on Vista unless file system redirection is turned off by disabling the service (which may cause applications like Yahoo Messenger to fail unless UAC is also turned off).

I had turned uafv.sys off on my Vista system – however, performance traces in Intel's VTune performance analyzer showed that I was still getting performance degradation from the filter manager when running our test suites. It turns out that the latest Trend Micro antivirus engine follows Microsoft's best practices and uses the new filter manager on all disks – so the previous workaround of using a non-system disk did not work on my machine.

In my dialogue with Microsoft, they indicated that they did not expect the data drives of an internal file server to always need antivirus scanning (by this I don't mean a file server in the Word-document sense, but rather a dedicated database server with no Internet access), so the overhead related to the virus scanner would not apply to non-system disks – and even if a virus scanner were installed, it would only be set to scan the system disk in real-time mode.

The good news is that Windows 7 and Server 2008 R2 have significantly improved this situation. Though there is some overhead for the initial attach to the filter manager, additional attaches cause much less overhead, and the overall figure is far better than on Vista. Microsoft will continue to look at this area during the Server 2008 R2 release cycle because of the impact it has when virus scanners use the filter manager and are set to scan all disks on a system in real time.

Posted on March 13, 2009 at 8:52 pm

Microsoft’s ADO.NET Entity Framework

Over the years, Microsoft has provided many different ways to access data–ODBC, DAO, ADO, and ADO.NET (with data sets and data readers). The next data access technology is the Entity Framework, which arrives with the 3.5 SP1 version of ADO.NET. Synergex has provided access to all of these technologies through the baseline ADO.NET 2.0 with its xfODBC driver, and has now developed its own ADO.NET 3.5 provider with the extended capabilities needed to interoperate with the Entity Framework and the Entity designers in Visual Studio 2008 SP1.

Microsoft views the Entity Framework as the future of all of its data access technologies – and products like SQL Server, Office, and the Visual Studio designers have all been upgraded, or are being upgraded, to access databases via the Entity Framework.

Here is how Microsoft describes the ADO.NET Entity Framework:

“Database development with the .NET framework has not changed a lot since its first release. Many of us usually start by designing our database tables and their relationships and then creating classes in our application to emulate them as closely as possible in a set of Business Classes or (false) "Entity" Classes, and then working with them in our ADO.NET code. However, this process has always been an approximation and has involved a lot of groundwork.

This is where the ADO.NET Entity Framework comes in; it allows you to deal with the (true) entities represented in the database in your application code by abstracting the groundwork and maintenance work away from you. A very crude description of the ADO.NET Entity Framework would be that it allows you to deal with database concepts in your code.”

The ADO.NET Entity Framework is designed to enable developers to create data access applications by programming against a conceptual application model instead of programming directly against a relational storage schema. The goal is to decrease the amount of code and maintenance required for data-oriented applications. Entity Framework applications provide the following benefits:

If you are interested in beta testing our new Entity Framework capabilities, please contact Synergy/DE Developer Support.

For more information and a tutorial of the Entity Framework, see these links:

http://msdn.microsoft.com/en-us/library/aa697427(VS.80).aspx

http://www.codeguru.com/csharp/csharp/cs_linq/article.php/c15489/#more

Posted on January 29, 2009 at 4:36 pm

Upcoming “experimental feature” will help you detect use of uninitialized memory

We are continually reviewing customer applications to assist with support and development issues, and in doing so we often come up with ideas to help customers debug problems they may encounter. We use a product from Compuware called DevPartner Studio to help us track down C variable access problems in the Synergy components that sometimes cause instability in the runtime. I like to run customer applications with a special runtime built with DevPartner, which allows us to check boundary conditions while running "real" customer applications. DevPartner enables us to check for use of memory that has already been freed (dangling pointers) and for access to memory before we have written to it (a common cause of symptoms that move around depending on memory layout and time of day).

One recent application we saw was reading memory before writing to it. As we tracked this down, we realized the customer was using stack records and %MEM_PROC memory that had never been written to. In certain cases this would cause random results, and in this particular case it was causing the customer's application to fail when run under the DevPartner tool, because the memory now held a consistent but unexpected value.

As a test, we decided to add some support in Synergy/DE to see if the Synergy runtime could also detect this use of uninitialized memory with minimal overhead when running in debug mode. It turns out that we can do similar checking for assignment statements and "if" tests, and we can differentiate between stack memory and MEM_PROC memory. Using this functionality also enables a developer to break in the debugger after the statement that uses this random memory.

We are considering adding this new debugging functionality to a future release of Synergy/DE. However, so that we can get this useful tool into your hands sooner, we are planning to include it as an “experimental feature” in an upcoming patch.

“Experimental features” are features that are under evaluation. They are for early adopters to use and provide us with feedback on. They will be supported, but they may be modified or even removed in subsequent releases.

So look for this new experimental debugging feature in an upcoming patch and consider trying it out. Like the recent feature we added to detect mismatched global data-section sizes (which can cause runtime crashes), this feature to detect uninitialized memory continues our aim to add debug-time detection of coding errors to assist you in producing more reliable applications.

Posted on December 10, 2008 at 7:05 pm