Tuesday, January 26, 2010
For static content such as CSS, JavaScript, and image files, it is best practice to serve these items from a domain separate from the main content. Even if this secondary domain initially points to the same web farm, it opens up the possibility of serving the content from other servers at a later date.
Other benefits:
- Static content domain can be cookie-free.
- Cache settings can be adjusted specifically for the static content, based on host headers.
- Dedicated hardware could be set up to serve the static content.
In dev and QA environments, of course, it may not be possible to point to an external production domain. The CSS/JS/etc files may need to point to local content during dev/QA. Consequently, any solution should take into account configuration differences across environments.
A few solutions to consider:
1. Instrument IMG and SCRIPT links with a method call that substitutes the domain.
Drawback: an <%= %> block in an ASPX page means the page cannot be declared CompilationMode="Never", which is needed for CMS-generated pages. Would a <%$ %> expression block work instead?
2. Create an HttpModule to comb through the page output and substitute the domain in the proper locations.
3. Implement an .ascx control to output the SCRIPT and CSS references.
Drawback: will not address the issue of inline IMG tags.
4. Use a dynamic #include file with some logic embedded.
Drawback: same as #3. Also, a CMS user could mangle the code directives in the #include. Finally, it is questionable whether this will even work reliably with ASPX.
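A minimal sketch of the first option: a static helper that reads the static-content domain from web.config, so dev and QA can fall back to local content simply by leaving the setting empty. The StaticContent class name and the StaticContentDomain appSettings key are placeholders of my own, not an established API.

```csharp
using System.Configuration;

public static class StaticContent
{
    // Production web.config:
    //   <add key="StaticContentDomain" value="http://static.example.com" />
    // Dev/QA: omit the key (or leave it empty) so paths stay relative to the local site.
    public static string Url(string relativePath)
    {
        string domain = ConfigurationManager.AppSettings["StaticContentDomain"];
        if (string.IsNullOrEmpty(domain))
            return relativePath; // dev/QA: serve from the local site

        return domain.TrimEnd('/') + "/" + relativePath.TrimStart('/');
    }
}
```

Usage in an ASPX page would look like `<img src="<%= StaticContent.Url("/images/logo.png") %>" />`, which of course runs into the CompilationMode drawback noted above.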
More to come...
Tuesday, January 12, 2010
May be useful for backing up blogs:
http://www.codeplex.com/bloggerbackup
May also consider an import to a local Wordpress instance.
Tuesday, January 12, 2010
Problem: an ASP.NET website is connected to a CMS which permits users to modify .resx files.
Modifying .resx resource files at runtime has the unfortunate consequence of causing the App Domain to reload. This can really kill your scalability, since the HttpRuntime.Cache (among other things) is blown away when the App Domain reloads.
I reviewed some options for custom resource providers. First, I reviewed this article:
http://www.west-wind.com/presentations/wwDbResourceProvider/
The implementation here uses a database back-end for storage of resources. With this solution, there is obviously no issue with ASP.NET monitoring the App_GlobalResources and App_LocalResources folders and triggering an App Domain reload.
This implementation uses a private IDictionary member variable in each resource provider instance. The IDictionary’s keys correspond to all cultures supported in the resource data. The values are themselves IDictionary objects, which provide all the key-value pairs within each culture. These key-value pairs are the resource strings.
This example is easily adapted to read from .resx files instead of a database. (Note that the resource files would need to be placed in a directory that isn’t monitored by ASP.NET for changes.)
The issues I see with this implementation are as follows:
- Since the resource values are cached in member variables, they don’t change when the data source changes. The example code provides a ClearResourceCache() method to clear out the member variable, but this method is not called automatically when changes to specific resources are made. Depending on application requirements, this may not be problematic; it could be OK to wait for the next App Domain restart for the resources to update.
- There may be multi-threading issues with this code. Note this line:
this._resourceCache[cultureName] = Resources;
Is it worth placing a lock around the code that populates this member variable?
UPDATE -- locking has been added to the latest code download (not reflected in the article itself).
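A minimal sketch of what that locking could look like. The class, field, and method names here are my own placeholders, not the article's actual code; the point is simply to serialize the populate-on-miss path so two request threads don't race on the shared dictionary.

```csharp
using System.Collections.Generic;

public class DbResourceProvider
{
    // Per-culture resource cache, shared across request threads.
    private readonly Dictionary<string, IDictionary<string, object>> _resourceCache
        = new Dictionary<string, IDictionary<string, object>>();
    private readonly object _cacheLock = new object();

    public IDictionary<string, object> GetResourceSet(string cultureName)
    {
        lock (_cacheLock)
        {
            IDictionary<string, object> resources;
            if (!_resourceCache.TryGetValue(cultureName, out resources))
            {
                // Populate on first access for this culture.
                resources = LoadResourcesFromStore(cultureName);
                _resourceCache[cultureName] = resources;
            }
            return resources;
        }
    }

    // Placeholder for the database (or .resx) read.
    private IDictionary<string, object> LoadResourcesFromStore(string cultureName)
    {
        return new Dictionary<string, object>();
    }
}
```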
Another implementation that I reviewed:
http://www.onpreinit.com/2009/06/updatable-aspnet-resx-resource-provider.html
This implementation is more geared to my specific problem – making modifications to .resx files without causing App Domain restarts.
In this implementation, the resources are not stored in member variables (contrary to the above implementation). Instead, they are stored in HttpRuntime.Cache. This has the benefit that we can make use of CacheDependency to monitor the .resx files for changes. This makes the files updateable, with the changes immediately reflected in the resource provider.
Still, there are drawbacks:
- In this implementation, the cached object is not an IDictionary. Instead, it is an IResourceReader obtained from the ResXResourceReader that reads the .resx files. Consequently, every call to GetObject() loops through the IResourceReader, so in the worst case the code iterates through every resource key looking for the item. This could be a performance issue for large resource files.
- To get around this limitation, some additional code would need to be written to loop through the resource reader once and store the results in an IDictionary.
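A sketch of that workaround, under my own naming: read the .resx once into a Dictionary, then store the dictionary in HttpRuntime.Cache with a CacheDependency on the file, so lookups are constant-time and edits to the file still invalidate the cache automatically. (ResXResourceReader lives in the System.Windows.Forms assembly; the ResxCache class and cache-key format here are hypothetical.)

```csharp
using System.Collections;
using System.Collections.Generic;
using System.Resources;
using System.Web;
using System.Web.Caching;

public static class ResxCache
{
    public static IDictionary<string, object> GetResources(string resxPath)
    {
        string cacheKey = "resx:" + resxPath;
        var resources = HttpRuntime.Cache[cacheKey] as IDictionary<string, object>;
        if (resources == null)
        {
            // Read the whole .resx once into a dictionary for O(1) lookups.
            resources = new Dictionary<string, object>();
            using (var reader = new ResXResourceReader(resxPath))
            {
                foreach (DictionaryEntry entry in reader)
                    resources[(string)entry.Key] = entry.Value;
            }

            // CacheDependency evicts the dictionary when the .resx file changes,
            // without triggering an App Domain restart.
            HttpRuntime.Cache.Insert(cacheKey, resources,
                new CacheDependency(resxPath));
        }
        return resources;
    }
}
```

The .resx file would still need to live outside the folders ASP.NET monitors for restarts (App_GlobalResources, App_LocalResources).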
Some considerations:
- How often will resources be updated by users?
- How quickly do resource updates need to be reflected in the web UI?
Tuesday, January 05, 2010
Chapter 3 covers Mission-Critical (aka Business Critical) environments.
The five “key aspects” of a Business Critical environment are:
- High Availability
- Monitoring
- Handling release cycles
- Controlling Complexity
- Performance optimization
Monitoring
In the section on monitoring, the author mentions that SNMP (Simple Network Management Protocol) is the industry standard for monitoring commercially available products.
If a product isn’t SNMP-capable, an organization may choose to export its monitoring info via another means, or author a new Management Information Base (MIB) to expose the info over SNMP.
While SNMP provides statistics for bottom-up monitoring, the other monitors to consider are business metrics: for example, how many widgets are sold per second. These are ultimately what matter to the customer, so it is important to track them as well.
Handling Release Cycles
A key point here:
The best technical solution is not always the right solution for the business.
While it makes technical sense to have full Dev, QA, UAT, and Production environments, it may not be feasible for the business. For example, it may be worth the risk of downtime to skimp on a staging environment that doesn’t match production specs.
Controlling Complexity
For me, the key takeaway from this discussion was this pair of points:
Independent architectural components added to a system complicate it linearly.
Dependent architectural components added to a system complicate it exponentially.
Is a particular component adding dependencies? This is a key question to ask oneself when an architecture component is being considered, because, ultimately, the organization will need to support the resulting architecture.
Sunday, January 03, 2010
A few interesting points in chapter 1:
- The only "true" scalability is horizontal, meaning that system capacity is increased by adding more of the same hardware or software. (Scale out, not up)
- Scaling "vertically" is just adding horsepower (storage, CPU, etc) to an existing machine.
- One example where horizontal scaling is difficult is large ACID-compliant databases. The current practice to scale such a database is to place the service on a very powerful machine.
- Services such as DNS, FTP, and static web content are horizontally scalable by nature.
- Scaling down may be necessary at times: Consider a startup whose infrastructure needs to be scaled back to reduce costs.
Key attributes of an architect:
- Seeing beyond business requirements and predicting future demands.
- Experience
Elements to balance in architecture:
- Cost
- Complexity
- Maintainability
- Implementation latency
Chapter 2, Principles for Avoiding Failure...
The author suggests planning that a system will run on "yesterday's" commodity hardware. If your application needs the latest and greatest hardware, you are veering towards vertical scaling.
In Code Complete, McConnell describes the architect's key role in software construction as managing complexity. In this book, the author identifies "uncontrolled change" as the biggest enemy of software systems. He describes two opposing forces in software projects:
- The business, who wants technical innovation "on-demand" (interestingly, he notes that this is indeed available in the hardware world, but not in the realm of application development, architecture, and adaptation of business rules).
- The technology side, who want complete requirements for everything upfront.
Three "low-hanging fruits" for controlling change in projects:
- Source control
- Having a plan for each of the following (assuming configurations A and B):
- Push A to B
- Revert B to A
- Bare-metal restore of A
- Testing A to B push and B to A revert
- Unit testing, preferably across the entire system
As in the conclusion of chapter 1, the ultimate asset is "Good Design" which is attained by an experienced architect who can predict the future :)
Saturday, January 02, 2010
Here are a few books that I need to get through. Some of these I've partially read -- I plan to do a more focused reading in the upcoming months.
Microsoft .NET: Architecting Applications for the Enterprise
I've already taken a couple ideas from this book, specifically regarding UML.
Code Complete, 2nd edition
I've read a bit of this one already - great book that deserves its acclaim.
Pro ASP.NET MVC Framework
Work has not given me an opportunity to use MVC.NET yet -- looking forward to getting into this. Has quite a bit of TDD coverage as well.
Pro ASP.NET 3.5 in C# 2008
Hope to fill out my ASP.NET knowledge. The section on Asynchronous pages has helped a lot.
Head First Design Patterns
This has been out for a while - now I need to read it.
Head First Software Development
More of a project management focus
Building a Web 2.0 Portal with ASP.NET 3.5
I was intrigued by this book after reading the author's posts on codeproject.com.
The appendix on perfmon counters has been a good resource for me.
Scalable Internet Architectures
More of a PHP focus.
RESTful .NET
I may try a general WCF book first before launching into this one.
Information Architecture
ASP.NET AJAX in Action
I got this one before discovering [the amazing] jQuery - now I'm thinking I should have got a jQuery-focused book instead.
Mind Performance Hacks
Secrets of the JavaScript Ninja
I got a pre-release of this after reading about it on John Resig's blog.
Ultra Fast ASP.NET
Beginning XML with C# 2008 - From Novice to Professional
Friday, January 01, 2010