The following white papers are available on this website:
Practical approach to designing controllers using FPGA with embedded processors
The most advanced FPGAs with embedded hard processors face a real danger of extinction. The reasons can be traced to the difficulties associated with designing, debugging and supporting complex systems; poor positioning and a lack of dedicated, specialized tools have also contributed greatly to this outcome.
In this paper, we describe the alternative tools, methodologies and procedures we used to design FC (Fibre Channel) and SAS (Serial Attached SCSI) controllers and protocol converters, which we successfully implemented using FPGAs with embedded processors.
The recommended approach is to relegate the embedded processor to the role of a micro-programmed controller of hardware functions; to implement dedicated tools for ease of debugging and support; and finally, to avoid using encrypted IP blocks as much as possible.
The design and debugging of Systems on Chip (SoCs) requires dedicated tools, tailored to the target project. It also requires plenty of talent, experience, imagination and discipline on the part of the designers. Thinking "out of the box" is the rule, not the exception. Otherwise it is very easy to get bogged down and to stretch the development period inordinately (if there is enough money to support it). It also requires plenty of patience, since, due to their inherent complexity, projects like these are very difficult to schedule.
With all this said, designing SoCs on FPGAs with embedded processors is far easier than designing equivalent ASICs, due to the inherent flexibility and speed of modifications. more...
Art of debugging
The debugging of complex systems is the most challenging, frustrating and time-consuming part of the system design process. Systems like FPGAs (Field Programmable Gate Arrays) with embedded processors present a special challenge due to their vastly increased complexity, very limited visibility into internal operations, and the heterogeneous hardware/software nature of the designs.
The word "Art" is used here in its basic meaning: a branch of activity, using a particular medium and technique, that does not rely exclusively on the scientific method. Not scientific, in engineering? This sounds contrary to common sense. But it turns out that the debugging process requires not only a great deal of knowledge and dedicated tools, but also instinct and imagination on the part of the tester, because complete information about the source of an error is almost never available.
There are various reasons that can cause a system malfunction:
- Designer's errors;
- Omitted states or state transitions;
- Interface problems;
- The physical behavior of underlying systems and technologies;
- Defective components.
Object Servers
For the last fifty years, the majority of computer storage systems have been based on record servers. However, the role of such systems, and the challenges they face, have changed rapidly over time - from being an extension of internal computer storage to becoming an online depository of vast amounts of data. Because no information is provided to storage servers about data content and its organization, it became very cumbersome to manage, back up and conduct data searches on these systems. As a result, storage systems are very dependent on the applications and application servers utilizing their data.
In order to solve these problems we propose Object Servers, which can function as data object depositories that organize, search and back up objects autonomously. An Object Server can be described as a huge automated data warehouse, where each package contains some data and is referred to by its object ID, equivalent to a package tracking number. These packages can be of various sizes, from small records to large video files. The main difference between Object Servers and existing storage systems is the level of abstraction applied to data - data objects versus atomic bit records. This extended view provides an Object Server with new capabilities and functionality, raising the role and importance of data storage to a completely new level. more...
To SAN or not to SAN
Storage Area Networks ("SAN") are based on the assumption that STORAGE is separate from COMPUTING. The immediate corollary of this assumption was the introduction of the SAN, a network cloud that interconnects the servers and the storage. However, the introduction of this network brought with it all the problems associated with any network: authentication (who is who), access control (who has access rights), sharing (how to access coherent data in an orderly fashion when the data is being modified simultaneously by somebody else), predictable bandwidth (since resources can be seized by somebody else exactly when you need them), partitioning (how to ensure that problems in one part of the network do not affect data in adjacent systems), reliability (the larger the system, the more things can, and will, go wrong) and security (how to defend precious data from malicious access).
This article addresses the strengths and weaknesses of SAN, and tries to shed some light on the problems and compromises involved with separation of STORAGE from COMPUTING. more...