Search results for: Fast path assignment
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1444


124 Unsupervised Segmentation Technique for Acute Leukemia Cells Using Clustering Algorithms

Authors: N. H. Harun, A. S. Abdul Nasir, M. Y. Mashor, R. Hassan

Abstract:

Leukaemia is a blood cancer that contributes to the rising mortality rate in Malaysia each year. There are two main categories of leukaemia: acute and chronic. Acute leukaemia cells are produced and develop rapidly and uncontrollably. Therefore, if acute leukaemia cells could be identified quickly and effectively, proper treatment and medicine could be delivered. Because prompt and accurate diagnosis of leukaemia is required, the current study proposes unsupervised pixel segmentation based on clustering algorithms in order to obtain a fully segmented abnormal white blood cell (blast) in an acute leukaemia image. To obtain the segmented blast, three clustering algorithms, namely k-means, fuzzy c-means and moving k-means, have been applied on the saturation component image. Median filter and seeded region growing area extraction algorithms have then been applied to smooth the region of the segmented blast and to remove large unwanted regions from the image, respectively. The three clustering algorithms are compared in order to measure the performance of each on segmenting the blast area. Based on the good sensitivity value obtained, the results indicate that the moving k-means clustering algorithm successfully produces a fully segmented blast region in the acute leukaemia image, suggesting that the resultant images could be helpful to haematologists for further analysis of acute leukaemia.
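As a rough illustration of the kind of pipeline the abstract outlines (clustering on the saturation channel followed by median filtering), the sketch below uses plain k-means; the file name, cluster count, and the rule for picking the blast cluster are assumptions, not the authors' implementation.

```python
# Illustrative sketch only: k-means segmentation of the saturation channel of a
# blood-smear image, then median filtering. Input file and cluster count are assumed.
import cv2
import numpy as np

img = cv2.imread("leukaemia_sample.jpg")                 # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
saturation = hsv[:, :, 1].reshape(-1, 1).astype(np.float32)

# Cluster saturation values into 3 groups (blast, other cells, background).
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(saturation, 3, None, criteria, 5,
                                cv2.KMEANS_PP_CENTERS)

# Keep the cluster with the highest mean saturation as the candidate blast region.
blast_cluster = int(np.argmax(centers))
mask = (labels.reshape(hsv.shape[:2]) == blast_cluster).astype(np.uint8) * 255

# Median filter to smooth the segmented region, as described in the abstract.
mask = cv2.medianBlur(mask, 5)
cv2.imwrite("blast_mask.png", mask)
```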

Keywords: Acute Leukaemia Images, Clustering Algorithms, Image Segmentation, Moving k-Means.

123 A Study on Integrated Performance of Tap-Changing Transformer and SVC in Association with Power System Voltage Stability

Authors: Mahmood Reza Shakarami, Reza Sedaghati

Abstract:

Electricity market activities and a growing demand for electricity have led to heavily stressed power systems. This requires operation of the networks closer to their stability limits. Power system operation is affected by stability-related problems, leading to unpredictable system behavior. Voltage stability refers to the ability of a power system to sustain appropriate voltage levels through large and small disturbances. Steady-state voltage stability is concerned with limits on the existence of steady-state operating points for the network. FACTS devices can be utilized to increase the transmission capacity, the stability margin and the dynamic behavior, or serve to ensure improved power quality. Their main capabilities are reactive power compensation, voltage control and power flow control. Among the FACTS controllers, the Static Var Compensator (SVC) provides fast-acting dynamic reactive compensation for voltage support during contingency events. In this paper, voltage stability assessment with appropriate representations of tap-changing transformers and SVC is investigated; integrating both of these devices is the main topic of the paper. The effect of the presence of tap-changing transformers on the static VAR compensator controller parameters and ratings necessary to stabilize load voltages at certain values is highlighted. The interrelation between transformer off-nominal tap ratios and the SVC controller gains, droop slopes and rating is found. P-V curves are constructed to calculate loadability margins.
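For readers unfamiliar with P-V curves, the sketch below traces one for a minimal, assumed two-bus system (a load fed from a source E through a reactance X); it is only an illustration of how a loadability margin is read off, not the paper's network model or SVC representation.

```python
# Minimal two-bus P-V curve illustration (assumed system, not the authors' model).
import numpy as np

E, X = 1.0, 0.3                              # per-unit source voltage and line reactance (assumed)
power_factor_angle = np.arccos(0.95)

for P in np.arange(0.0, 2.0, 0.05):
    Q = P * np.tan(power_factor_angle)
    # The power-flow condition reduces to a quadratic in V^2:
    # V^4 + (2QX - E^2) V^2 + X^2 (P^2 + Q^2) = 0
    a, b, c = 1.0, 2 * Q * X - E**2, X**2 * (P**2 + Q**2)
    disc = b**2 - 4 * a * c
    if disc < 0:
        print(f"P = {P:.2f} pu exceeds the loadability limit (nose of the P-V curve)")
        break
    V = np.sqrt((-b + np.sqrt(disc)) / (2 * a))   # upper (stable) solution branch
    print(f"P = {P:.2f} pu  ->  V = {V:.3f} pu")
```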

Keywords: SVC, voltage stability, P-V curve, reactive power, tap changing transformer.

122 A Multi-Agent Smart E-Market Design at Work for Shariah Compliant Islamic Banking

Authors: Wafa Ghonaim

Abstract:

Though growing fast, Islamic financing at large, and its diverse instruments, is a controversial matter among scholars. This is evident from the ongoing debates on its Shariah compliance. The arguments, however, are inciting doubts and concerns among clients about its credibility, which is harming this lucrative sector. The work here investigates, in particular, some issues related to the Tawarruq instrument. It examines the issues of linking Murabaha and Wakala contracts, the reselling of commodities to the same traders, and the transfer of ownership. The work affirms that a multi-agent smart electronic market design would facilitate Shariah compliance. The smart market exploits the rational decision-making capabilities of autonomous proxy agents that enable the clients, traders, brokers, and the bank to buy and sell commodities and to manage transactions and cash flow. The smart electronic market design delivers desirable qualities that remove the need for Wakala contracts and the reselling of commodities to the same traders. It also resolves the ownership transfer issues by allowing stakeholders to trade independently. The bank administers the smart electronic market and assures the reliability of trades, transactions and cash flow. A multi-agent simulation is presented to validate the concept and processes. We anticipate that the multi-agent smart electronic market design would deliver Shariah compliance of personal financing in line with the aspirations of scholars, banks, traders and potential clients.

Keywords: Islamic finance, Shariah compliance, smart electronic markets design, multi-agent systems.

121 Modeling, Simulation and Monitoring of Nuclear Reactor Using Directed Graph and Bond Graph

Authors: A. Badoud, M. Khemliche, S. Latreche

Abstract:

The main objective of this paper is to develop a graphical technique for the modeling, simulation and diagnosis of industrial systems. Its importance is most apparent for a complex system such as a pressurized water nuclear reactor, which exhibits several forms of non-linearity and multiple time scales. In this case the analytical approach is heavy and does not give a quick view of the evolution of the system. The Bond Graph tool enabled us to transform the analytical model into a graphical model, and the simulation software SYMBOLS 2000, specific to Bond Graphs, made it possible to validate the model and obtain results consistent with the technical specifications. We introduce an analysis of the problems involved in fault localization and identification in complex industrial processes. We propose a fault detection method applied to diagnosis and to determining the severity of a detected fault, and we show the possibilities of applying the new diagnosis approaches to complex system control. Industrial systems have become increasingly complex, and fault diagnosis procedures for physical systems become very involved as soon as the systems considered are no longer elementary. Faced with this complexity, we chose to use a Fault Detection and Isolation (FDI) method, analysing the control problem and designing a reliable diagnosis system able to handle complex, spatially distributed dynamic systems, applied to a standard pressurized water nuclear reactor.

Keywords: Bond Graph, Modeling, Simulation, Monitoring, Analytical Redundancy Relations, Pressurized Water Reactor, Directed Graph.

120 Customer Involvement in the Development of New Sustainable Products: A Review of the Literature

Authors: Natalia Moreira, Trevor Wood-Harper

Abstract:

The acceptance of sustainable products by the final consumer is still one of the challenges of the industry, which constantly seeks alternative approaches to be accepted in the global market. A large set of methods and approaches has been discussed and analysed throughout the literature. Considering the current need for sustainable development and the current pace of consumption, the need for a combined solution towards the development of new products has become clear, forcing researchers in product development to propose alternatives to the previous standard product development models. This paper presents, through a systematic analysis of the literature on product development, eco-design and consumer involvement, a set of alternatives regarding consumer involvement in the development of sustainable products and how these approaches could help the sustainable industry establish itself in the general market. Still being developed in the course of the author’s PhD, the initial findings of the research show that understanding the benefits of sustainable behaviour leads to more conscious acquisition and eventually to the implementation of sustainable change by the consumer. Thus, this paper is the initial approach towards the development of new sustainable products, using the fashion industry as an example of practical implementation and acceptance by consumers. By comparing the existing literature and critically analysing it, this paper concludes that consumer involvement is strategic to improving the general understanding of sustainability and its features. The involvement of consumers and communities has been studied since the early 90s as a way to exemplify uses and to guarantee fast comprehension. The analysis also covers the importance of this approach for increasing innovation and ground-breaking developments, thus requiring further research and practical implementation in order to better understand the implications and limitations of this methodology.

Keywords: Consumer involvement, Products development, Sustainability.

119 Process Optimization and Automation of Information Technology Services in a Heterogenic Digital Environment

Authors: Tasneem Halawani, Yamen Khateeb

Abstract:

With customers’ ever-increasing expectations for fast service provisioning for all their business needs, information technology (IT) organizations, as business partners, have to cope with this demanding environment and deliver their services in the most effective and efficient way. The purpose of this paper is to identify optimization and automation opportunities for the top requested IT services in a heterogenic digital environment and a widely spread customer base. In collaboration with systems, processes, and subject matter experts (SMEs), the processes in scope were approached by analyzing four years of related historical data, identifying and surveying stakeholders, modeling the as-is processes, and studying systems integration/automation capabilities. This effort resulted in identifying several pain areas, including standardization, unnecessary customer and IT involvement, manual steps, systems integration, and performance measurement. These pain areas were addressed by standardizing the top five requested IT services, eliminating/automating 43 steps, and utilizing a single platform for end-to-end process execution. In conclusion, the optimization of IT service request processes in a heterogenic digital environment and widely spread customer base is challenging, yet achievable without compromising the service quality and customers’ added value. Further studies can focus on measuring the value of the eliminated/automated process steps to quantify the enhancement impact. Moreover, a similar approach can be utilized to optimize other IT service requests, with a focus on business criticality.

Keywords: Automation, customer value, heterogenic, integration, IT services, optimization, processes.

118 A Review on Cloud Computing and Internet of Things

Authors: Sahar S. Tabrizi, Dogan Ibrahim

Abstract:

Cloud Computing is a convenient model for on-demand networks that uses shared pools of virtual configurable computing resources, such as servers, networks, storage devices, applications, etc. The cloud serves as an environment for companies and organizations to use infrastructure resources without making any purchases, and they can access such resources wherever and whenever they need. Cloud computing is useful to overcome a number of problems in various Information Technology (IT) domains such as Geographical Information Systems (GIS), Scientific Research, e-Governance Systems, Decision Support Systems, ERP, Web Application Development, Mobile Technology, etc. Companies can use Cloud Computing services to store large amounts of data that can be accessed from anywhere on Earth and at any time. Such services are rented by the client companies, where the actual rent depends upon the amount of data stored on the cloud and the amount of processing power used in a given time period. The resources offered by the cloud service companies are flexible in the sense that the user companies can increase or decrease their storage or processing power requirements at any time, thus minimizing the overall rental cost of the service they receive. In addition, the Cloud Computing service providers offer fast processors and application software that can be shared by their clients. This is especially important for small companies with limited budgets which cannot afford to purchase their own expensive hardware and software. This paper is an overview of Cloud Computing, giving its types, principles, advantages, and disadvantages. In addition, the paper gives some example engineering applications of Cloud Computing and makes suggestions for possible future applications in the field of engineering.

Keywords: Cloud computing, cloud services, IaaS, PaaS, SaaS, IoT.

117 CFD-Parametric Study in Stator Heat Transfer of an Axial Flux Permanent Magnet Machine

Authors: Alireza Rasekh, Peter Sergeant, Jan Vierendeels

Abstract:

This paper deals with the numerical simulation of convective heat transfer in the stator disk of an axial flux permanent magnet (AFPM) electrical machine. Overheating is one of the main issues in the design of AFPMs; it mainly occurs in the stator disk and therefore needs to be prevented. A rotor-stator configuration with 16 magnets at the periphery of the rotor is considered. Air is allowed to flow through openings in the rotor disk, through the channels formed between the magnets, and through the gap region between the magnets and the stator surface. The rotating channels between the magnets act as a driving force for the air flow. The significant non-dimensional parameters are the rotational Reynolds number, the gap size ratio, the magnet thickness ratio, and the magnet angle ratio. The goal is to find correlations for the Nusselt number on the stator disk in terms of these non-dimensional numbers. Therefore, CFD simulations have been performed with the multiple reference frame (MRF) technique to model the rotary motion of the rotor and the flow around and inside the machine. A minimization method based on a pattern-search algorithm is introduced to find appropriate values of the reference temperature. The resulting correlations are fast, robust and capable of predicting the stator heat transfer with good accuracy. The results reveal that the magnet angle ratio diminishes the stator heat transfer, whereas the rotational Reynolds number and the magnet thickness ratio improve the convective heat transfer. On the other hand, there is a certain gap size ratio at which the stator heat transfer reaches a maximum.

Keywords: Axial flux permanent magnet, CFD, magnet parameters, stator heat transfer.

116 Q-Map: Clinical Concept Mining from Clinical Documents

Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala

Abstract:

Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, image analytics, etc. Most of the data in the field are well structured and available in numerical or categorical formats which can be used for experiments directly. But at the opposite end of the spectrum there exists a wide expanse of data that is intractable for direct analysis owing to its unstructured nature; it is found in the form of discharge summaries, clinical notes and procedural notes, which are in human-written narrative format and have neither a relational model nor any standard grammatical structure. An important step in utilizing these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data using information retrieval and data mining techniques. To address this problem, the authors present Q-Map, a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique based on a string matching algorithm indexed on curated knowledge sources, which is both fast and configurable. The authors also briefly examine its comparative performance against MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.
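The general idea of dictionary-indexed string matching over clinical text can be sketched as below; the concept entries, note text, and matching rules are invented for illustration and do not reflect Q-Map's actual index or algorithm, which the abstract only describes at a high level.

```python
# Hedged sketch of dictionary-based clinical concept matching (assumed entries).
import re

# Curated knowledge source: surface form -> concept identifier (hypothetical entries).
concept_index = {
    "myocardial infarction": "C0027051",
    "shortness of breath": "C0013404",
    "aspirin": "C0004057",
}

def mine_concepts(note: str):
    """Return (span, surface form, concept id) tuples found by longest-first matching."""
    text = note.lower()
    hits = []
    for surface in sorted(concept_index, key=len, reverse=True):
        for match in re.finditer(r"\b" + re.escape(surface) + r"\b", text):
            hits.append((match.span(), surface, concept_index[surface]))
    return sorted(hits)

note = "Patient with prior myocardial infarction, reports shortness of breath; started aspirin."
for span, surface, cid in mine_concepts(note):
    print(span, surface, "->", cid)
```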

Keywords: Information retrieval (IR), unified medical language system (UMLS), Syntax Based Analysis, natural language processing (NLP), medical informatics.

115 Error Detection and Correction for Onboard Satellite Computers Using Hamming Code

Authors: Rafsan Al Mamun, Md. Motaharul Islam, Rabana Tajrin, Nabiha Noor, Shafinaz Qader

Abstract:

In an attempt to enrich the lives of billions of people by providing proper information, security and a way of communicating with others, the need for efficient and improved satellites is constantly growing. Thus, there is an increasing demand for better error detection and correction (EDAC) schemes capable of protecting the data onboard satellites. This paper is aimed at detecting and correcting such errors using the Hamming code, which uses the concept of parity and parity bits to prevent single-bit errors onboard a satellite in Low Earth Orbit. The paper focuses on the study of Low Earth Orbit satellites and the process of generating the Hamming code matrix to be used for EDAC using computer programs. The most effective version generated was the Hamming (16, 11, 4) code, implemented in MATLAB, and the paper compares this scheme with other EDAC mechanisms, including other versions of Hamming codes and the Cyclic Redundancy Check (CRC), and discusses its limitations. This version of the Hamming code guarantees single-bit error correction as well as double-bit error detection. Furthermore, it has proved to be fast, with a checking time of 5.669 nanoseconds, has a relatively higher code rate and lower bit overhead than the other versions, and can detect a greater percentage of errors per length of code than other EDAC schemes with similar capabilities. In conclusion, with proper implementation of the system, it is quite possible to ensure a relatively uncorrupted satellite storage system.
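To make the single-error-correction / double-error-detection behaviour concrete, the sketch below encodes 11 data bits into an extended Hamming (16, 11) codeword (a Hamming(15, 11) code plus an overall parity bit); the bit-position conventions are assumptions for illustration, not the authors' MATLAB code.

```python
# Illustrative extended Hamming (16, 11) encode/decode (assumed conventions).

def hamming_encode(data_bits):
    """data_bits: list of 11 ints (0/1). Returns a 16-bit codeword as a list."""
    code = [0] * 16                       # index 1..15 = Hamming(15,11), index 0 = overall parity
    parity_pos = [1, 2, 4, 8]
    data_iter = iter(data_bits)
    for pos in range(1, 16):
        if pos not in parity_pos:
            code[pos] = next(data_iter)
    for p in parity_pos:                  # even parity over positions whose index has bit p set
        code[p] = sum(code[i] for i in range(1, 16) if i & p) % 2
    code[0] = sum(code[1:]) % 2           # overall parity for double-error detection
    return code

def hamming_decode(code):
    """Returns (data bits after correction, status string)."""
    syndrome = 0
    for p in [1, 2, 4, 8]:
        if sum(code[i] for i in range(1, 16) if i & p) % 2:
            syndrome |= p
    overall = sum(code) % 2
    if syndrome and overall:              # single-bit error: syndrome gives its position
        code = code[:]
        code[syndrome] ^= 1
        status = "corrected single-bit error at position %d" % syndrome
    elif syndrome and not overall:
        status = "double-bit error detected (uncorrectable)"
    else:
        status = "no error" if not overall else "error in overall parity bit"
    data = [code[i] for i in range(1, 16) if i not in (1, 2, 4, 8)]
    return data, status

data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
codeword = hamming_encode(data)
codeword[6] ^= 1                          # inject a single-bit error
print(hamming_decode(codeword))
```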

Keywords: Bit-flips, Hamming code, low earth orbit, parity bits, satellite, single error upset.

114 Effective Teaching Pyramid and Its Impact on Enhancing the Participation of Students in Swimming Classes

Authors: Salam M. H. Kareem

Abstract:

Instructional or teaching procedures and their proper sequence are essential for high-quality learning outcomes. These actions form the path that the teacher takes during the learning process after setting the learning objectives. Teachers and specialists in the education field should define teaching procedures together with an effective mechanism for their implementation, so as to achieve a logical sequence leading to the desired output of the overall education process. Determining the sequence of these actions may be a strategic process outlined by a strategic educational plan or drawn up by teachers with a high level of experience, enabling them to determine those logical procedures. While specific actions may be necessary for a specific form, many Physical Education (PE) teachers work across various sports disciplines. This study was conducted to investigate the impact of using the teaching sequence of the teaching pyramid on raising the level of enjoyment in swimming classes. After four months of teaching swimming skills to the control and experimental groups, we found that using the tools of the teaching pyramid with the experimental group led to statistically significant differences, in favor of the teaching pyramid, in the students' positive tendency to participate in swimming classes compared with the traditional teaching procedures. Students' tendency to participate in swimming classes is enhanced when the teaching procedures followed are sensitive to individual differences and are based on the element of pleasure in learning, whereas the tendency is less positive under traditional teaching procedures as the skill requirements become higher and more difficult to perform. With the successive procedures of the teaching pyramid, the students' positive tendency increased even as the skill requirements became higher and more difficult to perform, because of the high level of motivation and the desire for self-challenge provided by the teaching pyramid.

Keywords: Physical education, swimming classes, teaching process, teaching pyramid.

113 Spatial Clustering Model of Vessel Trajectory to Extract Sailing Routes Based on AIS Data

Authors: Lubna Eljabu, Mohammad Etemad, Stan Matwin

Abstract:

The automatic extraction of shipping routes is advantageous for intelligent traffic management systems to identify events and support decision-making in maritime surveillance. At present, there is a high demand for the extraction of maritime traffic networks that accurately resemble the real traffic of vessels, which is valuable for further analytical processing of vessel trajectories (e.g., naval routing and voyage planning, anomaly detection, destination prediction, time of arrival estimation). With the help of big data and the processing of huge amounts of vessel trajectory data, it is possible to learn these shipping routes from the navigation history of other, similar ships that were travelling in a given area. In this paper, we propose a spatial clustering model of vessel trajectories (SPTCLUST) to extract spatial representations of sailing routes from historical Automatic Identification System (AIS) data. The model consists of three main parts: data preprocessing, path finding, and route extraction, the last of which comprises clustering and representative trajectory extraction. The proposed clustering method provides techniques to overcome the problems of (i) optimal input parameter selection, (ii) the high complexity of processing a huge volume of multidimensional data, and (iii) the spatial representation of complete representative trajectory detection in the context of trajectory clustering algorithms. The experimental evaluation showed the effectiveness of the proposed model on a real-world AIS dataset from the Port of Halifax. The results contribute to a further understanding of shipping route patterns, which could aid surveillance authorities in stable and sustainable vessel traffic management.
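A very reduced sketch of the clustering stage is shown below using off-the-shelf DBSCAN on toy position reports; SPTCLUST's actual preprocessing, parameters, and representative-trajectory extraction are only described at a high level in the abstract, so everything here (data, eps, cluster summary) is an assumption for illustration.

```python
# Rough sketch of spatial clustering of AIS position reports (assumed data/parameters).
import numpy as np
from sklearn.cluster import DBSCAN

# Each row: one cleaned AIS position report (latitude, longitude).
positions = np.array([
    [44.65, -63.57], [44.66, -63.55], [44.67, -63.53],   # reports along route A
    [44.64, -63.58], [44.65, -63.56],
    [44.70, -63.40], [44.71, -63.38], [44.72, -63.36],   # reports along route B
])

# Group spatially dense position reports; eps is in degrees here purely for brevity.
clustering = DBSCAN(eps=0.03, min_samples=2).fit(positions)

for label in sorted(set(clustering.labels_)):
    members = positions[clustering.labels_ == label]
    # A representative point (e.g. the centroid) of each cluster can seed a route segment.
    print("cluster", label, "centroid", members.mean(axis=0))
```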

Keywords: Vessel trajectory clustering, trajectory mining, spatial clustering, marine intelligent navigation, maritime traffic network extraction, sailing routes extraction.

112 Verification of On-Line Vehicle Collision Avoidance Warning System using DSRC

Authors: C. W. Hsu, C. N. Liang, L. Y. Ke, F. Y. Huang

Abstract:

Many accidents happen because of fast driving, habitual overtime work or fatigue. This paper presents a remote-warning solution for vehicle collision avoidance using vehicular communication. The developed system integrates dedicated short range communication (DSRC) and the global positioning system (GPS) with an embedded system into a powerful remote warning system. DSRC communication technology is adopted as the bridge to transmit the vehicular information and broadcast the vehicle position. The proposed system is divided into two parts in a vehicle: the positioning unit and the vehicular unit. The positioning unit provides the position and heading information from the GPS module, while the vehicular unit receives the brake, throttle and other signals via the controller area network (CAN) interface connected to each mechanism. The mobile hardware is built as an embedded system using an X86 processor running Linux. A vehicle communicates with other vehicles via DSRC in a non-addressed protocol using the wireless access in vehicular environments (WAVE) short message protocol. From the position data and vehicular information, this paper provides a conflict detection algorithm that performs time separation and remote warning with error-bubble consideration, and the warning information is displayed on-line on the screen. This system is able to enhance driver assistance services and improve critical safety by using vehicular information from neighboring vehicles.
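A simplified illustration of a time-separation check is sketched below; the paper's actual conflict-detection algorithm and "error bubble" model are not specified in the abstract, so the geometry, inputs, and warning thresholds here are assumptions.

```python
# Simplified time-to-closest-approach check between own vehicle and a DSRC neighbour.
import math

def time_to_closest_approach(own, other):
    """own/other: dicts with position (x, y) in metres and velocity (vx, vy) in m/s."""
    rx, ry = other["x"] - own["x"], other["y"] - own["y"]
    vx, vy = other["vx"] - own["vx"], other["vy"] - own["vy"]
    rel_speed_sq = vx * vx + vy * vy
    if rel_speed_sq < 1e-9:
        return float("inf"), math.hypot(rx, ry)
    t = max(-(rx * vx + ry * vy) / rel_speed_sq, 0.0)   # time of closest approach
    dmin = math.hypot(rx + vx * t, ry + vy * t)         # separation at that time
    return t, dmin

own = {"x": 0.0, "y": 0.0, "vx": 20.0, "vy": 0.0}       # from own GPS/CAN data
other = {"x": 60.0, "y": 3.0, "vx": 5.0, "vy": 0.0}     # received over DSRC

t, dmin = time_to_closest_approach(own, other)
if t < 5.0 and dmin < 10.0:                             # assumed warning thresholds
    print(f"WARNING: predicted separation {dmin:.1f} m in {t:.1f} s")
```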

Keywords: Dedicated short range communication, GPS, Controller area network, Collision avoidance warning system.

111 Analysis of Seismic Waves Generated by Blasting Operations and their Response on Buildings

Authors: S. Ziaran, M. Musil, M. Cekan, O. Chlebo

Abstract:

The paper analyzes the response of buildings and industrial structures to seismic waves (low frequency mechanical vibration) generated by blasting operations. The principles of seismic analysis can be applied to different kinds of excitation such as earthquakes, wind, explosions, random excitation from local transportation, periodic excitation from large rotating machines and/or machines with reciprocating motion, metal forming processes such as forging, shearing and stamping, chemical reactions, construction and earth moving work, and other strong deterministic and random energy sources caused by human activities. The article deals with the response of a residential home to seismic, low frequency, mechanical vibrations generated by nearby blasting operations. The goal was to determine the fundamental natural frequencies of the measured structure; it is therefore important to determine the resonant frequencies in order to design suitable modal damping. The article also analyzes the package of seismic waves generated by blasting (primary P-waves and secondary S-waves) and investigates the transfer regions. For the detection of seismic waves resulting from an explosion, the Fast Fourier Transform (FFT) and modal analysis in the frequency domain are used, and the signal was also acquired and analyzed in the time domain. In the conclusions, the measured results of seismic waves caused by blasting in a nearby quarry and their effect on a nearby structure (a house) are analyzed. The response of the house, including the fundamental natural frequency and possible fatigue damage, is also assessed.
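The FFT step used to pick out dominant vibration frequencies can be sketched as below; the signal here is synthetic (the measured blast records are not available from the abstract), and the sampling rate and component frequencies are assumptions.

```python
# Minimal FFT sketch on a synthetic acceleration record standing in for the blast data.
import numpy as np

fs = 1000.0                                   # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
# Two low-frequency components plus noise, mimicking a P-/S-wave packet on the structure.
accel = 0.8 * np.sin(2 * np.pi * 6.5 * t) + 0.3 * np.sin(2 * np.pi * 18.0 * t)
accel += 0.05 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(accel * np.hanning(t.size)))   # windowed one-sided spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

dominant = freqs[np.argmax(spectrum)]
print(f"dominant frequency: {dominant:.1f} Hz")
```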

Keywords: Building structure, seismic waves, spectral analysis, structural response.

110 FEA for Transient Responses of an S-Shaped Force Transducer with a Viscoelastic Absorber Using a Nonlinear Complex Spring

Authors: T. Yamaguchi, Y. Fujii, A. Takita, T. Kanai

Abstract:

To compute the dynamic characteristics of nonlinear viscoelastic springs attached to elastic structures with a huge number of degrees of freedom, Yamaguchi proposed a fast numerical method using the finite element method [1]-[2]. In this method, the restoring forces of the springs are expressed using power series of their elongation, and nonlinear hysteresis damping is introduced through nonlinear complex spring constants. A finite element for the nonlinear spring with complex coefficients is formulated and connected to the elastic structures modeled by linear solid finite elements. Further, to save computational time, the discrete equations in physical coordinates are transformed into nonlinear ordinary coupled equations using normal coordinates corresponding to the linear natural modes. In this report, the proposed method is applied to the simulation of impact responses of a viscoelastic shock absorber with an elastic structure (an S-shaped structure) colliding with a concentrated mass. The concentrated mass has an initial velocity and collides with the shock absorber. Accelerations of the elastic structure and the concentrated mass are measured using the Levitation Mass Method proposed by Fujii [3]. The accelerations calculated with the proposed FEM correspond to the experimental ones. Moreover, using this method, we also investigate the dynamic errors of the S-shaped force transducer due to the elastic modes of the S-shaped structure.

Keywords: Transient response, Finite Element analysis, Numerical analysis, Viscoelastic shock absorber, Force transducer.

109 A Technical Perspective on Roadway Safety in Eastern Province: Data Evaluation and Spatial Analysis

Authors: Muhammad Farhan, Sayed Faruque, Amr Mohammed, Sami Osman, Omar Al-Jabari, Abdul Almojil

Abstract:

Saudi Arabia has seen a drastic increase in traffic-related crashes in recent years. With a population of over 29 million, Saudi Arabia is considered a fast-growing and emerging economy. The rapid population increase and economic growth have resulted in rapid expansion of transportation infrastructure, which has led to an increase in road crashes. The Saudi Ministry of Interior reported more than 7,000 people killed and 68,000 injured in 2011, ranking Saudi Arabia among the worst countries worldwide in traffic safety. The traffic safety issues in the country also cause distress to road users and an economic loss exceeding 3.7 billion Euros annually. Keeping this in view, researchers in Saudi Arabia are investigating ways to improve traffic safety conditions in the country. This paper presents a multilevel approach to collecting the traffic safety data required for traffic safety studies in the region. Two highway corridors, the 39-kilometre King Fahd Highway and the 42-kilometre Gulf Cooperation Council Highway, connecting the cities of Dammam and Khobar, were selected as the study area. The traffic data collected included traffic counts, crash data, travel time data, and speed data. The collected data were analysed using a geographic information system to evaluate any correlation. Further research is needed to investigate the effectiveness of traffic safety data collected in such a concerted effort.

Keywords: Crash Data, Data Collection, Traffic Safety.

108 Towards Innovation Performance among University Staff

Authors: C. S. Quah, S. P. L. Sim

Abstract:

This study examined how individuals in their respective teams contributed to innovation performance, besides defining the term innovation from their own respective views. It also identified factors that motivated university staff to contribute to innovation products. In addition, it examined whether there is a significant relationship between professional training level and length of service among university staff towards innovation, and to what extent the two variables contributed towards innovative products. The significance of this study is that it revealed the strengths and weaknesses of university staff when contributing to innovation performance. Stratified random sampling was employed to determine the samples representing the population of lecturers in the study, involving 123 lecturers in one of the local universities in Malaysia. The data were analyzed by categorizing the open-ended responses into themes and by using descriptive and inferential statistics for the quantitative data. This study revealed that two types of definition of the term “innovation” exist among the university staff: the creation of a new product or a new approach to doing things, and a value-added creative way to upgrade or improve an existing process or service to be more efficient. The study found that the most prominent factor that propels staff towards innovation is improving the product in order to benefit users, followed by self-satisfaction and recognition. This implies that the staff in the organization viewed the creation of innovative products as a process of growth to fulfill the needs of others and also to realize their personal potential. The study also found that there was a significant relationship between professional training level and a length of service of 4-6 years among the university staff; the other length-of-service groups showed no significant relationship with professional training level towards innovation. Moreover, the directional measures showed that the relationship between a length of service of 4-6 years and professional training level among the university staff is quite weak. This implies that good organization management lies on the shoulders of the key leaders who enlighten the path to be followed by the staff.

Keywords: Innovation, length of service, performance, professional training level, motivation.

107 Accurate Control of a Pneumatic System using an Innovative Fuzzy Gain-Scheduling Pattern

Authors: M. G. Papoutsidakis, G. Chamilothoris, F. Dailami, N. Larsen, A Pipe

Abstract:

Due to their high power-to-weight ratio and low cost, pneumatic actuators are attractive for robotics and automation applications; however, achieving fast and accurate position control has long been known as a complex control problem. A methodology for obtaining high position accuracy with a linear pneumatic actuator is presented. During experimentation with a number of classical PID control approaches over many operations of the pneumatic system, the need for frequent manual re-tuning of the controller could not be eliminated. The reason for this problem is thermal and energy losses inside the cylinder body due to the complex friction forces developed by the piston displacements. Although PD controllers performed very well over short periods, it was necessary in our research project to introduce some form of automatic gain-scheduling to achieve good long-term performance. We chose a fuzzy logic system to do this, which proved to be an easily designed and robust approach. Since the PD approach showed very good behaviour in terms of position accuracy and settling time, it was incorporated into a modified form of the first-order Takagi-Sugeno fuzzy method to build an overall controller. This fuzzy gain-scheduler uses an input variable which automatically changes the PD gain values of the controller according to the frequency of repeated system operations. The performance of the new controller was significantly improved and the need for manual re-tuning was eliminated without a decrease in performance. The performance of the controller operating with the above method is going to be tested through a high-speed web network (GRID) for research purposes.
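The gain-scheduling idea can be illustrated with the toy sketch below: a PD law whose gains are blended between two operating conditions by fuzzy memberships of an "operation count" input. The membership shapes, gain values, and input variable are assumptions; the paper's actual Takagi-Sugeno rules and tuning are not given in the abstract.

```python
# Toy fuzzy gain-scheduled PD controller (all numbers are placeholder assumptions).

def membership(x, low, high):
    """1 at/below low, 0 at/above high, linear in between (simple shoulder function)."""
    if x <= low:
        return 1.0
    if x >= high:
        return 0.0
    return (high - x) / (high - low)

def scheduled_pd_gains(repeated_ops):
    """Blend 'fresh cylinder' and 'warmed-up cylinder' PD gains by fuzzy weighting."""
    mu_fresh = membership(repeated_ops, 50, 500)   # few operations so far
    mu_worn = 1.0 - mu_fresh                       # many repeated operations
    kp = mu_fresh * 2.0 + mu_worn * 3.2            # first-order Takagi-Sugeno style blend
    kd = mu_fresh * 0.15 + mu_worn * 0.25
    return kp, kd

def pd_control(setpoint, position, velocity, repeated_ops):
    kp, kd = scheduled_pd_gains(repeated_ops)
    return kp * (setpoint - position) - kd * velocity

print(pd_control(setpoint=100.0, position=92.0, velocity=1.5, repeated_ops=300))
```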

Keywords: Fuzzy logic, gain scheduling, leaky integrator, pneumatic actuator.

106 Opponent Color and Curvelet Transform Based Image Retrieval System Using Genetic Algorithm

Authors: Yesubai Rubavathi Charles, Ravi Ramraj

Abstract:

In order to retrieve images efficiently from a large database, a unique method integrating color and texture features using genetic programming is proposed. An opponent color histogram, which is invariant to shadow, shade, and light intensity, is employed in the proposed framework for extracting color features. For texture feature extraction, the fast discrete curvelet transform, which captures more orientation information at different scales, is incorporated to represent curve-like edges. A current concern in image retrieval is to reduce the semantic gap between the user’s preference and low-level features. To address this concern, a genetic algorithm combined with relevance feedback is embedded to reduce the semantic gap and retrieve the user’s preferred images. Extensive comparative experiments have been conducted to evaluate the proposed framework for content-based image retrieval on two databases, COIL-100 and Corel-1000. The experimental results clearly show that the proposed system surpasses other existing systems in terms of precision and recall. The proposed work achieves the highest performance, with an average precision of 88.2% on COIL-100 and 76.3% on Corel, and an average recall of 69.9% on COIL and 76.3% on Corel. Thus, the experimental results confirm that the proposed content-based image retrieval architecture attains a better solution for image retrieval.
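The opponent-color feature alone can be sketched as below; the bin count, channel ranges, and normalisation are assumptions, and the curvelet texture features and the genetic-algorithm relevance-feedback loop from the paper are not reproduced here.

```python
# Sketch of an opponent-color histogram feature and a Euclidean comparison (assumed settings).
import numpy as np

def opponent_color_histogram(rgb_image, bins=8):
    """rgb_image: H x W x 3 array with values in [0, 255]. Returns a concatenated histogram."""
    r = rgb_image[..., 0].astype(np.float32)
    g = rgb_image[..., 1].astype(np.float32)
    b = rgb_image[..., 2].astype(np.float32)
    o1 = (r - g) / np.sqrt(2)                      # red-green opponent channel
    o2 = (r + g - 2 * b) / np.sqrt(6)              # yellow-blue opponent channel
    o3 = (r + g + b) / np.sqrt(3)                  # intensity channel
    hist = []
    for channel, rng in ((o1, (-181, 181)), (o2, (-209, 209)), (o3, (0, 442))):
        h, _ = np.histogram(channel, bins=bins, range=rng)
        hist.append(h / channel.size)              # normalise so images of any size compare
    return np.concatenate(hist)

query = np.random.randint(0, 256, (64, 64, 3))
candidate = np.random.randint(0, 256, (64, 64, 3))
d = np.linalg.norm(opponent_color_histogram(query) - opponent_color_histogram(candidate))
print("feature distance:", d)
```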

Keywords: Content based image retrieval, Curvelet transform, Genetic algorithm, Opponent color histogram, Relevance feedback.

105 Evaluation of Electro-Flocculation for Biomass Production of Marine Microalgae Phaeodactylum tricornutum

Authors: Luciana C. Ramos, Leandro J. Sousa, Antônio Ferreira da Silva, Valéria Gomes Oliveira Falcão, Suzana T. Cunha Lima

Abstract:

The commercial production of biodiesel using microalgae demands a high energy input for harvesting biomass, making production economically unfeasible. Methods currently used involve mechanical, chemical, and biological procedures. In this work, a flocculation system is presented as a cost- and energy-effective process to increase biomass production of Phaeodactylum tricornutum. This diatom is the only species of the genus that presents the fast growth and lipid accumulation ability that are of great interest for biofuel production. The algae, selected from the Bank of Microalgae, Institute of Biology, Federal University of Bahia (Brazil), were grown in a tubular reactor with a photoperiod of 12 h (light/dark), providing a luminance of about 35 μmol photons m⁻² s⁻¹, and a temperature of 22 °C. The medium used for growing the cells was Conway medium with the addition of silica. The growth curve was followed by cell counts in a Neubauer chamber and by optical density in a spectrophotometer at 680 nm. Precipitation (harvesting) was carried out at the end of the stationary growth phase, 21 days after inoculation, using two methods: centrifugation at 5000 rpm for 5 min, and electro-flocculation at 19 EPD and 95 W. After precipitation, cells were frozen at -20 °C and subsequently lyophilized. The biomass obtained by electro-flocculation was approximately four times greater than that achieved by centrifugation. The benefits of this method are that no addition of chemical flocculants is necessary and similar cultivation conditions can be used for biodiesel production and for pharmacological purposes. The results may contribute to reducing biodiesel production costs using marine microalgae.

Keywords: Biomass, diatom, flocculation, microalgae.

104 Development of a Telemedical Network Supporting an Automated Flow Cytometric Analysis for the Clinical Follow-up of Leukaemia

Authors: Claude Takenga, Rolf-Dietrich Berndt, Erling Si, Markus Diem, Guohui Qiao, Melanie Gau, Michael Brandstoetter, Martin Kampel, Michael Dworzak

Abstract:

In patients with acute lymphoblastic leukaemia (ALL), treatment response is increasingly evaluated with minimal residual disease (MRD) analyses. Flow cytometry (FCM) is a fast and sensitive method to detect MRD. However, the interpretation of these multi-parametric data requires intensive operator training and experience. This paper presents a pipeline software as a ready-to-use FCM-based MRD-assessment tool for daily clinical practice for patients with ALL. The new tool increases the accuracy of FCM-MRD assessment in samples which are difficult to analyse by conventional operator-based gating, since computer-aided analysis potentially has a superior resolution due to utilization of the whole multi-parametric FCM data space at once instead of step-wise, two-dimensional plot-based visualization. The system, developed as a telemedical network, reduces the workload and lab costs, the staff time needed for training, continuous quality control, and operator-based data interpretation. It allows dissemination of automated FCM-MRD analysis to medical centres which have no established expertise, for the benefit of an even larger community of diseased children worldwide. We established a telemedical network system for the analysis, clinical follow-up and treatment monitoring of leukaemia. The system is scalable and adapted to link several centres and laboratories worldwide.

Keywords: Data security, flow cytometry, leukaemia, telematics platform, telemedicine.

103 Efficient Compact Micro DBD Plasma Reactor for Ozone Generation for Industrial Application in Liquid and Gas Phase Systems

Authors: D. Kuvshinov, A. Siswanto, J. Lozano-Parada, W. B. Zimmerman

Abstract:

Ozone is well known as a powerful, fast-reacting oxidant. Ozone-based processes leave no residual by-products, as unreacted ozone decomposes to molecular oxygen. An application of ozone is therefore widely accepted as one of the main approaches to the development of sustainable and clean technologies.

A number of technologies require ozone to be delivered to specific points of a production network or reactor construction. Due to space constraints, high reactivity and the short lifetime of ozone, the use of ozone generators, even at bench-top scale, is practically limited. This calls for the development of a mini/micro-scale ozone generator which can be directly incorporated into production units.

Our report presents a feasibility study of a new micro-scale reactor for ozone generation (MROG). Data on MROG calibration and indigo decomposition at different operating conditions are presented.

At the selected operating conditions, with a residence time of 0.25 s, the ozone generation process is not limited by the reaction rate, and the amount of ozone produced is a function of the power applied. It was shown that the MROG is capable of producing ozone at voltage levels starting from 3.5 kV, with an ozone concentration of 5.28×10⁻⁶ mol/L at 5 kV. This is in line with the data presented in the numerical investigation of the MROG. It was shown that, compared to a conventional ozone generator, the MROG has lower power consumption at low voltages and atmospheric pressure.

The MROG construction makes it applicable to both submerged and dry systems. With its robust, compact design, the MROG can be used as an integrated module in production lines of high complexity.

Keywords: DBD, micro reactor, ozone, plasma.

102 Comparative Analysis of Control Techniques Based Sliding Mode for Transient Stability Assessment for Synchronous Multicellular Converter

Authors: Rihab Hamdi, Amel Hadri Hamida, Fatiha Khelili, Sakina Zerouali, Ouafae Bennis

Abstract:

This paper presents a comparative performance study of sliding mode control (SMC) for closed-loop voltage control of a direct current to direct current (DC-DC) three-cell buck converter connected in parallel and operating in continuous conduction mode (CCM): SMC based on pulse-width modulation (PWM) is compared with SMC based on hysteresis modulation (HM) in which an adaptive feedforward technique is adopted. On the one hand, the PWM-based SMC incorporates a fixed-frequency PWM scheme which is effectively a variant of SM control. On the other hand, the HM-based SMC introduces an adaptive feedforward control that makes the hysteresis band variable in the hysteresis modulator of the SM controller, in order to restrict the switching frequency variation in the case of any change of the line input voltage or output load. The results obtained under load change, input change and reference change clearly demonstrate a similar dynamic response for both proposed techniques; both achieve fast and smooth tracking of the desired output voltage. The PWM-based SM technique greatly improves the dynamic behavior, with a slight advantage over the HM-based SM technique, and provides stability in all operating conditions. Simulation studies in the MATLAB/Simulink environment have been performed to verify the concept.

Keywords: Sliding mode control, pulse-width modulation, hysteresis modulation, DC-DC converter, parallel multi-cells converter, robustness.

101 Identification of Factors Influencing Company's Competitiveness

Authors: D. Ščeulovs, E. Gaile-Sarkane

Abstract:

Fast development of technologies, economic globalization and many other external circumstances stimulate companies' competitiveness. One of the major trends in today's business is the shift to the exploitation of the Internet and the electronic environment for entrepreneurial needs. The latest research confirms that the e-environment provides a range of possibilities and opportunities for companies, especially for micro-, small- and medium-sized companies, which have limited resources. The usage of e-tools raises the effectiveness and the profitability of an organization, as well as its competitiveness. In the electronic market, as in the classic one, there are factors, such as globalization, development of new technology, price-sensitive consumers, the Internet, and new distribution and communication channels, that influence entrepreneurship. As a result of e-environment development, e-commerce and e-marketing grow as well.

Objective of the paper: To describe and identify factors influencing company’s competitiveness in e-environment.

Research methodology: The authors employ well-established quantitative and qualitative research methods: grouping, analysis, statistical methods, factor analysis in the SPSS 20 environment, etc. The theoretical and methodological background of the research is formed from scientific publications, mass media and professional literature, statistical information from legal institutions, and information collected by the authors during the surveying process. Research result: The authors detected and classified factors influencing competitiveness in the e-environment.

In this paper, the authors present their findings based on theoretical, scientific, and field research. The authors conducted a survey on e-environment utilization among Latvian enterprises.

Keywords: Competitiveness, e-environment, factors, factor analysis.

100 Applying Multiple Kinect on the Development of a Rapid 3D Mannequin Scan Platform

Authors: Shih-Wen Hsiao, Yi-Cheng Tsao

Abstract:

In the fields of reverse engineering and the creative industries, applying a 3D scanning process to obtain the geometric form of an object is a mature and common technique. For instance, organic objects such as faces and non-organic objects such as products can be scanned to acquire geometric information for further application. However, although the data resolution of 3D scanning devices is increasing and complementary applications are ever more abundant, the penetration rate of 3D scanning among the public is still limited by the relatively high price of the devices. On the other hand, Kinect, released by Microsoft, is known for its powerful functions, considerably lower price, and complete technology and database support. Therefore, related studies can be done with Kinect at acceptable cost and data precision. Because Kinect uses an optical mechanism to extract depth information, limitations arise from the straight path of the light. Thus, when a single Kinect is used for 3D scanning, various angles are required sequentially to obtain the complete 3D information of the object, and an integration process which combines the 3D data from different angles by certain algorithms is also required. This sequential scanning process costs much time, and the complex integration process often encounters technical problems. Therefore, this paper applies multiple Kinects simultaneously to develop a rapid 3D mannequin scan platform and proposes suggestions on the number and angles of the Kinects. A method of establishing the coordinate system based on the relation between the mannequin and the specifications of the Kinect is proposed, and a suggestion for the angles and number of Kinects is described. An experiment applying multiple Kinects to the scanning of a 3D mannequin was constructed with the Microsoft API, and the results show that the time required for scanning and the technical threshold can be reduced in the fashion and garment design industries.

Keywords: 3D scan, depth sensor, fashion and garment design, mannequin, multiple kinect sensor.

99 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things

Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker

Abstract:

Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect a change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties and will sometimes be conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict, in order to achieve fast change detection and deal effectively with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Then mass functions are calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we apply the method to estimate the minimum number of sensors that need to be combined, so that computational efficiency can be improved. A cumulative sum test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
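A heavily simplified sketch of the chain described in the abstract (per-sensor evidence from KL distances, Dempster combination across sensors, and a CUSUM-style test on the pignistic probability ratio) is given below; the Gaussian models, the way masses are built from the KL distances, and the threshold are placeholders, not the paper's formulation.

```python
# Simplified evidence-theory change detection on two synthetic sensor streams.
import numpy as np

def kl_gauss(mu0, var0, mu1, var1):
    """KL divergence between 1-D Gaussians: KL(N(mu0,var0) || N(mu1,var1))."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

def sensor_masses(window, pre=(0.0, 1.0), post=(1.5, 1.0)):
    """Turn the window's closeness to the pre/post-change models into a mass function."""
    mu, var = window.mean(), window.var() + 1e-6
    d_pre, d_post = kl_gauss(mu, var, *pre), kl_gauss(mu, var, *post)
    w = np.exp(-np.array([d_pre, d_post]))
    w = 0.9 * w / w.sum()                       # keep 0.1 mass on "unknown" (whole frame)
    return {"pre": w[0], "post": w[1], "unknown": 0.1}

def dempster(m1, m2):
    """Dempster's rule on the tiny frame {pre, post} with an 'unknown' element."""
    k = m1["pre"] * m2["post"] + m1["post"] * m2["pre"]        # conflicting mass
    def combine(a):
        return m1[a] * m2[a] + m1[a] * m2["unknown"] + m1["unknown"] * m2[a]
    m = {a: combine(a) / (1 - k) for a in ("pre", "post")}
    m["unknown"] = m1["unknown"] * m2["unknown"] / (1 - k)
    return m

rng = np.random.default_rng(0)
stream1 = np.concatenate([rng.normal(0, 1, 200), rng.normal(1.5, 1, 200)])
stream2 = np.concatenate([rng.normal(0, 1, 200), rng.normal(1.5, 1, 200)])

score, threshold = 0.0, 8.0
for t in range(30, stream1.size):
    m = dempster(sensor_masses(stream1[t - 30:t]), sensor_masses(stream2[t - 30:t]))
    pignistic_post = m["post"] + 0.5 * m["unknown"]
    pignistic_pre = m["pre"] + 0.5 * m["unknown"]
    score = max(0.0, score + np.log(pignistic_post / pignistic_pre))  # CUSUM-style update
    if score > threshold:
        print("change declared at sample", t)
        break
```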

Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data.

98 Iris Recognition Based On the Low Order Norms of Gradient Components

Authors: Iman A. Saad, Loay E. George

Abstract:

The iris pattern is an important biometric feature of the human body and has become a very active topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition, and a simple, efficient and fast method is introduced to extract a set of discriminatory features using a first-order gradient operator applied to grayscale images. The gradient-based features are robust, to a certain extent, against variations that may occur in the contrast or brightness of iris image samples; these variations are mostly due to lighting differences and camera changes. First, the iris region is located; after that, it is remapped to a rectangular area of size 360x60 pixels. A new method is also proposed for detecting eyelash and eyelid points; it relies on statistical analysis of the image to mark the eyelash and eyelid as noise points. In order to accommodate variation in feature localization, the rectangular iris image is partitioned into N overlapped sub-images (blocks); then, from each block, a set of average directional gradient density values is calculated to be used as the texture feature vector. The gradient operators are applied along the horizontal, vertical and diagonal directions, and the low-order norms of the gradient components are used to establish the feature vector. A Euclidean-distance-based classifier is used as the matching metric for determining the degree of similarity between the feature vector extracted from the tested iris image and the template feature vectors stored in the database. Experimental tests were performed using 2639 iris images from the CASIA V4-Interval database; the attained recognition accuracy reached 99.92%.
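The block-wise gradient feature and Euclidean matching can be sketched roughly as below; the block layout, the choice of mean absolute gradient as the low-order norm, and the random test arrays are assumptions, and the iris localisation, unwrapping and eyelash masking steps from the paper are not reproduced.

```python
# Bare-bones sketch of block-averaged directional gradient features with Euclidean matching.
import numpy as np

def block_gradient_features(iris_strip, block=(20, 30)):
    """iris_strip: 60 x 360 unwrapped grayscale iris. Returns average gradient
    magnitudes (horizontal, vertical, diagonal) per block as one feature vector."""
    gy, gx = np.gradient(iris_strip.astype(np.float32))
    gd = (gx + gy) / np.sqrt(2)                      # crude diagonal component
    feats = []
    for r in range(0, iris_strip.shape[0], block[0]):
        for c in range(0, iris_strip.shape[1], block[1]):
            for g in (gx, gy, gd):
                patch = np.abs(g[r:r + block[0], c:c + block[1]])
                feats.append(patch.mean())           # low-order (L1-like) norm per block
    return np.array(feats)

def match(probe, template):
    return np.linalg.norm(probe - template)          # Euclidean metric, smaller = more similar

probe = block_gradient_features(np.random.randint(0, 256, (60, 360)))
template = block_gradient_features(np.random.randint(0, 256, (60, 360)))
print("distance:", match(probe, template))
```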

Keywords: Iris recognition, contrast stretching, gradient features, texture features, Euclidean metric.

97 Numerical Analysis of Pressure Admission Angle to Vane Angle Ratios on Performance of a Vaned Type Novel Air Turbine

Authors: B.R. Singh, O. Singh

Abstract:

Worldwide conventional fossil fuel resources are depleting very fast due to the large-scale increase in the use of transport vehicles every year; the consumption rate of oil in the transport sector alone has therefore become very high. In view of this, the major thrust has now been laid upon the search for alternative energy sources and for cost-effective energy conversion systems. Air compressed by conventional or non-conventional methods can be utilized as a potential working fluid for producing shaft work in an air turbine, thus offering the capability of a zero-pollution energy source. This paper deals with the mathematical modeling and performance evaluation of a small-capacity, compressed-air-driven, vaned-type novel air turbine. The effects of expansion action and steady flow work in the air turbine at a high admission air pressure of 6 bar are quantified and analyzed for injection-to-vane-angle ratios of 0.2-1.6 in intervals of 0.2, and at vane angles of 30°, 45°, 51.4°, 60°, 72°, 90°, and 120° corresponding to 12, 8, 7, 6, 5, 4 and 3 vanes respectively, at a rotational speed of 2500 rpm. The study shows that the expansion power makes the major contribution to the total power, whereas the contribution of the flow work output varies only up to 19.4%. It is also concluded that for injection-to-vane-angle ratios from 0.2 to 1.2 the optimal power output is seen at a vane angle of 90° (4 vanes), and for ratios of 1.4 to 1.6 the optimal total power is observed at a vane angle of 72° (5 vanes). Thus, in the vaned-type novel air turbine, the optimum shaft power output is developed when the rotor contains 4-5 vanes for almost all injection-to-vane-angle ratios from 0.2 to 1.6.

Keywords: Zero pollution, compressed air, air turbine, vane angle, injection to vane angle ratio.

96 Reduced Rule Based Fuzzy Logic Controlled Isolated Bidirectional Converter Operating in Extended Phase Shift Control for Bidirectional Energy Transfer

Authors: Anupam Kumar, Abdul Hamid Bhat, Pramod Agarwal

Abstract:

Bidirectional energy transfer capability with high efficiency and reduced cost is fast gaining prominence at the heart of many power conversion systems in the direct current (DC) microgrid. Under economic constraints, these systems preferably utilise a single high-efficiency power electronic conversion stage, the dual active bridge converter. In this paper, the modeling and performance of the dual active bridge (DAB) converter with extended phase shift (EPS) control are evaluated with two batteries on both sides of the DC bus, bidirectional energy transfer is facilitated, and this is further compared with the single phase shift (SPS) mode of operation. The optimum operating zone is identified through exhaustive simulations using MATLAB/Simulink and SimPowerSystems software. A reduced-rule-base fuzzy logic controller is implemented for closed-loop control of the DAB converter. The control logic enables bidirectional energy transfer between the batteries even at lower duty ratios. Charging and discharging of the batteries are supervised by the fuzzy logic controller. The state of charge, current and voltage for both batteries are plotted in the battery characteristics, and the power characteristics of the batteries are also obtained using MATLAB simulations.

Keywords: Fuzzy logic controller, rule base, membership functions, dual active bridge converter, bidirectional power flow, duty ratio, extended phase shift, state of charge.

95 Space Telemetry Anomaly Detection Based on Statistical PCA Algorithm

Authors: B. Nassar, W. Hussein, M. Mokhtar

Abstract:

The critical concern of satellite operations is to ensure the health and safety of satellites. The worst case in this perspective is probably the loss of a mission, but the more common interruption of satellite functionality can also result in compromised mission objectives. All the data acquired from the spacecraft are known as telemetry (TM), which contains a wealth of information related to the health of all its subsystems. Each single item of information is contained in a telemetry parameter, which represents a time-variant property (i.e. a status or a measurement) to be checked. As a consequence, TM monitoring systems are continuously improved to reduce the time required to respond to changes in a satellite's state of health. A fast conception of the current state of the satellite is thus very important for responding to occurring failures. Statistical multivariate latent techniques are among the vital learning tools used to tackle the above problem coherently. Information extraction from such rich data sources using advanced statistical methodologies is a challenging task due to the massive volume of data. To solve this problem, in this paper we present a proposed unsupervised learning algorithm based on the Principal Component Analysis (PCA) technique. The algorithm is applied to an actual remote sensing spacecraft. Data from the Attitude Determination and Control System (ADCS) were acquired under two operating conditions: normal and faulty states. The models were built and tested under these conditions, and the results show that the algorithm could successfully differentiate between these operating conditions. Furthermore, the algorithm provides competent information for prediction as well as adding more insight and physical interpretation to the ADCS operation.
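A conceptual sketch of PCA-based telemetry monitoring is given below using synthetic data and an assumed empirical control limit; the actual ADCS channels, model order, and thresholds used in the paper are not available from the abstract.

```python
# Conceptual PCA anomaly-detection sketch: fit on healthy telemetry, flag samples
# whose squared reconstruction error (SPE/Q statistic) exceeds an empirical limit.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
healthy = rng.normal(size=(500, 6))                      # 6 telemetry parameters, normal mode
faulty = healthy[-50:].copy()
faulty[:, 2] += 4.0                                      # inject a bias on one channel

scaler = StandardScaler().fit(healthy)
pca = PCA(n_components=3).fit(scaler.transform(healthy))

def spe(samples):
    """Squared prediction error of each sample against the PCA subspace."""
    z = scaler.transform(samples)
    recon = pca.inverse_transform(pca.transform(z))
    return ((z - recon) ** 2).sum(axis=1)

threshold = np.percentile(spe(healthy), 99)              # simple empirical control limit
print("faulty samples flagged:", int((spe(faulty) > threshold).sum()), "of", len(faulty))
```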

Keywords: Space telemetry monitoring, multivariate analysis, PCA algorithm, space operations.
