Search results for: minimal.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 211

31 Efficient Design Optimization of Multi-State Flow Network for Multiple Commodities

Authors: Yu-Cheng Chou, Po Ting Lin

Abstract:

The network of delivering commodities has been an important design problem in our daily lives and many transportation applications. The delivery performance is evaluated based on the system reliability of delivering commodities from a source node to a sink node in the network. The system reliability is thus maximized to find the optimal routing. However, the design problem is not simple because (1) each path segment has randomly distributed attributes; (2) there are multiple commodities that consume various path capacities; (3) the optimal routing must successfully complete the delivery process within the allowable time constraints. In this paper, we focus on the design optimization of the Multi-State Flow Network (MSFN) for multiple commodities. We propose an efficient approach to evaluate the system reliability in the MSFN with respect to randomly distributed path attributes and find the optimal routing subject to the allowable time constraints. The delivery rates, also known as delivery currents, of the path segments are evaluated and the minimal-current arcs are eliminated to reduce the complexity of the MSFN. Accordingly, the correct optimal routing is found and the worst-case reliability is evaluated. It has been shown that the reliability of the optimal routing is at least as high as the worst-case measure. Two benchmark examples are utilized to demonstrate the proposed method. The comparisons between the original and the reduced networks show that the proposed method is very efficient.
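
As an illustration of the arc-elimination idea, the sketch below prunes low-current arcs and then searches the reduced network for the most reliable routing under a delivery-time limit. The arc attributes, the pruning threshold and the exhaustive path search are illustrative assumptions, not the authors' procedure.

# Hedged sketch: prune "minimal-current" arcs, then pick the most reliable
# source-to-sink routing that still meets the allowable delivery time.
def reduce_network(arcs, min_current):
    """Drop arcs whose delivery current (capacity) is below the threshold."""
    return [a for a in arcs if a["capacity"] >= min_current]

def best_routing(arcs, source, sink, max_time):
    """Enumerate simple paths and keep the one with the highest reliability."""
    adj = {}
    for a in arcs:
        adj.setdefault(a["from"], []).append(a)

    best = (0.0, None)  # (reliability, path)

    def dfs(node, visited, rel, time, path):
        nonlocal best
        if time > max_time:
            return
        if node == sink:
            if rel > best[0]:
                best = (rel, list(path))
            return
        for a in adj.get(node, []):
            if a["to"] not in visited:
                dfs(a["to"], visited | {a["to"]},
                    rel * a["reliability"], time + a["time"], path + [a])

    dfs(source, {source}, 1.0, 0.0, [])
    return best

arcs = [  # hypothetical 4-node network
    {"from": "s", "to": "a", "capacity": 5, "reliability": 0.95, "time": 2},
    {"from": "s", "to": "b", "capacity": 1, "reliability": 0.99, "time": 1},
    {"from": "a", "to": "t", "capacity": 4, "reliability": 0.90, "time": 3},
    {"from": "b", "to": "t", "capacity": 1, "reliability": 0.80, "time": 4},
]
reduced = reduce_network(arcs, min_current=2)   # removes the low-current s-b-t route
print(best_routing(reduced, "s", "t", max_time=6))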

Keywords: Multiple Commodities, Multi-State Flow Network (MSFN), Time Constraints, Worst-Case Reliability (WCR)

30 Overview Studies of High Strength Self-Consolidating Concrete

Authors: Raya Harkouss, Bilal Hamad

Abstract:

Self-Consolidating Concrete (SCC) is considered a relatively new technology, created as an effective solution to problems associated with low-quality consolidation. A SCC mix is defined as successful if it flows freely and cohesively without the intervention of mechanical compaction. The construction industry shows a strong tendency to use SCC in many contemporary projects to benefit from the various advantages offered by this technology.

At this point, a main question is raised regarding the effect of enhanced fluidity of SCC on the structural behavior of high strength self-consolidating reinforced concrete.

A three-phase research program was conducted at the American University of Beirut (AUB) to address this concern. The first two phases consisted of comparative studies conducted on concrete and mortar mixes prepared with a second-generation Sulphonated Naphthalene-based superplasticizer (SNF) or a third-generation Polycarboxylate Ethers-based superplasticizer (PCE). The third phase of the research program investigates and compares the structural performance of high strength reinforced concrete beam specimens prepared with the two different generations of superplasticizers, which formed the only variable between the concrete mixes. The beams were designed to test and exhibit flexure, shear, or bond splitting failure.

The outcomes of the experimental work revealed comparable resistance of beam specimens cast using self-compacting concrete and conventional vibrated concrete. The dissimilarities in the experimental values between the SCC and the control VC beams were minimal, leading to the conclusion that the high consistency of SCC has little effect on the flexural, shear and bond strengths of concrete members.

Keywords: Self-consolidating concrete (SCC), high-strength concrete, concrete admixtures, mechanical properties of hardened SCC, structural behavior of reinforced concrete beams.

29 A Novel Neighborhood Defined Feature Selection on Phase Congruency Images for Recognition of Faces with Extreme Variations

Authors: Satyanadh Gundimada, Vijayan K Asari

Abstract:

A novel feature selection strategy to improve the recognition accuracy on faces that are affected by nonuniform illumination, partial occlusions and varying expressions is proposed in this paper. This technique is applicable especially in scenarios where the possibility of obtaining a reliable intra-class probability distribution is minimal due to a small number of training samples. Phase congruency features in an image are defined as the points where the Fourier components of that image are maximally in phase. These features are invariant to the brightness and contrast of the image under consideration. This property makes lighting-invariant face recognition achievable. Phase congruency maps of the training samples are generated and a novel modular feature selection strategy is implemented. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are arranged in the order of increasing distance between the sub-regions involved in merging. The assumption behind the proposed implementation of the region merging and arrangement strategy is that local dependencies among the pixels are more important than global dependencies. The obtained feature sets are then arranged in decreasing order of discriminating capability using a criterion function, which is the ratio of the between-class variance to the within-class variance of the sample set, in the PCA domain. The results indicate a marked improvement in classification performance compared to baseline algorithms.
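
A minimal sketch of the ranking criterion described above, the ratio of between-class to within-class variance computed in the PCA domain, is given below. The data shapes, class labels and candidate feature subsets are assumed values for illustration only.

import numpy as np

def discriminability(features, labels):
    """Return between-class variance / within-class variance for one feature set."""
    overall_mean = features.mean(axis=0)
    between, within = 0.0, 0.0
    for c in np.unique(labels):
        cls = features[labels == c]
        between += len(cls) * np.sum((cls.mean(axis=0) - overall_mean) ** 2)
        within += np.sum((cls - cls.mean(axis=0)) ** 2)
    return between / within

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))          # 60 phase-congruency feature vectors (mock)
y = np.repeat(np.arange(6), 10)         # 6 subjects, 10 samples each
X += y[:, None] * 0.3                   # inject some class structure

# Project to the PCA domain, then rank candidate feature subsets by the criterion.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:20].T                      # keep 20 principal components

subsets = [slice(0, 5), slice(5, 10), slice(10, 15)]   # hypothetical merged regions
ranked = sorted(subsets, key=lambda s: discriminability(Z[:, s], y), reverse=True)
print([(s.start, s.stop) for s in ranked])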

Keywords: Discriminant analysis, intra-class probability distribution, principal component analysis, phase congruency.

28 Modelling and Simulating CO2 Electro-Reduction to Formic Acid Using Microfluidic Electrolytic Cells: The Influence of Bi-Sn Catalyst and 1-Ethyl-3-Methyl Imidazolium Tetra-Fluoroborate Electrolyte on Cell Performance

Authors: Akan C. Offong, E. J. Anthony, Vasilije Manovic

Abstract:

A modified steady-state numerical model is developed for the electrochemical reduction of CO2 to formic acid. The numerical model achieves a current density (CD) of ~60 mA/cm2, a faradaic efficiency (FE) of ~98% and a conversion of ~80% for CO2 electro-reduction to formic acid in a microfluidic cell. The model integrates charge and species transport, mass conservation, and momentum with electrochemistry. Specifically, the influences of a Bi-Sn based nanoparticle catalyst (on the cathode surface) at different mole fractions and of the 1-ethyl-3-methyl imidazolium tetra-fluoroborate ([EMIM][BF4]) electrolyte on CD, FE and CO2 conversion to formic acid are studied. The reaction is carried out at a constant concentration of electrolyte (85% v/v., [EMIM][BF4]). Based on the mass transfer characteristics analysis (concentration contours), the 0.5:0.5 mole ratio Bi-Sn catalyst displays the highest CO2 mole consumption in the cathode gas channel. After validating with experimental data (polarisation curves) from the literature, extensive simulations reveal the performance measures: CD, FE and CO2 conversion. Increasing the negative cathode potential increases the current densities for both formic acid and H2 formation. However, H2 formation is minimal as a result of insufficient hydrogen ions in the ionic liquid electrolyte. Moreover, the limited hydrogen ions have a negative effect on the formic acid CD. As the CO2 flow rate increases, CD, FE and CO2 conversion increase.
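
For orientation, the three quoted performance measures can be checked with textbook definitions, assuming a two-electron reduction of CO2 to formate; the geometry, time and mole values below are placeholders, not the paper's model inputs.

F = 96485.0          # Faraday constant, C/mol
z = 2                # electrons transferred per HCOOH molecule

electrode_area_cm2 = 1.0
current_A = 0.060                       # 60 mA over 1 cm^2 -> CD = 60 mA/cm^2
current_density = 1000 * current_A / electrode_area_cm2
print(f"CD = {current_density:.0f} mA/cm2")

# Faradaic efficiency: fraction of the passed charge that ends up in the product.
t_s = 3600.0
n_formate_mol = 1.1e-3                  # assumed product formed in one hour
FE = z * F * n_formate_mol / (current_A * t_s)
print(f"FE = {100 * FE:.1f} %")

# CO2 conversion: product formed relative to CO2 fed to the cathode channel.
co2_fed_mol = 1.4e-3                    # assumed feed over the same hour
print(f"X  = {100 * n_formate_mol / co2_fed_mol:.1f} %")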

Keywords: Carbon dioxide, electro-chemical reduction, microfluidics, ionic liquids, modelling.

27 Development of a Telemedical Network Supporting an Automated Flow Cytometric Analysis for the Clinical Follow-up of Leukaemia

Authors: Claude Takenga, Rolf-Dietrich Berndt, Erling Si, Markus Diem, Guohui Qiao, Melanie Gau, Michael Brandstoetter, Martin Kampel, Michael Dworzak

Abstract:

In patients with acute lymphoblastic leukaemia (ALL), treatment response is increasingly evaluated with minimal residual disease (MRD) analyses. Flow Cytometry (FCM) is a fast and sensitive method to detect MRD. However, the interpretation of these multi-parametric data requires intensive operator training and experience. This paper presents pipeline software as a ready-to-use FCM-based MRD-assessment tool for daily clinical practice for patients with ALL. The new tool increases accuracy in the assessment of FCM-MRD in samples which are difficult to analyse by conventional operator-based gating, since computer-aided analysis potentially has a superior resolution due to utilization of the whole multi-parametric FCM data space at once instead of step-wise, two-dimensional plot-based visualization. The system, developed as a telemedical network, reduces the workload, laboratory costs, and the staff time needed for training, continuous quality control, and operator-based data interpretation. It allows dissemination of automated FCM-MRD analysis to medical centres which have no established expertise, for the benefit of an even larger community of diseased children worldwide. We established a telemedical network system for the analysis, clinical follow-up and treatment monitoring of leukaemia. The system is scalable and adapted to link several centres and laboratories worldwide.
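
As a toy illustration of analysing the whole multi-parametric FCM data space at once (rather than stepwise 2-D gating), the sketch below fits a Gaussian mixture to synthetic events and reads the minority component as a candidate MRD population. The data, the two-component model and the "smaller cluster equals MRD" rule are assumptions; this is not the pipeline described in the paper.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
normal = rng.normal(loc=[2, 2, 2, 2], scale=0.5, size=(9900, 4))   # normal cells
blasts = rng.normal(loc=[4, 1, 4, 3], scale=0.4, size=(100, 4))    # residual blasts
events = np.vstack([normal, blasts])    # 4 fluorescence/scatter parameters per event

gm = GaussianMixture(n_components=2, random_state=0).fit(events)
labels = gm.predict(events)

# The smaller component is taken as the candidate MRD population (an assumption).
mrd_component = np.argmin(np.bincount(labels))
mrd_fraction = np.mean(labels == mrd_component)
print(f"estimated MRD: {100 * mrd_fraction:.2f} % of events")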

Keywords: Data security, flow cytometry, leukaemia, telematics platform, telemedicine.

26 Specific Biomarker Level and Function Outcome Changes in Treatment of Patients with Frozen Shoulder Using Dextrose Prolotherapy Injection

Authors: Nuralam Sam, Irawan Yusuf, Irfan Idris, Endi Adnan

Abstract:

Frozen shoulder (FS) is an insidious, painful condition caused by an inflammatory process that leads to fibrosis of the glenohumeral joint capsule, which causes progressive stiffness and restriction of the active and passive range of motion (ROM) of the shoulder. Studies of FS are still limited. This single-blinded randomized controlled trial involved participants with FS. The study participants were divided into two groups. The prolotherapy group was the study group, and the normal saline (NS) group was the control group. Both groups were given injections at weeks 0, 2, 4, and 6. Matrix Metalloproteinase-1 (MMP-1) and Tissue Inhibitor of Metalloproteinase-1 (TIMP-1) were measured at week 6 and week 12 after the last injection. The Disabilities of the Arm, Shoulder, and Hand (DASH) score and ROM were measured at weeks 0, 2, 4, and 6 before and after injection and at week 12. Comparative analysis was performed using repeated-measures paired t-tests, and correlations were assessed using ANOVA. The results showed a significant decrease in the DASH score in prolotherapy injection patients in each measurement week (p < 0.05). For ROM, each direction of shoulder motion showed a significant difference in the weekly averages from week 0 to week 6 (p < 0.05). Dextrose prolotherapy injections significantly improved the functional outcome of the shoulder joint and the ROM. They did not show significant results for the specific tissue-repair biomarkers, MMP-1 and TIMP-1. This study suggests prolotherapy injection as an alternative for FS patients; it has minimal adverse effects and is efficient in time and cost.

Keywords: Frozen Shoulder, ROM, DASH Score, prolotherapy, MMP-1, TIMP-1.

25 Analysis of Pressure Drop in a Concentrated Solar Collector with Direct Steam Production

Authors: Sara Sallam, Mohamed Taqi, Naoual Belouaggadia

Abstract:

Solar thermal power plants using parabolic trough collectors (PTC) are currently a powerful technology for generating electricity. Most of these solar power plants use thermal oils as the heat transfer fluid. The latter is heated in the solar field and transfers the absorbed heat in an oil-water heat exchanger for the production of steam driving the turbines of the power plant. Currently, we are seeking to develop PTCs with direct steam generation (DSG). This process consists of circulating water under pressure in the receiver tube to generate steam directly in the solar loop. This makes it possible to reduce the investment and maintenance costs of the PTCs (the oil-water exchangers are removed) and to avoid the environmental risks associated with the use of thermal oils. The pressure drops in these systems are an important parameter to ensure their proper operation. The determination of these losses is complex because of the presence of the two phases, and most often we limit ourselves to describing them by models using empirical correlations. A comparison of these models with experimental data was performed. Our calculations focused on the evolution of the pressure of the liquid-vapor mixture along the receiver tube of a PTC-DSG for pressure values and inlet flow rates ranging respectively from 3 to 10 MPa and from 0.4 to 0.6 kg/s. The comparison of the numerical results with experimental data allows us to demonstrate the validity of some models depending on the inlet pressures and flow rates in the PTC-DSG receiver tube. The analysis of the effects of these two parameters on the evolution of the pressure along the receiver tube shows that increasing the inlet pressure and decreasing the flow rate lead to minimal pressure losses.
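
As one example of the class of empirical two-phase models mentioned above, the sketch below evaluates the frictional pressure gradient with a homogeneous mixture model (McAdams viscosity, Blasius friction factor). The fluid properties, quality and tube geometry are assumed values, not the paper's operating conditions.

def homogeneous_dp_dz(G, x, rho_l, rho_g, mu_l, mu_g, D):
    """Frictional pressure gradient (Pa/m) for mass flux G (kg/m2 s) and quality x."""
    rho_h = 1.0 / (x / rho_g + (1.0 - x) / rho_l)        # homogeneous density
    mu_h = 1.0 / (x / mu_g + (1.0 - x) / mu_l)           # McAdams mixture viscosity
    Re = G * D / mu_h
    f = 0.079 * Re ** -0.25                              # Blasius friction factor
    return 2.0 * f * G ** 2 / (D * rho_h)

A = 3.1416 * 0.05 ** 2 / 4                               # assumed 50 mm inner diameter
G = 0.5 / A                                              # 0.5 kg/s through the tube
print(homogeneous_dp_dz(G, x=0.2, rho_l=740.0, rho_g=50.0,
                        mu_l=9e-5, mu_g=2e-5, D=0.05), "Pa/m")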

Keywords: Direct steam generation, parabolic trough collectors, pressure drop.

24 Dental Ethics versus Malpractice, as Phenomenon with a Growing Trend

Authors: Saimir Heta, Kers Kapaj, Rialda Xhizdari, Ilma Robo

Abstract:

Dealing with emerging cases of dental malpractice justified by the clear rules of dental ethics is a phenomenon with an increasing trend in today's dental practice. Dentists should clearly understand where the limit of malpractice lies, with minimal or major consequences for the affected patient, and when harm can be justified as a complication of dental treatment under the rules of dental ethics in the dental office. Indeed, malpractice can occur in cases of lack of professionalism, but it can also come as a consequence of anatomical and physiological limitations in the implementation of the dental protocols predetermined and indicated for the patient in the treatment-plan section of his personal card. Let this article serve as a short communication between readers and interested parties about the problems that dental malpractice can bring to the community. Malpractice should not be seen only as a wrong professional approach, but also as a phenomenon that can occur during dental practice. The aim of this article is to present the latest data published in the literature on malpractice. The keywords were combined so as to collect the relevant information from publication networks in this field, always first from the point of view of the dentist and not from that of the lawyer or jurist. The findings included in this article show that the diversity of approaches towards the phenomenon varies between countries depending on their legal basis. There is a lack of, or only a small number of, articles that touch on this topic, and these articles present a limited amount of data on it. Dental malpractice should not be hidden under the guise of various dental complications justified by the strict rules of ethics for patients treated in the dental chair. Individual experiences of dental malpractice must be published with the aim of serving as a source of experience for future generations of dentists.

Keywords: Dental ethics, malpractice, professional protocol, random deviation, dental tourism.

23 Soft Real-Time Fuzzy Task Scheduling for Multiprocessor Systems

Authors: Mahdi Hamzeh, Sied Mehdi Fakhraie, Caro Lucas

Abstract:

All practical real-time scheduling algorithms in multiprocessor systems present a trade-off between their computational complexity and performance. In real-time systems, tasks have to be performed correctly and on time. Finding a minimal schedule in multiprocessor systems with real-time constraints is shown to be NP-hard. Although some optimal algorithms have been employed in uniprocessor systems, they fail when they are applied to multiprocessor systems. The practical scheduling algorithms in real-time systems do not have deterministic response times. Deterministic timing behavior is an important parameter for system robustness analysis. The intrinsic uncertainty in dynamic real-time systems increases the difficulty of the scheduling problem. To alleviate these difficulties, we have proposed a fuzzy scheduling approach to arrange real-time periodic and non-periodic tasks in multiprocessor systems. Static and dynamic optimal scheduling algorithms fail with non-critical overload. In contrast, our approach balances the task loads of the processors successfully while considering starvation prevention and fairness, which give higher-priority tasks a higher running probability. A simulation is conducted to evaluate the performance of the proposed approach. Experimental results have shown that the proposed fuzzy scheduler creates feasible schedules for homogeneous and heterogeneous tasks. It also considers task priorities, which leads to higher system utilization and lower deadline miss times. According to the results, it performs very close to the optimal schedule of uniprocessor systems.
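
To make the idea of a fuzzy scheduling rule concrete, the sketch below infers a task priority from deadline laxity and processor load using triangular membership functions and assigns work to the least-loaded processor. The membership shapes, rule weights and task values are invented for illustration and are not the paper's rule base.

def tri(x, a, b, c):
    """Triangular membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_priority(laxity, load):
    """Higher priority for tight laxity; a heavily loaded processor damps the value."""
    tight = tri(laxity, -1.0, 0.0, 0.5)       # laxity near zero -> urgent
    loose = tri(laxity, 0.2, 1.0, 2.0)
    busy = tri(load, 0.5, 1.0, 1.5)
    rules = [(tight, 1.0), (loose, 0.3), (busy, -0.2)]
    num = sum(w * v for w, v in rules)        # weighted-average defuzzification
    den = sum(w for w, _ in rules) or 1.0
    return num / den

tasks = [("periodic_A", 0.1), ("aperiodic_B", 0.9)]      # (name, normalized laxity)
loads = [0.4, 0.8]                                        # two processors
for name, lax in sorted(tasks, key=lambda t: -fuzzy_priority(t[1], min(loads))):
    cpu = loads.index(min(loads))
    loads[cpu] += 0.2                                     # assign to least-loaded CPU
    print(f"{name} -> CPU{cpu}, priority {fuzzy_priority(lax, loads[cpu]):.2f}")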

Keywords: Computational complexity, Deadline, Feasible scheduling, Fuzzy scheduling, Priority, Real-time multiprocessor systems, Robustness, System utilization.

22 Applications of High Intensity Ultrasound to Modify Millet Protein Concentrate Functionality

Authors: B. Nazari, M. A. Mohammadifar, S. Shojaee-Aliabadi, L. Mirmoghtadaie

Abstract:

Millet, as a new source of plant protein, has not been used in food applications due to its poor functional properties. In this study, the effect of high intensity ultrasound (US) (frequency: 20 kHz, with continuous flow) at 100% amplitude for varying times (5, 12.5, and 20 min) on the solubility, emulsifying activity index (EAI), emulsion stability (ES), foaming capacity (FC), and foaming stability (FS) of millet protein concentrate (MPC) was evaluated. In addition, the structural properties of the best treatments, such as molecular weight and surface charge, were compared with the control sample to prove the US effect. The US treatments significantly (P<0.05) increased the solubility of the native MPC (65.8±0.6%) at all sonication times, with the maximum solubility recorded for the 12.5 min treatment (96.9±0.82%). The FC of MPC was also significantly affected by the US treatment. An increase in sonication time up to 12.5 min significantly increased the FC of native MPC (271.03±4.51 ml), but a further increase reduced it significantly. Minimal improvements were observed in the FS of all sonicated MPC compared to the native MPC. Sonication for 12.5 min affected the EAI and ES of the native MPC more markedly than 5 or 20 min, which may be attributed to a greater increase in the proteins' tendency to adsorb at the oil-water interface after the US treatment at this time. SDS-PAGE analysis showed changes in the molecular weight of MPC attributed to shearing forces created by the cavitation phenomenon. This phenomenon also caused an increase in the exposure of amino acids with negative charge on the surface of US-treated MPC, as demonstrated by Zetasizer data. High intensity ultrasound, as a green technology, can significantly improve the functional properties of MPC and make it usable for food applications.

Keywords: Millet protein concentrate, Functional properties, Structural properties, High intensity ultrasound.

21 Blueprinting of a Normalized Supply Chain Processes: Results in Implementing Normalized Software Systems

Authors: Bassam Istanbouli

Abstract:

With technology evolving every day and with the increase in global competition, industries are always under pressure to be the best. They need to provide good quality products at competitive prices, when and how the customer wants them. In order to achieve this level of service, products and their respective supply chain processes need to be flexible and evolvable; otherwise changes will be extremely expensive, slow and accompanied by many combinatorial effects. Those combinatorial effects impact the whole organizational structure from a management, financial, documentation and logistics perspective, and especially from the perspective of the Enterprise Resource Planning (ERP) information system. By applying the normalized systems concept/theory to segments of the supply chain, we believe these effects can be kept minimal, especially at the time of launching an organization-wide global software project. The purpose of this paper is to point out that if an organization wants to develop software from scratch or implement an existing ERP software for its business needs, and if its business processes are normalized and modular, then most probably this will yield a normalized and modular software system that can be easily modified when the business evolves. Another important goal of this paper is to increase awareness regarding the design of the business processes in a software implementation project. If the blueprints created are normalized, then the software developers and configurators will use those modular blueprints to map them into modular software. This paper only prepares the ground for further studies; the above concept will be supported by going through the steps of developing, configuring and/or implementing a software system for an organization using two methods: the Software Development Lifecycle method (SDLC) and the Accelerated SAP implementation method (ASAP). Both methods start with the customer requirements, then blueprinting of the business processes, and finally mapping those processes into a software system. Since those requirements and processes are the starting point of the implementation process, normalizing those processes will result in normalized software.

Keywords: Blueprint, ERP, SDLC, Modular.

20 Context, Challenges, Constraints and Strategies of Non-Profit Organisations in Responding to the Needs of Asylum Seekers and Refugees in Cape Town, South Africa

Authors: C. O’Brien, Chloe Reiss

Abstract:

While South Africa has been the chosen host country for over 1.2 million asylum seekers/refugees, it has at the same time been struggling to address the needs of its own people, who are still trapped in poverty with little prospect of employment. This limited exploratory, qualitative study was undertaken in Cape Town with a purposive sample of 21 key personnel from various NPOs providing a service to asylum seekers/refugees. Individual in-depth face-to-face interviews were carried out and the main findings were: some of the officials at the Department of Home Affairs, health personnel, landlords, school principals, employers, bank officials and police officers were prejudicial in their practices towards asylum seekers/refugees. The major constraints experienced by NPOs in this study were linked to a lack of funding and minimal government support, a strained relationship with the Department of Home Affairs and difficulties in accessing refugees. Finally, the strategies adopted by these NPOs included networking with other service providers, engaging in advocacy, raising community awareness and liaising with government. Thus, more focused intervention strategies are needed to build social cohesion, address the prejudices which fuel xenophobic attacks and raise awareness/educate various sectors about refugee rights. Given this burgeoning global problem, social work education and training should include curriculum content on migrant issues. Furthermore, larger studies using mixed-methodology approaches would yield more nuanced data and provide for more strategic interventions.

Keywords: Refugees and asylum seekers, non-profit organisations, refugee challenges, constraints of service delivery.

19 Sustainable Energy Production with Closed-Loop Methods: Evaluating the Influence of Power Plant Age on Production Efficiency and Environmental Impact

Authors: Bujar Ismaili, Bahti Ismajli, Venhar Ismaili, Skender Ramadani

Abstract:

In Kosovo, the problem with the electricity supply is huge, and supply does not meet the demands of consumers. Older thermal power plants, which are regarded as big environmental polluters, produce most of the energy. Our experiment is based on the production of electricity using a closed method that does not contribute to environmental pollution, by using waste, itself considered an environmental pollutant, as fuel. The experiment was carried out in the village of Godanc, municipality of Shtime, Kosovo. In the experiment, a production line for the simultaneous production of electricity and central heating was designed. The results are the generation of electricity as well as the release of heat for heating, with minimal expenses and with no gases released into the atmosphere. During this experiment, coal, plastic, waste from wood processing, and agricultural wastes were used as raw materials. The method utilized in the experiment allows for the release of gas through pipes and filters during the top-to-bottom combustion of the raw material in the boiler, followed by filtration of the gas through waste from wood processing (sawdust). During this process, the final product, gas, is obtained. This gas passes through the carburetor, enabling the combustion process that puts the internal combustion engine and the generator into operation and produces electricity without releasing gases into the atmosphere. The results show that the system provides energy stability without environmental pollution from toxic substances and waste, as well as low production costs. From the final results, it follows that in the case of coal fuel we obtained more electricity and a higher heat release, followed by plastic waste, which also gave good results. The results obtained during these experiments prove that the current problems of lack of electricity and heating can be addressed at a lower cost while keeping the environment clean and managing waste.

Keywords: Energy, heating, atmosphere, waste management, gasification.

18 Collective Redress in Consumer Protection in South East Europe: Cross-National Comparisons, Issues of Commonality and Difference

Authors: Veronika Efremova

Abstract:

In recent decades, there have been significant developments in the European Union in the field of collective consumer redress. The South East European (SEE) countries covered by this paper, in line with their EU accession priorities and duties under Stabilisation and Association Agreements, have to harmonize their national laws with the relevant EU acquis for consumer protection (Chapter 28: Health and Consumer). In these countries, only minimal compliance has been achieved. SEE countries have introduced rudimentary collective redress mechanisms, with modest enforcement of collective redress and case law. This paper is based on comprehensive interdisciplinary research conducted for SEE countries on common principles for injunctive and compensatory collective redress mechanisms, emphasizing cross-national comparisons, underlining issues of commonality and difference, and aiming to develop recommendations for adequate enforcement of collective redress. SEE countries are characterized by a sectoral approach to regulating collective redress, contrary to the majority of EU Member States, which have adopted a horizontal approach. In most SEE countries, the laws recognize only injunctive, not compensatory, collective redress in consumer protection. All stakeholders responsible for the implementation of collective redress in SEE countries lack information and awareness of collective redress mechanisms and the way they function in practice. Therefore, specific actions are needed in these countries to make the whole system of collective redress for consumer protection operational and efficient. Taking into consideration the various designated stakeholders in collective redress in each SEE country, their mutual coordination and cooperation are needed in order to develop consumer protection systems and policies. By putting the national collective redress mechanisms into practice, effective access to justice for all consumers and the principle of the rule of law will be secured, and appropriate procedural guarantees to avoid abusive litigation will be ensured.

Keywords: Collective redress mechanism, consumer protection, commonality and difference, South East Europe.

17 Taguchi Robust Design for Optimal Setting of Process Wastes Parameters in an Automotive Parts Manufacturing Company

Authors: Charles Chikwendu Okpala, Christopher Chukwutoo Ihueze

Abstract:

As a technique that reduces variation in a product by lessening the sensitivity of the design to sources of variation, rather than by controlling those sources, Taguchi Robust Design entails designing ideal goods by developing a product that has minimal variance in its characteristics and also meets the exact desired performance. This paper examined the concept of the manufacturing approach and its application to a brake pad product of an automotive parts manufacturing company. Although the firm claimed that defects, excess inventory, and over-production were the only wastes that grossly affect its productivity and profitability, a careful study and analysis of its manufacturing processes with the application of the Single Minute Exchange of Dies (SMED) tool showed that the waste of waiting is a fourth waste that bedevils the firm. The selection of the Taguchi L9 orthogonal array, which is based on the four parameters and the three levels of variation for each parameter, revealed that, with a range of 2.17, waiting is the major waste that the company must reduce in order to remain viable. Also, to enhance the company's throughput and profitability, the wastes of over-production, excess inventory, and defects, with ranges of 2.01, 1.46, and 0.82, ranking second, third, and fourth respectively, must also be reduced to the barest minimum. After proposing -33.84 as the highest optimum signal-to-noise ratio to be maintained for the waste of waiting, the paper advocated the adoption of all the tools and techniques of the Lean Production System (LPS) and Continuous Improvement (CI), and concluded by recommending SMED in order to drastically reduce set-up time, which leads to unnecessary waiting.
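
The two quantities quoted above can be made concrete with a short worked sketch: the smaller-the-better signal-to-noise ratio and the per-level range used to rank factors. The response values below are made up; only the formulas follow standard Taguchi practice.

import math

def sn_smaller_the_better(responses):
    """SN = -10 log10(mean of y^2); larger (less negative) is better."""
    return -10.0 * math.log10(sum(y * y for y in responses) / len(responses))

# Hypothetical waiting-time responses (minutes) for three L9 runs at each level.
levels = {"level 1": [42.0, 55.0, 48.0],
          "level 2": [60.0, 47.0, 52.0],
          "level 3": [39.0, 44.0, 50.0]}

sn_by_level = {lvl: sn_smaller_the_better(r) for lvl, r in levels.items()}
for lvl, sn in sn_by_level.items():
    print(f"{lvl}: S/N = {sn:.2f} dB")

# A factor's "range" is the spread of mean S/N across its levels; a larger
# range means the factor (here, a waste category) matters more.
rng = max(sn_by_level.values()) - min(sn_by_level.values())
print(f"range = {rng:.2f}")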

Keywords: Taguchi Robust Design, signal to noise ratio, Single Minute Exchange of Dies, lean production system, waste.

16 Synthesis of PVA/γ-Fe2O3 Used in Cancer Treatment by Hyperthermia

Authors: Sajjad Seifi Mofarah, S. K. Sadrnezhaad, Shokooh Moghadam, Javad Tavakoli

Abstract:

In recent years, a new method of combination treatment for cancer has been developed and studied that has led to significant advancements in the field of cancer therapy. Hyperthermia is a traditional therapy in which a medically approved level of heat, created with the help of an alternating (AC) magnetic field, results in the destruction of cancer cells. This paper gives details regarding the production of the spherical nanocomposite PVA/γ-Fe2O3 to be used for medical purposes such as tumor treatment by hyperthermia. To reach a suitable and evenly distributed temperature, the nanocomposite with core-shell morphology and spherical form, within a 100 to 200 nm size range, was created using phase separation emulsion, in which the magnetic γ-Fe2O3 nanoparticles, with an average particle size of 20 nm and different percentages of 0.2, 0.4, 0.5 and 0.6, were covered by polyvinyl alcohol. The main concern in hyperthermia and heat treatment is achieving a desirable specific absorption rate (SAR), and one of the most critical factors in SAR is particle size. In this project, every attempt has been made to reach the minimal size and consequently the maximum SAR. The morphological analysis of the spherical structure of the nanocomposite PVA/γ-Fe2O3 was carried out by SEM, and the study of the chemical bonds created was made possible by FTIR analysis. To investigate the particle size distribution of the magnetic nanocomposite, a DLS experiment was conducted. Moreover, to determine the magnetic behavior of the γ-Fe2O3 particles and the nanocomposite PVA/γ-Fe2O3 at different concentrations, a VSM test was conducted. To sum up, creating magnetic nanocomposites with a spherical morphology for drug loading opens doors to new approaches in developing nanocomposites that provide efficient heat and a controlled release of drug simultaneously inside the magnetic field, characteristics that could significantly improve the recovery process in patients.
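
As a brief aside on the SAR mentioned above, it is commonly estimated from a calorimetric heating run as the specific heat times the initial temperature slope, divided by the magnetic mass fraction; the numbers below are placeholders, not measurements from this work.

def sar(c_p, dT_dt, magnetic_mass_fraction):
    """SAR in W/g of magnetic material: c_p (J/g K) * initial slope (K/s) / fraction."""
    return c_p * dT_dt / magnetic_mass_fraction

c_p_suspension = 4.18          # J/(g K), roughly water-like carrier (assumed)
initial_slope = 0.02           # K/s, taken from the first seconds of the heating curve
fe2o3_fraction = 0.005         # 5 mg of gamma-Fe2O3 per gram of suspension (assumed)

print(f"SAR = {sar(c_p_suspension, initial_slope, fe2o3_fraction):.1f} W/g")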

Keywords: Nanocomposite, hyperthermia, cancer therapy, drug release.

15 Cluster Based Energy Efficient and Fault Tolerant n-Coverage in Wireless Sensor Network

Authors: D. Satish Kumar, N. Nagarajan

Abstract:

Coverage conservation and extending the network lifetime are the primary issues in wireless sensor networks. Due to the large variety of applications, coverage is subject to a wide range of interpretations. Some applications necessitate that each point in the area is observed by only one sensor, while other applications may require that each point is covered by at least n sensors (n>1) to achieve fault tolerance. Sensor scheduling activities in existing Transparent and non-Transparent relay mode (T-NT) Mobile Multi-Hop relay networks fail to guarantee area coverage with minimal energy consumption and fault tolerance. To overcome these issues, a Cluster based Energy Competent n-coverage scheme (CEC n-coverage scheme) is proposed to ensure the full coverage of a monitored area while saving energy. The CEC n-coverage scheme uses a novel sensor scheduling scheme based on the n-density and the remaining energy of each sensor to determine the state of all the deployed sensors, either active or sleep, as well as the state durations. Hence, it is attractive to trigger a minimum number of sensors that are able to ensure the coverage area and to turn off redundant sensors to save energy and therefore extend the network lifetime. In addition, a minimum number of active sensors is decided based on the degree of coverage required and its level. A variety of numerical parameters are computed using the ns2 simulator for the existing (T-NT) Mobile Multi-Hop relay networks and the CEC n-coverage scheme. Simulation results showed that the CEC n-coverage scheme in a wireless sensor network provides better performance in terms of energy efficiency, a 6.61% reduction in fault-tolerance time in seconds, and the percentage of active sensors needed to guarantee the area coverage, compared to the existing algorithm.
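
An illustrative sketch of the scheduling idea (not the paper's exact protocol) is given below: a sensor may sleep only if every point it covers is already watched by at least n other active sensors, and low-energy sensors are considered for sleep first. The topology, energies and n value are assumed.

def schedule(sensors, coverage, n):
    """sensors maps id -> remaining energy; coverage[s] is the set of points s observes."""
    active = set(sensors)
    for s in sorted(sensors, key=lambda s: sensors[s]):      # ascending energy
        still_ok = all(
            sum(1 for o in active if o != s and p in coverage[o]) >= n
            for p in coverage[s]
        )
        if still_ok:
            active.discard(s)                                 # redundant: put to sleep
    return active

sensors = {"s1": 0.9, "s2": 0.2, "s3": 0.7, "s4": 0.5}        # remaining energy
coverage = {"s1": {"p1", "p2"}, "s2": {"p1"}, "s3": {"p1", "p2"}, "s4": {"p2"}}
print(schedule(sensors, coverage, n=1))                       # -> {'s1'}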

Keywords: Wireless Sensor network, Mobile Multi-Hop relay networks, n-coverage, Cluster based Energy Competent, Transparent and non- Transparent relay modes, Fault Tolerant, sensor scheduling.

14 The Planning and Development of Green Public Places in Urban South Africa: A Child-Friendly Approach

Authors: E. J. Cilliers, Z. Goosen

Abstract:

The impact that urban green spaces have on sustainability and quality of life is phenomenal. This is also true for the local South African environment. However, in reality, green spaces in urban environments are decreasing due to growing populations, increasing urbanization and development pressure. This further impacts the provision of child-friendly spaces, a concept that is already limited in the local context. Child-friendly spaces are described as environments to which people (children) feel intimately connected, influencing the physical, social, emotional, and ecological health of individuals and communities. The benefits of providing such spaces for the youth are well documented in the literature. This research therefore aimed to investigate the concept of child-friendly spaces and its applicability to the South African planning context, in order to guide the planning of such spaces for future communities and use. Child-friendly spaces in the urban environment of the city of Durban were used as the local case study, along with two international case studies, namely the Mullerpier public playground in Rotterdam, the Netherlands, and Kadidjiny Park in Melville, Australia. The aim was to determine how these spaces were planned and developed and to identify tools that were used to accomplish the goal of providing successful child-friendly green spaces within urban areas. The need for and significance of planning for such spaces were portrayed within the international case studies. It is confirmed that minimal provision is made for green space planning within the South African context when reflecting on the international examples. As a result, international examples and disciplines of providing child-friendly green spaces should direct planning guidelines within the local context. The research concluded that child-friendly green spaces have a positive impact on the urban environment and assist in a child's development and interaction with the natural environment. Regrettably, the planning of these child-friendly spaces is not given priority within current spatial plans, despite their proven benefits.

Keywords: Built environment, child-friendly spaces, green spaces, public places, urban area.

13 GridNtru: High Performance PKCS

Authors: Narasimham Challa, Jayaram Pradhan

Abstract:

Cryptographic algorithms play a crucial role in the information society by providing protection from unauthorized access to sensitive data. It is clear that information technology will become increasingly pervasive; hence we can expect the emergence of ubiquitous or pervasive computing and ambient intelligence. These new environments and applications will present new security challenges, and there is no doubt that cryptographic algorithms and protocols will form a part of the solution. The efficiency of a public key cryptosystem is mainly measured in computational overheads, key size and bandwidth. In particular, the RSA algorithm is used in many applications for providing security. Although the security of RSA is beyond doubt, the evolution in computing power has caused a growth in the necessary key length. The fact that most chips on smart cards cannot process keys exceeding 1024 bits shows that there is a need for an alternative. NTRU is such an alternative: it is a collection of mathematical algorithms based on manipulating lists of very small integers and polynomials. This allows NTRU to achieve high speeds with the use of minimal computing power. NTRU (Nth degree Truncated Polynomial Ring Unit) is the first secure public key cryptosystem not based on factorization or the discrete logarithm problem. This means that, given sufficient computational resources and time, an adversary should not be able to break the key. Multi-party communication and the requirement of optimal resource utilization have created a present-day demand for applications that need security enforcement techniques and can be enhanced with high-end computing. This has prompted us to develop high-performance NTRU schemes using approaches such as the use of high-end computing hardware. Peer-to-peer (P2P) or enterprise grids are proven approaches for developing high-end computing systems. By utilizing them, one can improve the performance of NTRU through parallel execution. In this paper we propose and develop an application for NTRU using the enterprise grid middleware called Alchemi. An analysis and comparison of its performance for various text files is presented.
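
The core NTRU operation alluded to above (manipulating lists of small integers and polynomials) is convolution multiplication in the ring Z[x]/(x^N - 1) with coefficients reduced modulo q, sketched below with toy parameters rather than a secure parameter set. Because each coefficient of the result can be computed independently, this is the kind of workload that parallel grid execution can split across nodes.

def convolve(a, b, N, q):
    """Cyclic (star) multiplication of two degree-<N polynomials modulo q."""
    c = [0] * N
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] = (c[(i + j) % N] + a[i] * b[j]) % q
    return c

N, q = 7, 32
f = [1, 0, -1, 1, 0, 0, 1]      # small-coefficient polynomial (toy values)
g = [0, 1, 1, 0, -1, 0, 1]
print(convolve(f, g, N, q))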

Keywords: Alchemi, GridNtru, Ntru, PKCS.

12 Bee Parameter Determination via Weighted Centroid Modified Simplex and Constrained Response Surface Optimisation Methods

Authors: P. Luangpaiboon

Abstract:

Various intelligences and inspirations have been adopted into the iterative searching processes called meta-heuristics. They intelligently perform exploration and exploitation in the solution domain space, aiming to efficiently seek near-optimal solutions. In this work, the bee algorithm, inspired by the natural foraging behaviour of honey bees, was adapted to find near-optimal solutions for a transportation management problem, dynamic multi-zone dispatching. This problem allows for uncertain and changing customers' demand. In striving to remain competitive, the transportation system should therefore be flexible in order to cope with changes in customers' demand in terms of inbound and outbound goods and technological innovations. To maintain a higher service level with lower cost via the minimal imbalance scenario, the rearrangement penalty of the area in each zone, including time periods, is also included. However, the performance of the algorithm depends on appropriate parameter settings, which need to be determined and analysed before its implementation. BEE parameters are determined through the linear constrained response surface optimisation method (LCRSOM) and the weighted centroid modified simplex method (WCMSM). Experimental results were analysed in terms of the best solutions found so far, and the mean and standard deviation of the imbalance values, including the convergence of the solutions obtained. It was found that the results obtained from the LCRSOM were better than those using the WCMSM. However, the average execution time of an experimental run using the LCRSOM was longer than that using the WCMSM. Finally, a recommendation of proper level settings of BEE parameters for some selected problem sizes is given as a guideline for future applications.
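
A bare-bones Bees Algorithm skeleton is sketched below, only to make the tuned parameters (scout bees, selected and elite sites, neighbourhood size, recruited bees) concrete; the objective is a stand-in, and the dispatching objective and parameter values of the paper are not reproduced.

import random

def bees_algorithm(objective, dim, bounds, n_scouts=20, n_elite=2, n_best=5,
                   bees_elite=10, bees_best=5, patch=0.1, iters=100):
    def rand_point():
        return [random.uniform(*bounds) for _ in range(dim)]

    sites = sorted((rand_point() for _ in range(n_scouts)), key=objective)
    for _ in range(iters):
        new_sites = []
        for rank, site in enumerate(sites[:n_best]):
            recruits = bees_elite if rank < n_elite else bees_best
            # Local (neighbourhood) search around the selected site.
            local = [[x + random.uniform(-patch, patch) for x in site]
                     for _ in range(recruits)]
            new_sites.append(min(local + [site], key=objective))
        # Remaining bees scout randomly to keep exploring the space.
        new_sites += [rand_point() for _ in range(n_scouts - n_best)]
        sites = sorted(new_sites, key=objective)
    return sites[0]

best = bees_algorithm(lambda x: sum(v * v for v in x), dim=3, bounds=(-5.0, 5.0))
print(best)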

Keywords: Meta-heuristic, Bee Algorithm, Dynamic Multi-Zone Dispatching, Linear Constrained Response Surface Optimisation Method, Weighted Centroid Modified Simplex Method

11 Clinical Comparative Study Comparing Efficacy of Intrathecal Fentanyl and Magnesium as an Adjuvant to Hyperbaric Bupivacaine in Mild Pre-Eclamptic Patients Undergoing Caesarean Section

Authors: Sanchita B. Sarma, M. P. Nath

Abstract:

Adequate analgesia following caesarean section decreases morbidity, hastens ambulation, improves patient outcomes and facilitates care of the newborn. Intrathecal magnesium, an NMDA antagonist, has been shown to prolong analgesia without significant side effects in healthy parturients. The aim of this study was to evaluate the onset and duration of sensory and motor block, the hemodynamic effects, postoperative analgesia, and adverse effects of magnesium or fentanyl given intrathecally with hyperbaric 0.5% bupivacaine in patients with mild preeclampsia undergoing caesarean section. Sixty women with mild preeclampsia undergoing elective caesarean section were included in a prospective, double-blind, controlled trial. Patients were randomly assigned to receive spinal anesthesia with 2 mL 0.5% hyperbaric bupivacaine with either 12.5 μg fentanyl (group F) or 0.1 ml of 50% magnesium sulphate (50 mg) (group M) with 0.15 ml preservative-free distilled water. Onset, duration and recovery of sensory and motor block, time to maximum sensory block, duration of spinal anaesthesia and postoperative analgesic requirements were studied. Statistical comparison was carried out using the Chi-square or Fisher's exact tests and the independent Student's t-test where appropriate. The onset of both sensory and motor block was slower in the magnesium group. The durations of spinal anaesthesia (246 vs. 284) and motor block (186.3 vs. 210) were significantly longer in the magnesium group. The total analgesic top-up requirement was lower in group M. Hemodynamic parameters were similar in both groups. Intrathecal magnesium caused minimal side effects. Since fentanyl and other opioid congeners are not easily available throughout the country, magnesium, with its easy availability and milder side-effect profile, can be a cost-effective alternative to fentanyl when given intrathecally along with bupivacaine in caesarean section for patients with pregnancy-induced hypertension (PIH).

Keywords: Analgesia, magnesium, preeclampsia, spinal anaesthesia.

10 A Multi-Level WEB Based Parallel Processing System A Hierarchical Volunteer Computing Approach

Authors: Abdelrahman Ahmed Mohamed Osman

Abstract:

Over the past few years, a number of efforts have been exerted to build parallel processing systems that utilize the idle power of LANs and PCs available in many homes and corporations. The main advantage of these approaches is that they provide cheap parallel processing environments for those who cannot afford the expenses of supercomputers and parallel processing hardware. However, most of the solutions provided are not very flexible in the use of available resources and are very difficult to install and set up. In this paper, a multi-level web-based parallel processing system (MWPS) is designed (appendix). MWPS is based on the idea of volunteer computing; it is very flexible, easy to set up and easy to use. MWPS allows three types of subscribers: simple volunteers (single computers), super volunteers (full networks) and end users. All of these entities are coordinated transparently through a secure web site. Volunteer nodes provide the processing power needed by the system's end users. There is no limit on the number of volunteer nodes, and accordingly the system can grow indefinitely. Both volunteers and system users must register and subscribe. Once they subscribe, each entity is provided with the appropriate MWPS components. These components are very easy to install. Super volunteer nodes are provided with special components that make it possible to delegate some of the load to their inner nodes. These inner nodes may also delegate some of the load to other lower-level inner nodes, and so on. It is the responsibility of the parent super nodes to coordinate the delegation process and deliver the results back to the user. MWPS uses a simple behavior-based scheduler that takes into consideration the current load and previous behavior of processing nodes. Nodes that fulfill their contracts within the expected time get a high degree of trust. Nodes that fail to satisfy their contracts get a lower degree of trust. MWPS is based on the .NET framework and provides the minimal level of security expected in distributed processing environments. Users and processing nodes are fully authenticated. Communications and messages between nodes are very secure. The system has been implemented using C#. MWPS may be used by any group of people or companies to establish a parallel processing or grid environment.
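
A toy version of the behaviour-based scheduling rule described above is sketched below: trust rises when a node delivers within its contract time and falls when it does not, and work goes to the most trusted, least loaded node. The trust steps, node names and tie-breaking rule are illustrative assumptions, and the sketch is in Python rather than the system's C#.

class Node:
    def __init__(self, name):
        self.name, self.trust, self.load = name, 0.5, 0

def update_trust(node, delivered_on_time):
    step = 0.1 if delivered_on_time else -0.2
    node.trust = min(1.0, max(0.0, node.trust + step))

def pick_node(nodes):
    """Prefer high trust, break ties with the lighter current load."""
    return max(nodes, key=lambda n: (n.trust, -n.load))

nodes = [Node("volunteer1"), Node("volunteer2"), Node("super_net_A")]
update_trust(nodes[0], True)      # fulfilled its last contract
update_trust(nodes[1], False)     # missed its deadline
chosen = pick_node(nodes)
chosen.load += 1
print(chosen.name, chosen.trust)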

Keywords: Volunteer computing, Parallel Processing, XML Web Services, .NET Remoting, Tuplespace.

9 Monetary Evaluation of Dispatching Decisions in Consideration of Mode Choice Models

Authors: Marcel Schneider, Nils Nießen

Abstract:

Microscopic simulation toolkits allow for consideration of the two processes of railway operations and the preceding timetable production. Block occupation conflicts on both process levels are often solved by using defined train priorities. These conflict resolutions (dispatching decisions) generate reactionary delays for the involved trains. The sum of reactionary delays is commonly used to evaluate the quality of railway operations, which describes the timetable robustness. It is either compared to an acceptable train performance, or the delays are appraised economically by linear monetary functions. It is impossible to adequately evaluate dispatching decisions without a well-founded objective function. This paper presents a new approach for the evaluation of dispatching decisions. The approach uses mode choice models and considers the behaviour of the end-customers. These models evaluate the reactionary delays in more detail and consider other competing modes of transport. The new approach pursues the coupling of a microscopic model of railway operations with a macroscopic mode choice model. At first, it will be implemented for the railway operations process, but it can also be used for timetable production. The evaluation considers the possibility for the customer to change to other transport modes. The new approach starts by looking at rail and road, but it can also be extended to air travel. The result of mode choice models is the modal split. The reactions of the end-customers have an impact on the revenue of the train operating companies. Different purposes of travel have different payment reserves and tolerances towards late running. Aside from changes to revenues, longer journey times can also generate additional costs. The costs are either time- or track-specific and arise from required changes to rolling stock or train crew cycles. Only the variable values are summarised in the contribution margin, which is the base for the monetary evaluation of delays. The contribution margin is calculated for different possible solutions to the same conflict. The conflict resolution is optimised until the monetary loss becomes minimal. The iterative process therefore determines an optimum conflict resolution by monitoring the change in the contribution margin. Furthermore, a monetary value of each dispatching decision can also be derived.
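
A hedged sketch of the coupling described above is given below: each candidate conflict resolution produces reactionary delays, a binomial logit mode-choice model turns the delays into a rail/road split, and the resolution with the largest remaining contribution margin is kept. All utilities, revenues and costs are invented for illustration.

import math

def rail_share(delay_min, beta_time=-0.05, asc_rail=0.8):
    """Binomial logit: probability that a customer still chooses rail."""
    v_rail = asc_rail + beta_time * delay_min
    v_road = 0.0
    return math.exp(v_rail) / (math.exp(v_rail) + math.exp(v_road))

def contribution_margin(delays, base_revenue=10000.0, cost_per_min=15.0):
    """Revenue kept after mode shift minus delay-dependent operating costs."""
    share = rail_share(sum(delays) / max(len(delays), 1))
    return base_revenue * share - cost_per_min * sum(delays)

# Reactionary delays (minutes) of the involved trains for each candidate
# dispatching decision of the same block-occupation conflict.
candidates = {"priority_to_IC": [2, 14], "priority_to_freight": [9, 3]}
best = max(candidates, key=lambda k: contribution_margin(candidates[k]))
print(best, contribution_margin(candidates[best]))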

Keywords: Choice of mode, monetary evaluation, railway operations, reactionary delays.

8 Comparative Analysis of Chemical Composition and Biological Activities of Ajuga genevensis L. in in vitro Culture and Intact Plants

Authors: Naira Sahakyan, Margarit Petrosyan, Armen Trchounian

Abstract:

One of the tasks in contemporary biotechnology, pharmacology and other fields of human activity is to obtain biologically active substances from plants. These substances are essential in the treatment of many diseases due to their high therapeutic value without visible side effects. However, the possibility of obtaining such metabolites is sometimes limited due to the decline of wild-growing plants. That is why plant cell cultures are of great interest as alternative sources of biologically active substances. Besides, during monitored cultivation, it is possible to obtain substances that are not synthesized by plants in nature. An isolated culture of Ajuga genevensis with high growth activity and regeneration ability was obtained using MS nutrient medium. The agar-diffusion method showed that aqueous extracts of the callus culture revealed high antimicrobial activity towards various gram-positive (Bacillus subtilis A1WT; B. mesentericus WDCM 1873; Staphylococcus aureus WDCM 5233; Staph. citreus WT) and gram-negative (Escherichia coli WKPM M-17; Salmonella typhimurium TA 100) microorganisms. The broth dilution method revealed that the minimal and half-maximal inhibitory concentration values against E. coli corresponded to extract concentrations of 70 μg/mL and 140 μg/mL, respectively. According to the photochemiluminescent analysis, callus tissue extracts of leaf and root origin showed higher antioxidant activity than the same quantity of A. genevensis intact plant extract. A. genevensis intact plant and callus culture extracts showed no cytotoxic effect on the K-562 suspension cell line of human chronic myeloid leukemia. The GC-MS analysis showed deep differences between the qualitative and quantitative composition of the callus culture and intact plant extracts. Hexacosane (11.17%), n-hexadecanoic acid (9.33%), and 2-methoxy-4-vinylphenol (4.28%) were the main components of the intact plant extracts. 10-Methylnonadecane (57.0%), methoxyacetic acid 2-tetradecyl ester (17.75%) and 1-bromopentadecane (14.55%) were the main components of the A. genevensis callus culture extracts. The obtained data indicate that callus culture of A. genevensis can be used as an alternative source of biologically active substances.
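
As a small aside on how a half-maximal value can be read off a broth-dilution series, the sketch below interpolates the concentration giving 50% growth inhibition; the inhibition percentages are hypothetical and not taken from this study.

def ic50(concentrations, inhibition):
    """Interpolate the concentration giving 50% inhibition (lists sorted ascending)."""
    for (c0, i0), (c1, i1) in zip(zip(concentrations, inhibition),
                                  zip(concentrations[1:], inhibition[1:])):
        if i0 < 50.0 <= i1:
            return c0 + (50.0 - i0) * (c1 - c0) / (i1 - i0)
    return None

dilution_series = [17.5, 35.0, 70.0, 140.0, 280.0]   # micrograms/mL
growth_inhibition = [4.0, 12.0, 30.0, 52.0, 90.0]    # percent, hypothetical
print(f"IC50 ~ {ic50(dilution_series, growth_inhibition):.0f} ug/mL")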

Keywords: Ajuga genevensis, antibacterial activity, antioxidant activity, callus cultures.

7 Depth-Averaged Modelling of Erosion and Sediment Transport in Free-Surface Flows

Authors: Thomas Rowan, Mohammed Seaid

Abstract:

A fast finite volume solver for multi-layered shallow water flows with mass exchange and an erodible bed is developed. This enables the user to solve a number of complex sediment-based problems including (but not limited to) dam-break over an erodible bed, recirculation currents and bed evolution, as well as levee and dyke failure. This research develops methodologies crucial to the understanding of multi-sediment fluvial mechanics and waterway design. In this model, mass exchange between the layers is allowed and, in contrast to previous models, sediment and fluid are able to transfer between layers. In the current study we use a two-step finite volume method to avoid the solution of the Riemann problem. Entrainment and deposition rates are calculated for the first time in a model of this nature. In the first step, the governing equations are rewritten in a non-conservative form and the intermediate solutions are calculated using the method of characteristics. In the second stage, the numerical fluxes are reconstructed in conservative form and are used to calculate a solution that satisfies the conservation property. This method is found to be considerably faster than other comparable finite volume methods, and it also exhibits good shock capturing. For most entrainment and deposition equations, a bed-level concentration factor is used. This leads to inaccuracies in both the near-bed concentration and the total scour. To account for diffusion, as no vertical velocities are calculated, a capacity-limited diffusion coefficient is used. The additional advantage of this multilayer approach is that there is a variation (compared with single-layer models) in the bottom-layer fluid velocity: this dramatically reduces erosion, which is often overestimated in simulations of this nature using single-layer flows. The model is used to simulate a standard dam break. In the dam-break simulation, as expected, the number of fluid layers utilised creates variation in the resultant bed profile, with more layers giving a larger deviation in fluid velocity. These results showed a marked variation in erosion profiles from standard models. Overall, the model provides new insight into the problems presented, at minimal computational cost.
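
A minimal sketch of the bed-evolution bookkeeping referred to above is given below: the bed level changes with the local imbalance between deposition and entrainment, scaled by the bed porosity (an Exner-type balance). The rates, porosity and cell values are placeholders, and the hydrodynamic layers of the actual solver are not represented.

def update_bed(bed, deposition, entrainment, porosity, dt):
    """Explicit per-cell update: dz_b/dt = (D - E) / (1 - p)."""
    return [z + dt * (D - E) / (1.0 - porosity)
            for z, D, E in zip(bed, deposition, entrainment)]

bed = [0.00, 0.00, 0.00, 0.00]            # m, initial flat bed
deposition = [0.0, 1e-4, 5e-5, 0.0]       # m/s, from the sediment concentration
entrainment = [2e-4, 5e-5, 1e-4, 0.0]     # m/s, from the bed shear stress
print(update_bed(bed, deposition, entrainment, porosity=0.4, dt=1.0))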

Keywords: Erosion, finite volume method, sediment transport, shallow water equations.

6 Comparison of Developed Statokinesigram and Marker Data Signals by Model Approach

Authors: Boris Barbolyas, Kristina Buckova, Tomas Volensky, Cyril Belavy, Ladislav Dedik

Abstract:

Background: Human balance control is often studied on the basis of the statokinesigram. In this study, the analysis of human postural reactions combines the stabilometry output signal with the processing, analysis and interpretation of retroreflective marker data; the study also presents another original application of the Method of Developed Statokinesigram Trajectory (MDST). Methods: The participants maintained quiet bipedal standing on a stabilometry platform for 10 s. Subsequently, bilateral vibration stimuli were applied to the Achilles tendons for a 20 s interval, causing the human postural system to settle into a new pseudo-steady state. The vibration frequencies were 20, 60 and 80 Hz. The participants' body segments - head, shoulders, hips, knees, ankles and little fingers - were marked with 12 retroreflective markers, whose positions were captured by a six-camera BTS SMART DX system. Each postural reaction was recorded for 60 s at a sampling frequency of 100 Hz. The measured data were processed with the Method of Developed Statokinesigram Trajectory. Regression analysis of the developed statokinesigram trajectory (DST) data and the retroreflective marker developed trajectory (DMT) data was used to determine which marker trajectories correlate most strongly with the stabilometry platform output signals, and scaling coefficients (λ) between DST and DMT were evaluated by linear regression analysis. Results: Scaling coefficients were identified for the marker trajectories of all body segments. Head marker trajectories reached the maximal value and ankle marker trajectories the minimal value of the scaling coefficient. Hip, knee and ankle markers were approximately symmetrical in terms of the scaling coefficient, whereas notable asymmetries were detected in the head and shoulder marker trajectories. The model of postural system behaviour was identified by MDST. Conclusion: The value of the scaling factor identifies which body segment is predisposed to postural instability. Hypothetically, if the statokinesigram represents the overall response of the human postural system to vibration stimuli, then the marker data represent the particular postural responses, and it can be assumed that the cumulative sum of the particular marker postural responses equals the statokinesigram.
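
As an illustration of the regression step described above, the sketch below estimates a scaling coefficient λ between a developed statokinesigram trajectory (DST) and a marker developed trajectory (DMT) by least squares through the origin. The synthetic head and ankle signals are hypothetical stand-ins for the real 100 Hz recordings; this is not the authors' MDST implementation.

```python
import numpy as np

# Minimal sketch (illustrative, not the authors' MDST implementation): estimating a
# scaling coefficient lambda between a developed statokinesigram trajectory (DST)
# and a marker developed trajectory (DMT) by linear regression through the origin.

def scaling_coefficient(dst, dmt):
    """Least-squares lambda such that dmt ~= lambda * dst."""
    dst, dmt = np.asarray(dst, float), np.asarray(dmt, float)
    return float(np.dot(dst, dmt) / np.dot(dst, dst))

# Hypothetical 60 s records sampled at 100 Hz (6000 samples each)
t = np.linspace(0.0, 60.0, 6000)
dst = np.cumsum(np.abs(np.random.randn(t.size))) * 1e-3          # developed CoP path length
dmt_head = 1.8 * dst + 0.02 * np.random.randn(t.size)            # head marker: large excursions
dmt_ankle = 0.3 * dst + 0.02 * np.random.randn(t.size)           # ankle marker: small excursions

print(scaling_coefficient(dst, dmt_head))    # close to 1.8
print(scaling_coefficient(dst, dmt_ankle))   # close to 0.3
```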

Keywords: Center of pressure (CoP), method of developed statokinesigram trajectory (MDST), model of postural system behavior, retroreflective marker data.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 709
5 Power and Delay Optimized Graph Representation for Combinational Logic Circuits

Authors: Padmanabhan Balasubramanian, Karthik Anantha

Abstract:

Structural representation and technology mapping of a Boolean function is an important problem in the design of non-regenerative digital logic circuits (also called combinational logic circuits), and library-aware function manipulation offers a solution to it. Compact multi-level representations of binary networks based on simple circuit structures, such as AND-Inverter Graphs (AIG) [1] [5], NAND Graphs, OR-Inverter Graphs (OIG), AND-OR Graphs (AOG), AND-OR-Inverter Graphs (AOIG), AND-XOR-Inverter Graphs and Reduced Boolean Circuits [8], exist in the literature. In this work, we discuss a novel and efficient graph realization for combinational logic circuits, the NAND-NOR-Inverter Graph (NNIG), which is composed of only two-input NAND (NAND2), two-input NOR (NOR2) and inverter (INV) cells. The networks are constructed on the basis of irredundant disjunctive and conjunctive normal forms, after factoring, comprising terms with minimum support. Constructing an NNIG for a non-regenerative function in normal form is straightforward, whereas for the complementary phase it is developed by considering a virtual instance of the function. The choice of the best NNIG for a given function is based on the literal count, cell count and DAG node count of the implementation at the technology-independent stage; in case of a tie, the final decision is made after extracting the physical design parameters. We have considered the AIG representation for the reduced disjunctive normal form and the best of OIG/AOG/AOIG for the minimized conjunctive normal forms; this is necessitated by the nature of certain functions, such as Achilles-heel functions. NNIGs are found to exhibit a 3.97% lower node count compared with AIGs and OIG/AOG/AOIGs, and to consume 23.74% and 10.79% fewer library cells than AIGs and OIG/AOG/AOIGs, respectively, for the samples considered. We compare the power efficiency and delay improvement achieved by optimal NNIGs over minimal AIGs and OIG/AOG/AOIGs for various case studies. In comparison with functionally equivalent, irredundant and compact AIGs, NNIGs report mean savings in power and delay of 43.71% and 25.85%, respectively, after technology mapping with a 0.35 micron TSMC CMOS process. Compared with OIG/AOG/AOIGs, NNIGs demonstrate average savings in power and delay of 47.51% and 24.83%. With respect to the device count needed for implementation in static CMOS logic style, NNIGs use 37.85% and 33.95% fewer transistors than their AIG and OIG/AOG/AOIG counterparts, respectively.
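
To make the cell-level structure of an NNIG concrete, the sketch below builds a small DAG from NAND2, NOR2 and INV nodes for an example function and counts its cells. The node representation and the example function F = (a AND b) OR c are illustrative assumptions only; the construction and optimization flow described in the abstract is not reproduced here.

```python
from dataclasses import dataclass
from typing import Tuple

# Minimal sketch of a DAG built from two-input NAND (NAND2), two-input NOR (NOR2)
# and inverter (INV) cells. This is an illustration, not the authors' NNIG algorithm.

@dataclass(frozen=True)
class Node:
    kind: str                      # 'IN', 'NAND2', 'NOR2' or 'INV'
    fanin: Tuple["Node", ...] = ()
    name: str = ""

def evaluate(node, assignment):
    """Evaluate the graph for a dict of primary-input values."""
    if node.kind == "IN":
        return assignment[node.name]
    vals = [evaluate(f, assignment) for f in node.fanin]
    if node.kind == "NAND2":
        return not (vals[0] and vals[1])
    if node.kind == "NOR2":
        return not (vals[0] or vals[1])
    if node.kind == "INV":
        return not vals[0]
    raise ValueError(node.kind)

def cell_count(node, seen=None):
    """Count NAND2/NOR2/INV cells in the DAG (shared nodes counted once)."""
    seen = set() if seen is None else seen
    if id(node) in seen or node.kind == "IN":
        return 0
    seen.add(id(node))
    return 1 + sum(cell_count(f, seen) for f in node.fanin)

# Example: F = (a AND b) OR c realized as INV(NOR2(INV(NAND2(a, b)), c))
a, b, c = (Node("IN", name=n) for n in "abc")
F = Node("INV", (Node("NOR2", (Node("INV", (Node("NAND2", (a, b)),)), c)),))

print(cell_count(F))                                        # 4 cells
print(evaluate(F, {"a": True, "b": True, "c": False}))      # True
print(evaluate(F, {"a": False, "b": True, "c": False}))     # False
```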

Keywords: AND-Inverter Graph, OR-Inverter Graph, Directed Acyclic Graph, Low power design, Delay optimization.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2010
4 Crash and Injury Characteristics of Riders in Motorcycle-Passenger Vehicle Crashes

Authors: Z. A. Ahmad Noor Syukri, A. J. Nawal Aswan, S. V. Wong

Abstract:

The motorcycle has become one of the most common types of vehicle on the road, particularly in the Asian region, including Malaysia, owing to its convenient size and affordable price. This study focuses only on crashes between motorcycles and passenger cars, comprising 43 real-world crashes obtained through an in-depth crash investigation process from June 2016 to July 2017. The study collected and analyzed vehicle and site parameters obtained during the crash investigations, together with injury information acquired from the treating hospital. The investigation team, consisting of two personnel, was stationed at the Emergency Department of the treatment facility and was dispatched to the crash scene on notification of a relevant crash. The injury information retrieved was coded by severity using the Abbreviated Injury Scale (AIS) and classified into body regions. The data revealed that weekend crashes were significantly higher during the night-time period, whereas on weekdays crash occurrence peaked during the morning commuting hours. Bad weather conditions had a minimal effect on the occurrence of motorcycle-passenger vehicle crashes, and nearly 90% involved motorcycles with single riders. Riders up to 25 years old were heavily involved in crashes with passenger vehicles (60%), followed by the 26-55 year age group with 35%, and male riders were dominant in every age segment. The majority of the crashes involved side impacts, followed by rear impacts, and cars outnumbered all other passenger vehicle types in crash involvement with motorcycles. The investigation data also revealed that the passenger vehicle was the most frequent at-fault counterpart (62%) in crashes with motorcycles, and most crashes involved situations in which both vehicles were travelling in the same direction and one of them was in a turning maneuver. More than 80% of the involved motorcycle riders were assigned a yellow severity level during triage. The study also found that nearly 30% of the riders sustained injuries to the lower extremities, while MAIS level 3 injuries were recorded for all body regions except the thorax. The results showed that crashes in which the motorcycle was at fault were more likely to occur at night and in rainy conditions; such crashes were also more likely to involve passenger vehicle types other than cars and to result in a higher ISS (>6) for the involved rider. To reduce motorcycle fatalities, the characteristics involved must first be understood, and attention may be focused on crashes involving passenger vehicles as the most dominant crash partner on Malaysian roads.
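
Since rider severity is summarized with AIS-based scores and an ISS threshold of 6, the sketch below shows the standard ISS computation (sum of the squares of the three highest region-wise AIS scores, with ISS fixed at 75 when any AIS equals 6). The rider record is hypothetical, and the function illustrates only the scoring convention, not the study's coding procedure.

```python
# Minimal sketch (illustrative only): computing an Injury Severity Score (ISS)
# from Abbreviated Injury Scale (AIS) codes grouped by ISS body region. The
# ISS > 6 criterion mentioned in the abstract can then be applied to the result.

def injury_severity_score(ais_by_region):
    """ais_by_region maps each of the six ISS body regions to a list of AIS scores."""
    worst = [max(scores) for scores in ais_by_region.values() if scores]
    if any(s == 6 for s in worst):          # any AIS 6 injury fixes ISS at 75
        return 75
    top_three = sorted(worst, reverse=True)[:3]
    return sum(s * s for s in top_three)

# Hypothetical rider with lower-extremity (AIS 3), thorax (AIS 2) and face (AIS 1) injuries
rider = {"head_neck": [], "face": [1], "thorax": [2],
         "abdomen": [], "extremities": [3, 2], "external": []}

iss = injury_severity_score(rider)
print(iss, iss > 6)    # 14 True -> counted as higher severity (ISS > 6) in this sketch
```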

Keywords: Motorcycle crash, passenger vehicle, in-depth crash investigation, injury mechanism.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1058
3 A Comprehensive Key Performance Indicators Dashboard for Emergency Medical Services

Authors: G. Feletti, D. Tedesco, P. Trucco

Abstract:

The present study aims to develop a dashboard of Key Performance Indicators (KPIs) to enhance the information and predictive capabilities of Emergency Medical Services (EMS) systems, supporting both operational and strategic decisions of different actors. The research methodology begins with a review of the technical-scientific literature on the indicators currently used to measure EMS performance. It emerges that current studies focus on two distinct areas with independent objectives: the ambulance service, a fundamental component of pre-hospital health treatment, and patient care in the Emergency Department (ED). Conversely, the perspective proposed by this study is an integrated view of the ambulance service process and the ED process, both essential to ensuring high quality of care and patient safety. The proposal thus covers the end-to-end healthcare service process and, as such, captures the interconnection between the two EMS processes, the pre-hospital and the hospital one, linked by the assignment of the patient to a specific ED; in this way, the entire patient management can be optimized. Attention is therefore also paid to EMS aspects that the current literature tends to neglect or underestimate. In particular, integrating the two processes makes it possible to evaluate the advantage of an ED selection decision that has visibility of ED saturation status and therefore considers, besides the distance, the available resources and the expected waiting times. Starting from a critical review of the KPIs proposed in the extant literature, the dashboard was designed: the large number of analyzed KPIs was reduced by first eliminating those not in line with the aim of the study and then those supporting a similar functionality. The KPIs finally selected were tested on a realistic dataset, which led us to exclude additional indicators because the data required for their computation were unavailable. The final dashboard, discussed and validated by experts in the field, includes a variety of KPIs able to support operational and planning decisions, early warning, and citizens' real-time awareness of ED accessibility. Associating each KPI with the EMS phase it refers to enabled the design of a well-balanced dashboard covering both the efficiency and the effectiveness objectives of the entire EMS process; indeed, traditional KPIs cover only the initial phases related to the interconnection between the ambulance service and patient care. Future developments could be directed towards building a hierarchical dashboard, composed of a high-level minimal set of KPIs measuring the basic performance of the EMS system at an aggregate level, and lower levels of KPIs providing additional and more detailed information on specific performance dimensions or EMS phases.
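
Purely as an illustration of how KPIs spanning both EMS phases might be computed for such a dashboard, the sketch below derives an ambulance response-time percentile and a mean ED waiting time from a hypothetical record layout. The field names, the simple index-based percentile and the two example records are assumptions; the study's actual KPI set and data model are not reproduced here.

```python
import statistics
from datetime import datetime, timedelta

# Minimal sketch, assuming a hypothetical record layout: two illustrative KPIs,
# one pre-hospital (ambulance response time) and one hospital-side (ED waiting time).

def minutes(delta: timedelta) -> float:
    return delta.total_seconds() / 60.0

def response_time_p90(calls):
    """Approximate 90th percentile of call-to-arrival times (minutes), by sorted index."""
    times = sorted(minutes(c["on_scene"] - c["call_received"]) for c in calls)
    return times[int(0.9 * (len(times) - 1))]

def mean_ed_waiting_time(calls):
    """Mean ED arrival-to-first-treatment time (minutes)."""
    return statistics.mean(minutes(c["first_treatment"] - c["ed_arrival"]) for c in calls)

# Two hypothetical end-to-end records (call -> scene -> ED arrival -> first treatment)
t0 = datetime(2024, 1, 1, 8, 0)
calls = [
    {"call_received": t0, "on_scene": t0 + timedelta(minutes=9),
     "ed_arrival": t0 + timedelta(minutes=35), "first_treatment": t0 + timedelta(minutes=55)},
    {"call_received": t0, "on_scene": t0 + timedelta(minutes=14),
     "ed_arrival": t0 + timedelta(minutes=40), "first_treatment": t0 + timedelta(minutes=70)},
]
print(response_time_p90(calls), mean_ed_waiting_time(calls))
```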

Keywords: Emergency Medical Services, Key Performance Indicators, Dashboard, Decision Support.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 386
2 Use of Curcumin in Radiochemotherapy Induced Oral Mucositis Patients: A Control Trial Study

Authors: Shivayogi Charantimath

Abstract:

Radiotherapy and chemotherapy are effective in treating malignancies but are associated with side effects such as oral mucositis. Chlorhexidine gluconate is one of the most commonly used mouthwashes for preventing the signs and symptoms of mucositis, but evidence shows that it has drawbacks in terms of bacterial colonization, bad breath and weaker healing properties. It is therefore essential to find a suitable alternative therapy that is more effective with minimal side effects. Curcumin, an extract of turmeric, is increasingly being studied for its wide-ranging therapeutic properties, such as antioxidant, analgesic, anti-inflammatory, antitumor, antimicrobial, antiseptic, chemo-sensitizing and radio-sensitizing effects. The present study was conducted to evaluate the efficacy and safety of topical curcumin gel on radio-chemotherapy-induced oral mucositis in cancer patients and to compare it with chlorhexidine. The study was conducted at the K.L.E. Society's Belgaum cancer hospital. Forty oral cancer patients undergoing radio-chemotherapy and presenting with oral mucositis were selected and randomly divided into two groups of 20 each. Study group A (20 patients) was advised Cure next gel for 2 weeks, and control group B (20 patients) was advised chlorhexidine gel for 2 weeks. The NRS, the Oral Mucositis Assessment Scale and the WHO mucositis scale were used for grading. The results were analyzed using SPSS 20 software: grading was compared by applying the Mann-Whitney U test, and the intergroup comparison was calculated with the Wilcoxon matched-pairs test. The NRS scores from baseline to the 1st and 2nd week follow-ups showed a significant difference in both groups. The percentage change in erythema in group A was 63.3% in the first week and 100.0% in the second week, with p = 0.0003, whereas in group B it was 34.6% in the first week and 57.7% in the second week. The intergroup comparison was significant, with p values of 0.0048 and 0.0006 for group A and group B, respectively. For ulcer size, group A showed a 35.5% change (p = 0.0010) in the 1st week and complete reduction, i.e. 103.4% (p = 0.0001), by the 2nd week, whereas group B showed a 24.7% change from baseline to the 1st week and 53.6% at the 2nd week follow-up; the intergroup comparison with the Wilcoxon matched-pairs test was significant with p = 0.0001 in group A. The WHO mucositis scores in group A showed a 29.6% change (p = 0.0004) in the first week and a 75.0% change (p = 0.0180) in the second week, which is highly significant in comparison with group B, which showed minimal changes of 20.1% in the 1st week and 33.3% in the 2nd week; the Wilcoxon p values for group A were 0.0025 at the 1st week follow-up and 0.000 at the 2nd week follow-up. Curcumin gel appears to be an effective and safer alternative to chlorhexidine gel in the treatment of oral mucositis.
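
As an illustration of the kind of statistical comparison reported above, the sketch below applies the Wilcoxon matched-pairs test to paired baseline-versus-follow-up scores and the Mann-Whitney U test to the two independent groups, using SciPy. The NRS values are hypothetical; the trial data are not reproduced here, and the pairing of tests to comparisons follows standard usage rather than the study's exact analysis plan.

```python
import numpy as np
from scipy.stats import mannwhitneyu, wilcoxon

# Minimal sketch of the statistical comparison, using hypothetical NRS pain scores (0-10).

# Group A (curcumin gel) and group B (chlorhexidine gel): baseline and 2nd-week scores
group_a_baseline = np.array([7, 8, 6, 7, 9, 8, 7, 6, 8, 7])
group_a_week2    = np.array([2, 3, 1, 2, 4, 2, 3, 1, 2, 3])
group_b_week2    = np.array([5, 6, 4, 5, 6, 5, 4, 5, 6, 5])

# Paired comparison (baseline vs. week 2 within group A) with the Wilcoxon matched-pairs test
w_stat, w_p = wilcoxon(group_a_baseline, group_a_week2)

# Comparison of the two independent groups at week 2 with the Mann-Whitney U test
u_stat, u_p = mannwhitneyu(group_a_week2, group_b_week2, alternative="two-sided")

print(f"Wilcoxon p = {w_p:.4f}, Mann-Whitney p = {u_p:.4f}")
```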

Keywords: Curcumin, chemotherapy, mucositis, radiotherapy.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2079