Search results for: retention time
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18294

14124 Investigation of an Alkanethiol Modified Au Electrode as Sensor for the Antioxidant Activity of Plant Compounds

Authors: Dana A. Thal, Heike Kahlert, Fritz Scholz

Abstract:

Thiol molecules are known to easily form self-assembled monolayers (SAM) on Au surfaces. Depending on the thiol’s structure, surface modifications via SAM can be used for electrode sensor development. In the presented work, 1-decanethiol coated polycrystalline Au electrodes were applied to indirectly assess the radical scavenging potential of plant compounds and extracts. Different plant compounds with reported antioxidant properties, as well as an extract from the plant Gynostemma pentaphyllum, were tested for their effectiveness in preventing SAM degradation on the sensor electrodes by photolytically generated radicals in aqueous media. The SAM degradation was monitored over time by differential pulse voltammetry (DPV) measurements. The results were compared to established antioxidant assays. The obtained data showed an exposure-time- and concentration-dependent degradation process of the SAM at the electrodes’ surfaces. The tested substances differed in their capacity to prevent SAM degradation. The calculated radical scavenging activities of the tested plant compounds differed between assays. The presented method offers a simple system for radical scavenging evaluation and, considering the importance of the test system in antioxidant activity evaluation, might serve as a bridging tool between in-vivo and in-vitro antioxidant assays in order to obtain more biologically relevant results in antioxidant research.

Keywords: alkanethiol SAM, plant antioxidant, polycrystalline Au, radical scavenger

Procedia PDF Downloads 282
14123 The Effect of Isokinetic Fatigue of Ankle, Knee, and Hip Muscles on the Dynamic Postural Stability Index

Authors: Masoumeh Shojaei, Natalie Gedayloo, Amir Sarshin

Abstract:

The purpose of the present study was to investigate the effect of isokinetic fatigue of the muscles around the ankle, knee, and hip on indicators of dynamic postural stability. Fifteen female university students (age 19.7 ± 0.6 years, weight 54.6 ± 9.4 kg, and height 163.9 ± 5.6 cm) participated in a within-subjects design over 5 different days. In the first session, the postural stability indices (time to stabilization after jump-landing) were assessed without fatigue using a force plate; in each subsequent session, one of the lower-limb muscle groups (the muscles around the ankles, knees, or hips) was randomly fatigued using a Biodex isokinetic dynamometer, and the indices were assessed immediately after fatigue of that muscle group. The protocol involved landing on a force plate from a dynamic state and transitioning balance into a static state. Results of repeated-measures ANOVA indicated that there was no significant difference between the time to stabilization (TTS) before and after isokinetic fatigue of the muscles around the ankle, knee, and hip in the medial-lateral direction (p > 0.05), but in the anterior-posterior (AP) direction the difference was statistically significant (p < 0.05). Least Significant Difference (LSD) post hoc test results also showed a significant difference in TTS before and after isokinetic fatigue of the knee and hip muscles in the AP direction. In other words, the knee and hip muscle groups were affected by isokinetic fatigue only in the AP direction (p < 0.05).
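
A minimal sketch (not the authors' code) of the repeated-measures comparison of TTS across fatigue conditions described above, using statsmodels; the column names and data file are hypothetical placeholders.

```python
# Repeated-measures ANOVA sketch for TTS across fatigue conditions (illustrative only).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Expected long-format columns (hypothetical): subject, condition (baseline/ankle/knee/hip), tts_ap
df = pd.read_csv("tts_ap.csv")

# One within-subject factor (fatigue condition), one dependent variable (AP-direction TTS)
res = AnovaRM(data=df, depvar="tts_ap", subject="subject", within=["condition"]).fit()
print(res)  # F-test for the condition effect; follow up with pairwise LSD comparisons
```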

Keywords: dynamic balance, fatigue, lower limb muscles, postural control

Procedia PDF Downloads 214
14122 Overview of Multi-Chip Alternatives for 2.5 and 3D Integrated Circuit Packagings

Authors: Ching-Feng Chen, Ching-Chih Tsai

Abstract:

With the size of the transistor gradually approaching the physical limit, the persistence of Moore’s Law is challenged by the difficulty of developing high numerical aperture (high-NA) lithography equipment and by issues such as short-channel effects. In the context of the ever-increasing technical requirements of portable devices and high-performance computing, relying on the continuation of the law to enhance chip density will no longer support the prospects of the electronics industry. Weighing a chip’s power consumption, performance, area, cost, and cycle time to market (PPACC) is the updated benchmark driving the evolution of advanced nanometer (nm) wafer nodes. The advent of two-and-a-half- and three-dimensional (2.5D and 3D) Very-Large-Scale Integration (VLSI) packaging based on Through Silicon Via (TSV) technology has updated traditional die assembly methods and provided a solution. This overview investigates up-to-date and cutting-edge packaging technologies for 2.5D and 3D integrated circuits (ICs) based on the updated transistor structures and technology nodes. The authors conclude that multi-chip solutions for 2.5D and 3D IC packaging are feasible means of prolonging Moore’s Law.

Keywords: Moore’s law, high numerical aperture, power consumption-performance-area-cost-cycle time to market, 2.5D and 3D very-large-scale integration, packaging, through silicon via

Procedia PDF Downloads 105
14121 Identification of Microbial Community in an Anaerobic Reactor Treating Brewery Wastewater

Authors: Abimbola M. Enitan, John O. Odiyo, Feroz M. Swalaha

Abstract:

The study of microbial ecology and function in anaerobic digestion processes is essential for controlling the biological processes and for understanding the symbiotic relationships between the microorganisms involved in converting the complex organic matter in industrial wastewater into simple molecules. In this study, the diversity and quantity of the bacterial community in granular sludge taken from the different compartments of a full-scale upflow anaerobic sludge blanket (UASB) reactor treating brewery wastewater were investigated using polymerase chain reaction (PCR) and real-time quantitative PCR (qPCR). Phylogenetic analysis revealed three major eubacterial phyla, belonging to Proteobacteria, Firmicutes, and Chloroflexi, in the full-scale UASB reactor, with different groups populating different compartments. The qPCR assay showed a high abundance of eubacteria, with concentrations increasing along the reactor’s compartments. This study extends our understanding of the diversity, topological distribution, and shifts in concentration of microbial communities in the different compartments of a full-scale UASB reactor treating brewery wastewater. The colonization of, and the trophic interactions among, these microbial populations in reducing and transforming complex organic matter within UASB reactors were established.

Keywords: bacteria, brewery wastewater, real-time quantitative PCR, UASB reactor

Procedia PDF Downloads 241
14120 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principals in Noisy Environments

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space" where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences; two are used for training and 8 for testing. Atomic index probabilities are created for each training sentence and for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB, and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN; the algorithm produces ~93% accuracy at 0 dB SNR.
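
An illustrative sketch (not the authors' implementation) of the final classification step: each sentence is summarized by a probability vector over atomic (T-F) indices, and a test sentence is assigned to the speaker whose training vector is closest in Euclidean distance. The dictionary size and example data are assumptions.

```python
import numpy as np

def index_probabilities(atom_indices, dict_size):
    """Normalized histogram of selected atomic indices for one sentence."""
    counts = np.bincount(atom_indices, minlength=dict_size)
    return counts / counts.sum()

def classify(test_prob, train_probs):
    """Return the speaker id with the smallest Euclidean distance to the test vector."""
    dists = {spk: np.linalg.norm(test_prob - p) for spk, p in train_probs.items()}
    return min(dists, key=dists.get)

# Hypothetical usage with a 512-atom dictionary and random index sequences
train_probs = {"speaker_A": index_probabilities(np.random.randint(0, 512, 300), 512),
               "speaker_B": index_probabilities(np.random.randint(0, 512, 300), 512)}
test_prob = index_probabilities(np.random.randint(0, 512, 300), 512)
print(classify(test_prob, train_probs))
```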

Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder

Procedia PDF Downloads 277
14119 The Impact of Distributed Epistemologies on Software Engineering

Authors: Thomas Smith

Abstract:

Many hackers worldwide would agree that, had it not been for linear-time theory, the refinement of Byzantine fault tolerance might never have occurred. After years of significant research into extreme programming, we validate the refinement of simulated annealing. Maw, our new framework for unstable theory, is the solution to all of these issues.

Keywords: distributed, software engineering, DNS, DHCP

Procedia PDF Downloads 330
14118 Self-Tuning Dead-Beat PD Controller for Pitch Angle Control of a Bench-Top Helicopter

Authors: H. Mansor, S.B. Mohd-Noor, N. I. Othman, N. Tazali, R. I. Boby

Abstract:

This paper presents an improved robust Proportional-Derivative (PD) controller for a 3-Degree-of-Freedom (3-DOF) bench-top helicopter using an adaptive methodology. A bench-top helicopter is a laboratory-scale helicopter used for experimental purposes and is widely used in teaching laboratories and research. A PD controller for the 3-DOF bench-top helicopter was developed by Quanser. Experiments showed that the transient response of the designed PD controller has a very large steady-state error, i.e., 50%, which is very serious. The objective of this research is to improve the performance of the existing pitch angle PD control on the bench-top helicopter by integrating the PD controller with an adaptive controller. A standard adaptive controller usually produces zero steady-state error; however, the response time to reach the desired set point is long. Therefore, this paper proposes an adaptive dead-beat algorithm to overcome these limitations. An output response that is fast, robust, and updated online is expected. Performance comparisons have been made between the proposed self-tuning dead-beat PD controller and the standard PD controller. The efficiency of the self-tuning dead-beat controller has been proven from the test results in terms of faster settling time, zero steady-state error, and the capability of the controller to be updated online.

Keywords: adaptive control, deadbeat control, bench-top helicopter, self-tuning control

Procedia PDF Downloads 304
14117 Evaluation of the Cities Specific Characteristics in the Formation of the Safavid Period Mints

Authors: Mahmood Seyyed, Akram Salehi Heykoei, Hamidreza Safakish Kashani

Abstract:

Among the remaining resources of the past, coins are considered authentic documents and are among the most important documentary sources. Coins were minted in places called mints. The number and position of the mints in each period reflect the degree of economic power, political security, and commercial growth, and their status always fluctuated with changing political and economic conditions. Since trade grew more during the Safavid period than in previous ones, mints also took on greater importance. It seems that, on the one hand, the economic growth of the Safavid period was directly linked to the number and locations of the mints at that time, and on the other hand, mints were established in certain places because of the specific characteristics of those cities and regions. The increase in the number of mints in the north of the country due to the growth of the silk trade, and in the west and northwest due to political and commercial relations with the Ottoman Empire, as well as characteristics such as the existence of mines and location on the Silk Road and communication routes, are all findings of this investigation. Accordingly, in this article the researcher examines the characteristics that gave a city priority for hosting a mint. Considering that in the various historical periods the mints were based in the cities that were most important in political and social terms, this article examines the specific characteristics of cities in the formation of the mints of the Safavid period.

Keywords: documentary sources, coins, mint, city, Safavid

Procedia PDF Downloads 246
14116 A Feasibility Study of Producing Biofuels from Textile Sludge by Torrefaction Technology

Authors: Hua-Shan Tai, Yu-Ting Zeng

Abstract:

In modern industrial society, enormous amounts of sludge from various industries are constantly produced; currently, most of the sludge is treated by landfill and incineration. However, neither treatment is ideal because of the limited land available for landfill and the secondary pollution caused by incineration. Consequently, treating industrial sludge appropriately has become an urgent issue of environmental protection. In order to address the problem of this massive amount of sludge, this study uses textile sludge, which is the major source of waste sludge in Taiwan, as the raw material for torrefaction treatments. To investigate the feasibility of producing biofuels from textile sludge by torrefaction, experiments were conducted at temperatures of 150, 200, 250, 300, and 350 °C, with heating rates of 15, 20, 25, and 30 °C/min, and with residence times of 30 and 60 minutes. The results revealed that the mass yields after torrefaction were approximately in the range of 54.9 to 93.4%. The energy densification ratios were approximately in the range of 0.84 to 1.10, and the energy yields were approximately in the range of 45.9 to 98.3%. The volumetric densities were approximately in the range of 0.78 to 1.14, and the volumetric energy densities were approximately in the range of 0.65 to 1.18. In summary, the optimum energy yield (98.3%) was reached with a terminal temperature of 150 °C, a heating rate of 20 °C/min, and a residence time of 30 minutes; the corresponding mass yield, energy densification ratio, and volumetric energy density were 92.2%, 1.07, and 1.15, respectively. These results indicate that the solid products after torrefaction are easy to preserve, which not only enhances the quality of the product but also achieves the purpose of developing the material into a fuel.
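
A hedged illustration of how the torrefaction performance indices reported above are commonly computed; the masses, volumes, and heating values below are made-up example numbers, not data from this study.

```python
def torrefaction_indices(m_raw, m_torr, hhv_raw, hhv_torr, v_raw, v_torr):
    """Standard solid-fuel torrefaction indices from raw/torrefied mass, HHV and volume."""
    mass_yield = m_torr / m_raw                          # solid (mass) yield
    energy_densification = hhv_torr / hhv_raw            # energy densification ratio
    energy_yield = mass_yield * energy_densification     # energy yield
    volumetric_density = (m_torr / v_torr) / (m_raw / v_raw)
    volumetric_energy_density = volumetric_density * energy_densification
    return (mass_yield, energy_densification, energy_yield,
            volumetric_density, volumetric_energy_density)

# Example: 100 g of sludge torrefied to 92.2 g, with HHV rising from 15.0 to 16.1 MJ/kg
print(torrefaction_indices(100.0, 92.2, 15.0, 16.1, 120.0, 115.0))
```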

Keywords: biofuel, biomass energy, textile sludge, torrefaction

Procedia PDF Downloads 305
14115 Production and Characterization of Ce3+: Si2N2O Phosphors for White Light-Emitting Diodes

Authors: Alparslan A. Balta, Hilmi Yurdakul, Orkun Tunckan, Servet Turan, Arife Yurdakul

Abstract:

Si2N2O (sinoite) is an inorganic oxynitride material that is a promising phosphor candidate for white light-emitting diodes (WLEDs). However, there is currently limited knowledge on the synthesis of Si2N2O for this purpose. Here, to the best of the authors’ knowledge, we report for the first time the production of Si2N2O-based phosphors from CeO2, SiO2, and Si3N4 as the main starting powders, with a Li2O sintering additive, through the spark plasma sintering (SPS) route. The processing parameters, e.g., pressure, temperature, and sintering time, were optimized to obtain monophase Si2N2O-containing samples. The lattice parameter, crystallite size, and amounts of the phases formed were characterized in detail by X-ray diffraction (XRD). Grain morphology, particle size, and distribution were analyzed by scanning and transmission electron microscopy (SEM and TEM). Cathodoluminescence (CL) in the SEM and photoluminescence (PL) analyses were conducted on the samples to determine the excitation and emission characteristics of Ce3+-activated Si2N2O. Results showed that the Si2N2O phase was obtained in a maximum ratio of 90% by sintering for 15 minutes at 1650 °C under 30 MPa pressure. Based on the SEM-CL and PL measurements, the Ce3+: Si2N2O phosphor shows a broad emission peak between 400 and 700 nm that corresponds to white light. The present research was supported by TUBITAK under project number 217M667.

Keywords: cerium, oxynitride, phosphors, sinoite, Si₂N₂O

Procedia PDF Downloads 93
14114 Optimization of the Fabrication Process for Particleboards Made from Oil Palm Fronds Blended with Empty Fruit Bunch Using Response Surface Methodology

Authors: Ghazi Faisal Najmuldeen, Wahida Amat-Fadzil, Zulkafli Hassan, Jinan B. Al-Dabbagh

Abstract:

The objective of this study was to determine the optimum fabrication process variables for producing particleboards from oil palm frond (OPF) particles and empty fruit bunch fiber (EFB). Response surface methodology was employed to analyse the effects of hot press temperature (150–190 °C), press time (3–7 minutes), and EFB blending ratio (0–40%) on the particleboards’ modulus of rupture, modulus of elasticity, internal bonding, water absorption, and thickness swelling. A Box-Behnken experimental design was carried out to develop the statistical models used for the optimisation of the fabrication process variables. All factors were found to be statistically significant for the particleboard properties. The statistical analysis indicated that all models showed a significant fit with the experimental results. The optimum particleboard properties were obtained at the optimal fabrication process conditions: press temperature 186 °C, press time 5.7 min, and EFB/OPF ratio 30.4%. Incorporating oil palm fronds and empty fruit bunch to produce particleboards improved the particleboard properties. The OPF–EFB particleboards fabricated at the optimized conditions satisfied the ANSI A208.1-1999 specification for general purpose particleboards.
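
A minimal sketch of fitting the second-order response surface model typically used with a Box-Behnken design (not the authors' code); the coded factor settings and response values below are random placeholders for temperature, time, and EFB ratio against one response such as modulus of rupture.

```python
import numpy as np

def quadratic_design_matrix(X):
    """Full second-order RSM model: intercept, linear, two-way interaction and squared terms."""
    t, p, r = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), t, p, r,      # linear terms
                            t * p, t * r, p * r,           # interactions
                            t**2, p**2, r**2])             # quadratic terms

# Hypothetical coded factor settings (-1, 0, +1) and responses
X = np.random.choice([-1.0, 0.0, 1.0], size=(15, 3))
y = np.random.rand(15)

beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
print(beta)  # coefficients of the fitted response surface, to be optimized over the factor space
```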

Keywords: empty fruit bunch fiber, oil palm fronds, particleboards, response surface methodology

Procedia PDF Downloads 203
14113 Reinforcement of Local Law into Government Policy to Address Conflict of Utilization of Sea among Small Fishermen

Authors: Ema Septaria, Muhammad Yamani, N. S. B. Ambarini

Abstract:

The problem began with the imposition of fines by Ipuh small fishermen on fishing vessels encroaching on the customary catchment area of Ipuh, a village in Muko-Muko, Bengkulu, Indonesia. Two main reasons underlie this: fishermen from outside Ipuh came and fished in Ipuh waters using trawls as their gear, and the number of fish has decreased over time as a result of irresponsible fishing practices. Such conflict has existed for a long time. Indonesia’s governing laws do not regulate the utilization of sea territory by small fishermen, so when conflict appears there is a rechtsvacuum (legal vacuum) on how to resolve it, and this leads to chaos in society. In Ipuh itself, there has long been a local law on fisheries which the community still adheres to, because they believe that holding to this law will keep the fishery sustainable. This is empirical legal research with a socio-legal approach. The results of this study show that even though the statutes do not regulate in detail the utilization of sea territory by small fishermen, an article in the Fisheries Act states that fisheries activities have to pay attention to local law and community participation. Furthermore, the Constitution provides that the land, the waters, and the natural resources within shall be under the powers of the State and shall be used to the greatest benefit of the people. With this power, the Government has to make a policy that reinforces what has been ruled in Ipuh local law. In addition, the Bengkulu Governor has to involve the Ipuh community directly in managing their fisheries to ensure the sustainability of the fisheries therein.

Keywords: local law, reinforcement, conflict, sea utilization, small fishermen

Procedia PDF Downloads 293
14112 Applied Actuator Fault Accommodation in Flight Control Systems Using Fault Reconstruction Based FDD and SMC Reconfiguration

Authors: A. Ghodbane, M. Saad, J. F. Boland, C. Thibeault

Abstract:

Historically, actuator redundancy was used to deal with faults occurring suddenly in flight systems. This technique was generally expensive and time-consuming and involved increased weight and space in the system. Therefore, nowadays, on-line fault diagnosis of actuators and fault accommodation play a major role in the design of avionic systems. These approaches, known as Fault Tolerant Flight Control systems (FTFCs), are able to adapt to such sudden faults while keeping avionics systems lighter and less expensive. In this paper, an FTFC system based on the Geometric Approach and a Reconfigurable Flight Control (RFC) is presented. The Geometric Approach is used for cosmic-ray-induced fault reconstruction, while Sliding Mode Control (SMC) based on Lyapunov stability theory is designed for the reconfiguration of the controller in order to compensate for the fault effect. Matlab®/Simulink® simulations are performed to illustrate the effectiveness and robustness of the proposed flight control system against faulty actuator signals caused by cosmic rays. The results demonstrate the successful real-time implementation of the proposed FTFC system on a non-linear 6-DOF aircraft model.
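
A simplified sliding mode control sketch, illustrative only and not the authors' flight controller: a first-order sliding surface s = c*e + de drives the tracking error of a double-integrator state to zero despite a bounded additive actuator fault. Gains, the fault profile, and the plant are assumptions.

```python
import numpy as np

def smc_step(e, de, c=2.0, K=5.0, phi=0.05):
    """One SMC update; a saturation layer of width phi replaces sign() to limit chattering."""
    s = c * e + de                       # sliding surface
    sat = np.clip(s / phi, -1.0, 1.0)    # boundary-layer approximation of sign(s)
    return -c * de - K * sat             # equivalent control plus switching term

# Tiny simulation of x'' = u + fault, regulating x to zero
x, v, dt = 1.0, 0.0, 0.01
for k in range(500):
    fault = 0.5 if k > 200 else 0.0      # hypothetical additive actuator fault
    u = smc_step(x, v)
    v += (u + fault) * dt
    x += v * dt
print(round(x, 4))                       # state driven close to zero despite the fault
```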

Keywords: actuators’ faults, fault detection and diagnosis, fault tolerant flight control, sliding mode control, geometric approach for fault reconstruction, Lyapunov stability

Procedia PDF Downloads 394
14111 Determinants of Aggregate Electricity Consumption in Ghana: A Multivariate Time Series Analysis

Authors: Renata Konadu

Abstract:

In Ghana, electricity has become the main form of energy that all sectors of the economy rely on for their businesses. Therefore, as the economy grows, the demand for and consumption of electricity also grow due to this heavy dependence. However, since the supply of electricity has not increased to match the demand, there have been frequent power outages and load shedding, affecting business performance. To solve this problem and advance policies to secure electricity in Ghana, it is imperative that the factors that cause consumption to increase be analysed, considering the three classes of consumers: residential, industrial, and non-residential. The main argument, however, is that exports of electricity to neighbouring countries should be included in the electricity consumption model and considered as one of the significant factors that can decrease or increase consumption. The author made use of multivariate time series data from 1980 to 2010 and econometric models such as Ordinary Least Squares (OLS) and the Vector Error Correction Model (VECM). Findings show that GDP growth, urban population growth, electricity exports, and industry value added to GDP were cointegrated. The results also showed that there is unidirectional causality from electricity exports, GDP growth, and industry value added to GDP to electricity consumption in the long run. In the short run, however, causality was found among all the variables and electricity consumption. The results have useful implications for energy policy makers, especially with regard to electricity consumption, demand, and supply.
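
A hedged sketch of the long-run/short-run analysis described above using statsmodels' VECM; the variable names, lag order, and data file are placeholders, not the author's dataset or specification.

```python
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# Hypothetical columns: elec_cons, gdp_growth, urban_pop_growth, elec_exports, industry_va
data = pd.read_csv("ghana_energy_1980_2010.csv", index_col=0)

# Johansen-type test for the number of cointegrating relationships
rank = select_coint_rank(data, det_order=0, k_ar_diff=1)

model = VECM(data, k_ar_diff=1, coint_rank=rank.rank, deterministic="ci")
res = model.fit()

print(res.alpha)   # adjustment (error-correction) coefficients: short-run response to disequilibrium
print(res.beta)    # cointegrating vectors: the long-run relationships among the variables
```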

Keywords: electricity consumption, energy policy, GDP growth, vector error correction model

Procedia PDF Downloads 418
14110 Tempo-Spatial Pattern of Progress and Disparity in Child Health in Uttar Pradesh, India

Authors: Gudakesh Yadav

Abstract:

Uttar Pradesh is one of the poorest-performing states of India in terms of child health. Using data from the three rounds of NFHS and two rounds of DLHS, this paper attempts to examine tempo-spatial changes in child health and care practices in Uttar Pradesh and its regions. Rate-ratio, CI, multivariate, and decomposition analyses have been used for the study. Findings demonstrate that child health care practices have improved over time in all regions of the state; however, the western and southern regions registered the lowest progress in child immunization. Moreover, there was no decline in the prevalence of diarrhea and ARI over the period, and it remains critically high in the western and southern regions. These regions also performed poorly in giving ORS and in diarrhea and ARI treatment. Public health services are the least preferred for diarrhea and ARI treatment. Results from the decomposition analysis reveal that rural residence, mother’s illiteracy, and wealth contributed most to the low utilization of child health care practices consistently over the period. The study calls for targeted interventions for vulnerable children to accelerate child health care service utilization. Poorly performing regions should be targeted and routinely monitored on poor child health indicators.

Keywords: Acute Respiratory Infection (ARI), decomposition, diarrhea, inequality, immunization

Procedia PDF Downloads 282
14109 Effective Use of X-Box Kinect in Rehabilitation Centers of Riyadh

Authors: Reem Alshiha, Tanzila Saba

Abstract:

Physical rehabilitation is the process of helping people recover and return to former activities that have been interrupted by external factors such as car accidents, old age, strokes, chronic diseases, and sports injuries. The cost of hiring a personal nurse or driving the patient to and from the hospital can be high, and doing so is time-consuming. There are also other factors to take into account, such as forgetfulness, boredom, and lack of motivation. To address this dilemma, some experts have developed rehabilitation software to be used with the Microsoft Kinect to help patients and their families with in-home rehabilitation. In-home rehabilitation software is becoming more and more popular, since it is more convenient for all parties involved with the patient. In contrast to other costly market-based systems with no portability, Microsoft’s Kinect is a portable motion sensor that reads and interprets body movements. New software development has made rehabilitation games available for use at home for the convenience of the patient. The games benefit their users (rehabilitation patients) by saving time and money. There are many software packages used with the Kinect for rehabilitation, but the software chosen in this research is Kinectotherapy. The Kinectotherapy software was used with rehabilitation patients in Riyadh clinics to test its acceptance by patients and their physicians. In this study, we used the Kinect because it is affordable, portable, and easy to access, in contrast to expensive market-based motion sensors. This paper explores the importance of in-home rehabilitation using the Kinect with the Kinectotherapy software. The software targets both upper and lower limbs, but in this research the main focus is on upper-limb functionality. However, in-home rehabilitation is applicable to all patients with motor disability, provided the patient has some self-reliance. The targeted subjects are patients with minor motor impairment who are somewhat independent in their mobility. The presented work is the first to consider the implementation of in-home rehabilitation with real-time feedback to the patient and physician. This research proposes the implementation of in-home rehabilitation in Riyadh, Saudi Arabia. The findings show that most of the patients are interested in and motivated to use the in-home rehabilitation system in the future. The main value of the software application lies in these factors: it improves patient engagement through stimulating rehabilitation, it is a low-cost rehabilitation tool, and it reduces the need for expensive one-to-one clinical contact. Rehabilitation is a crucial treatment that can improve the quality of life and the confidence and self-esteem of the patient.

Keywords: x-box, rehabilitation, physical therapy, rehabilitation software, kinect

Procedia PDF Downloads 323
14108 Optimization of Two Quality Characteristics in Injection Molding Processes via Taguchi Methodology

Authors: Joseph C. Chen, Venkata Karthik Jakka

Abstract:

The main objective of this research is to optimize tensile strength and dimensional accuracy in injection molding processes using Taguchi parameter design. An L16 orthogonal array (OA) is used in the Taguchi experimental design with five control factors at four levels each and with vibration as a non-controllable factor. A total of 32 experiments were designed to obtain the optimal parameter settings for the process. The optimal parameters identified for shrinkage are: shot volume, 1.7 cubic inches (A4); mold temperature, 130 °F (B1); hold pressure, 3200 psi (C4); injection speed, 0.61 in³/sec (D2); and hold time, 14 seconds (E2). The optimal parameters identified for tensile strength are: shot volume, 1.7 cubic inches (A4); mold temperature, 160 °F (B4); hold pressure, 3100 psi (C3); injection speed, 0.69 in³/sec (D4); and hold time, 14 seconds (E2). The Taguchi-based optimization framework was systematically and successfully implemented to obtain an adjusted optimal setting in this research. The mean shrinkage of the confirmation runs is 0.0031%, and the tensile strength value was found to be 3148.1 psi. Both outcomes are far better than the baseline, and defects have been further reduced in the injection molding processes.
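
An illustrative computation (not the authors' code) of the two Taguchi signal-to-noise ratios relevant to the quality characteristics above: smaller-the-better for shrinkage and larger-the-better for tensile strength. The replicate values are made-up examples around the reported confirmation-run results.

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi S/N ratio for responses to be minimized (e.g., shrinkage)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

def sn_larger_the_better(y):
    """Taguchi S/N ratio for responses to be maximized (e.g., tensile strength)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

shrinkage_reps = [0.0031, 0.0035, 0.0029]     # % shrinkage, hypothetical replicates of one run
tensile_reps = [3148.1, 3120.4, 3165.0]       # psi, hypothetical replicates of one run

print(sn_smaller_the_better(shrinkage_reps))  # higher S/N = less shrinkage
print(sn_larger_the_better(tensile_reps))     # higher S/N = stronger part
```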

Keywords: injection molding processes, Taguchi parameter design, tensile strength, high-density polyethylene (HDPE)

Procedia PDF Downloads 176
14107 A 1H NMR-Linked PCR Modelling Strategy for Tracking the Fatty Acid Sources of Aldehydic Lipid Oxidation Products in Culinary Oils Exposed to Simulated Shallow-Frying Episodes

Authors: Martin Grootveld, Benita Percival, Sarah Moumtaz, Kerry L. Grootveld

Abstract:

Objectives/Hypotheses: The adverse health effect potential of dietary lipid oxidation products (LOPs) has evoked much clinical interest. Therefore, we employed a ¹H NMR-linked Principal Component Regression (PCR) chemometrics modelling strategy to explore relationships between data matrices comprising (1) aldehydic LOP concentrations generated in culinary oils/fats when exposed to laboratory-simulated shallow-frying practices, and (2) the prior saturated (SFA), monounsaturated (MUFA), and polyunsaturated (PUFA) fatty acid contents of such frying media (FM), together with their heating time-points at a standard frying temperature (180 °C). Methods: Corn, sunflower, extra virgin olive, rapeseed, linseed, canola, coconut, and MUFA-rich algae frying oils, together with butter and lard, were heated according to laboratory-simulated shallow-frying episodes at 180 °C, and FM samples were collected at time-points of 0, 5, 10, 20, 30, 60, and 90 min (n = 6 replicates per sample). Aldehydes were determined by ¹H NMR analysis (Bruker AV 400 MHz spectrometer). The first (dependent output variable) PCR data matrix comprised aldehyde concentration scores vectors (PC1* and PC2*), whilst the second (predictor) one incorporated those from the fatty acid content/heating time variables (PC1-PC4) and their first-order interactions. Results: Structurally complex trans,trans- and cis,trans-alka-2,4-dienals, 4,5-epoxy-trans-2-alkenals, and 4-hydroxy-/4-hydroperoxy-trans-2-alkenals (group I aldehydes, predominantly arising from PUFA peroxidation) loaded strongly and positively on PC1*, whereas n-alkanals and trans-2-alkenals (group II aldehydes, derived from both MUFA and PUFA hydroperoxides) loaded strongly and positively on PC2*. PCR analysis of these scores vectors (SVs) demonstrated that PCs 1 (positively-loaded linoleoylglycerols and [linoleoylglycerol]:[SFA] content ratio), 2 (positively-loaded oleoylglycerols and negatively-loaded SFAs), 3 (positively-loaded linolenoylglycerols and [PUFA]:[SFA] content ratios), and 4 (exclusively orthogonal sampling time-points) all contributed powerfully to the aldehydic PC1* SVs (p < 10⁻³ to < 10⁻⁹), as did all PC1-3 x PC4 interactions (p < 10⁻⁵ to < 10⁻⁹). PC2* was also markedly dependent on all the above PC SVs (PC2 > PC1 and PC3) and on the interactions of PC1 and PC2 with PC4 (p < 10⁻⁹ in each case), but not on the PC3 x PC4 contribution. Conclusions: NMR-linked PCR analysis is a valuable strategy for (1) modelling the generation of aldehydic LOPs in heated cooking oils and other FM, and (2) tracking their unsaturated fatty acid (UFA) triacylglycerol sources therein.
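
A minimal principal component regression sketch in the spirit of the chemometrics model above (not the authors' pipeline): predictor PCs derived from fatty acid composition/heating-time variables are regressed, together with their interactions with the time PC, onto aldehyde concentration scores. All arrays here are random placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))   # e.g., SFA/MUFA/PUFA contents, content ratios, heating time
Y = rng.normal(size=(60, 2))   # e.g., PC1*/PC2* aldehyde concentration scores vectors

pcs = PCA(n_components=4).fit_transform(X)                 # predictor PCs (PC1-PC4)
design = np.column_stack([pcs, pcs[:, :3] * pcs[:, [3]]])  # add PC1-3 x PC4 interaction terms
model = LinearRegression().fit(design, Y)
print(model.score(design, Y))                              # variance explained by the PCR model
```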

Keywords: frying oils, lipid oxidation products, frying episodes, chemometrics, principal component regression, NMR Analysis, cytotoxic/genotoxic aldehydes

Procedia PDF Downloads 155
14106 Segmentation of Liver Using Random Forest Classifier

Authors: Gajendra Kumar Mourya, Dinesh Bhatia, Akash Handique, Sunita Warjri, Syed Achaab Amir

Abstract:

Nowadays, medical imaging has become an integral part of modern healthcare. Abdominal CT images are an invaluable means for abdominal organ investigation and have been widely studied in recent years. The diagnosis of liver pathologies is one of the major areas of current interest in the field of medical image processing and is still an open problem. To study and diagnose the liver in depth, segmentation of the liver is performed to identify which part of the liver is most affected. Manual segmentation of the liver in CT images is time-consuming and suffers from inter- and intra-observer differences. However, automatic or semi-automatic computer-aided segmentation of the liver is a challenging task due to inter-patient variability in liver shape and size. In this paper, we present a technique for automatically segmenting the liver from CT images using a Random Forest classifier. Random forests, or random decision forests, are an ensemble learning method for classification that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes of the individual trees. After comparison with various other techniques, it was found that the Random Forest classifier provides better segmentation results with respect to accuracy and speed. We validated our results using various techniques, and the method shows above 89% accuracy in all cases.
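
A hedged sketch of voxel-wise liver labelling with a random forest (not the authors' implementation). The features and labels are random placeholders; in practice the feature matrix would hold intensity and texture descriptors extracted from the CT slices.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
features = rng.normal(size=(5000, 10))                             # per-voxel feature vectors
labels = (features[:, 0] + 0.5 * features[:, 1] > 0).astype(int)   # 1 = liver, 0 = background (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))                     # accuracy on held-out voxels
```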

Keywords: CT images, image validation, random forest, segmentation

Procedia PDF Downloads 291
14105 Estimation and Comparison of Delay at Signalized Intersections Based on Existing Methods

Authors: Arpita Saha, Satish Chandra, Indrajit Ghosh

Abstract:

Delay represents the time lost by a traveler while crossing an intersection. The efficiency of traffic operation at signalized intersections is assessed in terms of the delay caused to an individual vehicle. The Highway Capacity Manual (HCM) method and Webster’s method are the most widely used in India for delay estimation. However, traffic in India is highly heterogeneous in nature with extremely poor lane discipline. Therefore, to identify the best delay estimation technique for Indian conditions, a comparison was made. In this study, seven signalized intersections from three different cities were chosen. Data were collected during both morning and evening peak hours, and only under-saturated cycles were considered. Delay was estimated based on the field data: with the help of Simpson’s 1/3rd rule, the delay of under-saturated cycles was estimated by measuring the area under the curve of queue length versus cycle time. The field-observed delay was then compared with the delay estimated using the HCM, Webster, probabilistic, Taylor’s expansion, and regression methods. The drawbacks of the existing delay estimation methods for use in Indian heterogeneous traffic conditions were identified, and the best method was proposed. It was observed that direct estimation of delay using field-measured data is more accurate than the existing conventional and modified methods.
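
An illustrative sketch of the field-delay computation described above: the total delay in an under-saturated cycle is the area under the queue-length versus time curve, integrated here with Simpson's 1/3rd rule. The queue counts, sampling interval, and arrival count are hypothetical.

```python
import numpy as np

def simpson_third(y, dt):
    """Simpson's 1/3rd rule; requires an even number of equal-width intervals."""
    y = np.asarray(y, dtype=float)
    n = len(y) - 1
    if n % 2 != 0:
        raise ValueError("Simpson's 1/3rd rule needs an even number of intervals")
    return dt / 3.0 * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum())

queue = [0, 3, 7, 10, 12, 11, 8, 5, 2, 1, 0]   # vehicles in queue every 5 s over one cycle (example)
total_delay = simpson_third(queue, dt=5.0)      # vehicle-seconds of delay in the cycle
print(total_delay / 59)                         # average delay per vehicle, assuming 59 arrivals
```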

Keywords: delay estimation technique, field delay, heterogeneous traffic, signalised intersection

Procedia PDF Downloads 283
14104 Deformation Severity Prediction in Sewer Pipelines

Authors: Khalid Kaddoura, Ahmed Assad, Tarek Zayed

Abstract:

Sewer pipelines are prone to deterioration over time. Their deterioration does not follow a fixed downward pattern; this is due to the defects that propagate through their service life. Sewer pipeline defects are categorized into distinct groups, of which the two main groups are structural and operational defects. By definition, structural defects influence the structural integrity of the sewer pipelines, such as deformation, cracks, fractures, holes, etc., whereas operational defects are the ones that affect the flow of the sewer medium in the pipelines, such as roots, debris, attached deposits, infiltration, etc. The process by which each defect emerges follows a cause-and-effect relationship. Deformation, which is a change of the sewer pipeline geometry, is one influential defect that can be found in many sewer pipelines due to many surrounding factors. This defect could lead to collapse if the deformation percentage exceeds 15%. Therefore, it is essential to predict the deformation percentage before confronting such a situation. Accordingly, this study predicts the percentage of the deformation defect in sewer pipelines using multiple regression analysis. Several factors expected to influence deformation defect severity are considered in establishing the model. In addition, this study constructs a time-based curve to understand how the defect would evolve over time. Thus, this study is expected to be an asset for decision-makers, as it provides informative conclusions about deformation defect severity. As a result, inspections will be minimized, and so will the budgets.
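
A minimal multiple-regression sketch (not the authors' model) predicting deformation severity (%) from candidate explanatory factors; the factor names, data file, and example pipe are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical columns: age (yr), depth (m), diameter (mm), soil_type (coded), deformation_pct
df = pd.read_csv("sewer_inspections.csv")

X = sm.add_constant(df[["age", "depth", "diameter", "soil_type"]])
model = sm.OLS(df["deformation_pct"], X).fit()
print(model.summary())                 # coefficients and significance of each factor

new_pipe = pd.DataFrame({"const": [1.0], "age": [35], "depth": [3.2],
                         "diameter": [300], "soil_type": [2]})
print(model.predict(new_pipe))         # predicted deformation %, to be flagged if it nears 15%
```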

Keywords: deformation, prediction, regression analysis, sewer pipelines

Procedia PDF Downloads 166
14103 Dynamic-cognition of Strategic Mineral Commodities; An Empirical Assessment

Authors: Carlos Tapia Cortez, Serkan Saydam, Jeff Coulton, Claude Sammut

Abstract:

Strategic mineral commodities (SMCs), both energy commodities and metals, have long been fundamental for human beings. There is a strong, long-run relationship between the mineral resources industry and society’s evolution, with the provision of primary raw materials becoming one of the most significant drivers of economic growth. Due to the relevance of mineral resources for the entire economy and society, an understanding of SMC market behaviour in order to simulate price fluctuations has become crucial for governments and firms. As with any human activity, SMC price fluctuations are affected by economic, geopolitical, environmental, technological, and psychological issues, in which cognition plays a major role. Cognition is defined as the capacity to store information in memory, process it, and make decisions for problem-solving or human adaptation. Thus, it has a significant role in those systems that exhibit dynamic equilibrium through time, such as economic growth. Cognition allows not only understanding of past behaviours and trends in SMC markets but also supports future expectations of demand/supply levels and prices, although speculation is unavoidable. Technological development may also be defined as a cognitive system. Since the Industrial Revolution, technological developments have had a significant influence on SMC production costs and prices, likewise allowing co-integration between commodities and market locations. This suggests a close relationship between structural breaks, technology, and price evolution. SMC price forecasting has commonly been addressed by econometric and Gaussian-probabilistic models. Econometric models may incorporate the relationships between variables; however, they are static, which leads to an incomplete view of price evolution through time. Gaussian-probabilistic models may evolve through time; however, price fluctuations are addressed by assuming random behaviour and a normal distribution, which seems far from the real behaviour of both the market and prices. Random fluctuation ignores the evolution of market events and the technical and temporal relationships between variables, giving the illusion of controlled future events. The normal distribution underestimates price fluctuations by using restricted ranges, curtailing decision-making into a pre-established space. A proper understanding of SMC price dynamics, taking into account the historical-cognitive relationship between economic, technological, and psychological factors over time, is fundamental in attempting to simulate prices. The aim of this paper is to discuss the SMC market cognition hypothesis and empirically demonstrate its dynamic-cognitive capacity. Three of the largest and most traded SMCs (oil, copper, and gold) will be assessed to examine economic, technological, and psychological cognition, respectively.

Keywords: commodity price simulation, commodity price uncertainties, dynamic-cognition, dynamic systems

Procedia PDF Downloads 439
14102 Simulation IDM for Schedule Generation of Slip-Form Operations

Authors: Hesham A. Khalek, Shafik S. Khoury, Remon F. Aziz, Mohamed A. Hakam

Abstract:

The linearity of slip-forming operations is a source of planning complications, and the operation can be subject to bottlenecks at any point, so careful planning is required in order to achieve success. On the other hand, discrete-event simulation (DES) concepts can be applied to simulate and analyze construction operations and to efficiently support construction scheduling. Nevertheless, the preparation of input data for construction simulation is very challenging, time-consuming, and prone to human error. Therefore, to enhance the benefits of using DES in construction scheduling, this study proposes an integrated module that establishes a framework for automating the generation of time schedules and decision support for slip-form construction projects, particularly through the project feasibility study phase, by using data exchange between project data stored in an intermediate database, the DES engine, and scheduling software. Using the stored information, the proposed system creates construction task attributes (e.g., activity durations, material quantities, and resource amounts); the DES engine then uses all the given information to create a proposal for the construction schedule automatically. This research is intended as a demonstration of a flexible slip-form project modeling, rapid scenario-based planning, and schedule generation approach that may be of interest to both practitioners and researchers.
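
A toy discrete-event simulation sketch, using SimPy as one possible DES engine (not the proposed module itself), of sequential slip-form lifts competing for a single crew resource; the durations and the number of lifts are hypothetical.

```python
import simpy

def lift(env, name, crew, duration_h):
    # Slip-form lifts are linear: only one lift proceeds at a time on the shared crew
    with crew.request() as req:
        yield req
        yield env.timeout(duration_h)
        print(f"{name} finished at t = {env.now:.1f} h")

env = simpy.Environment()
crew = simpy.Resource(env, capacity=1)
for i, dur in enumerate([4.0, 4.5, 4.0, 5.0, 4.2], start=1):
    env.process(lift(env, f"lift {i}", crew, dur))
env.run()   # prints a simple schedule of finish times, the raw material of an automated schedule
```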

Keywords: discrete-event simulation, modeling, construction planning, data exchange, scheduling generation, EZstrobe

Procedia PDF Downloads 357
14101 Evaluation of Hancornia speciosa Gomes Lyophilization at Different Stages of Maturation

Authors: D. C. Soares, J. T. S. Santos, D. G. Costa, A. K. S. Abud, T. P. Nunes, A. V. D. Figueiredo, A. M. de Oliveira Junior

Abstract:

Mangabeira (Hancornia speciosa Gomes), a plant native to Brazil, is found growing spontaneously in various regions of the country. The high perishability of tropical fruits such as mangaba makes it necessary to use technologies that promote conservation, aiming to increase the shelf life of this fruit and add value. The objective of this study was to compare the lyophilization curve behaviour of mangabas of different sizes and maturation stages. The fruits were freeze-dried for a period of approximately 45 hours in a Liotop model L-108 lyophilizer. Fruits between 38 and 58 mm in diameter were considered large and those between 23 and 28 mm in diameter small, at two maturation states, intermediate and mature. The drying curves of large mangabas at both maturation states showed linear behaviour throughout the process, while the kinetic drying curves of the small fruits, independent of maturation state, showed typical drying behaviour, with all steps well defined. These results indicate that the lyophilization time was suitable for the small mangabas, which was not the case for the larger ones. This may indicate that the large mangabas require a longer time to reach the equilibrium level, as happens with the small fruits, which attain constant moisture at the end of the process. For both types of fruit, water activity, acidity, protein, lipid, and vitamin C were analysed before and after the process.

Keywords: freeze dryer, mangaba, conservation, chemical characteristics

Procedia PDF Downloads 280
14100 Transient Level in the Surge Chamber at the Robert-bourassa Generating Station

Authors: Maryam Kamali Nezhad

Abstract:

The Robert-Bourassa development (LG-2), the first to be built on the Grande Rivière, comprises two sets of eight turbine-generator units each, the East and West powerhouses. Each powerhouse has two tailrace tunnels with an average length of about 1178 m. The LG-2A powerhouse houses 6 turbine-generator units, with the water discharged through two tailrace tunnels about 1330 m long. The objectives of this work at RB (LG-2) are: (1) to establish a new maximum transient level in the surge chamber, (2) to define the new maximum equipment flow rate for the future turbine-generator units, and (3) to ensure safe access to various intervention locations in the surge chamber. The transient levels under normal operating conditions at the RB plant were determined in 2001 by the Hydraulics Unit of HQE using the "Chamber" software. It is a one-dimensional mass-oscillation calculation program used to determine the variation of the water level in the surge chamber located downstream of a power plant during load shedding of the power plant units; it can also be used in the case of a surge tank upstream of a power plant. The RB (LG-2) plant study is based on the theoretical nominal geometry of the chamber and the tailrace tunnels and on the flow-level relationship at the outlet of the tunnels established during design. The software is used in such a way that the results have an acceptable margin of safety, especially with respect to the maximum transient level (e.g., resumption of flow at an inopportune time), to take into account the turbulent and three-dimensional aspects of the actual flow in the chamber. Note that the transient levels depend on the water levels in the river and in the surge chambers at steady state. These data are established in the HQP CRP database and updated from time to time. The maximum transient levels in the surge chambers of the RB-East and RB-West powerhouses were revised based on the latest update (set 4) of the in-river rating curves and the steady-state surge chamber water levels. The results of the revision were also used to update the technical advice on the operating conditions for access to the aforementioned surge chamber, taking into account the revised calculated water levels.
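
A generic one-dimensional mass-oscillation sketch in the spirit of the "Chamber" type of calculation described above (not Hydro-Québec's software): the surge-chamber water level after a full load rejection is integrated with a simple explicit scheme. All geometric and loss values are illustrative placeholders, not LG-2 data.

```python
import numpy as np

g = 9.81
A_s, A_t, L = 500.0, 80.0, 1178.0   # chamber area (m2), tunnel area (m2), tunnel length (m)
c = 1.0e-5                          # tunnel head-loss coefficient (s2/m5), hypothetical
z_res = 0.0                         # river/reservoir level taken as datum (m)

Q = 1000.0                          # initial tunnel flow (m3/s) before load rejection
z = z_res - c * Q * abs(Q)          # initial steady-state chamber level
dt, t_end = 0.5, 1200.0

z_max = z
for _ in np.arange(0.0, t_end, dt):
    # Tunnel momentum: L/(g*A_t) * dQ/dt = z_res - z - c*Q|Q|
    Q += g * A_t / L * (z_res - z - c * Q * abs(Q)) * dt
    # Chamber continuity after load rejection (turbine flow = 0): A_s * dz/dt = Q
    z += Q / A_s * dt
    z_max = max(z_max, z)

print(round(z_max, 2))              # maximum transient level above the datum (m)
```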

Keywords: generating station, surge chamber, maximum transient level, hydroelectric power station, turbine-generator, reservoir

Procedia PDF Downloads 66
14099 Study on Acoustic Source Detection Performance Improvement of Microphone Array Installed on Drones Using Blind Source Separation

Authors: Youngsun Moon, Yeong-Ju Go, Jong-Soo Choi

Abstract:

Most drones currently assigned surveillance/reconnaissance missions are equipped primarily with optical equipment, but a microphone array is also needed to estimate the location of an acoustic source, as it can provide additional information in the absence of optical equipment. The purpose of this study is to estimate the Direction of Arrival (DOA), based on Time Difference of Arrival (TDOA) estimation, of an acoustic source from a drone. The problem is that it is impossible to measure the target acoustic source clearly because of the drone noise. The way to overcome this problem is to separate the drone noise and the target acoustic source using Blind Source Separation (BSS) based on Independent Component Analysis (ICA). ICA can be performed assuming that the drone noise and the target acoustic source are independent and that each signal is non-Gaussian. To maximize the non-Gaussianity of each signal, we use negentropy and kurtosis, based on probability theory. As a result, we can improve the TDOA estimation and DOA estimation of the target source in the noisy environment. We simulated the performance of the DOA algorithm with the BSS algorithm applied and validated the simulation through experiments in an anechoic wind tunnel.
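
A hedged two-stage sketch, not the authors' implementation: FastICA separates the mixed microphone signals so the drone-noise component can be identified and suppressed, and the TDOA between a microphone pair is then estimated by cross-correlation and converted to a DOA angle. The signals, geometry, and sample rate below are illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA

fs, c, d = 16000, 343.0, 0.2               # sample rate (Hz), speed of sound (m/s), mic spacing (m)

# Step 1: BSS/ICA - unmix drone noise and target source (placeholder recordings)
mixed = np.random.randn(16000, 2)          # (n_samples, n_mics)
components = FastICA(n_components=2, random_state=0).fit_transform(mixed)
# one component would correspond to drone noise, the other to the target source

# Step 2: TDOA by cross-correlation between (noise-suppressed) microphone channels
def tdoa(x, y, fs):
    corr = np.correlate(x, y, mode="full")
    return (np.argmax(corr) - (len(y) - 1)) / fs

tau = tdoa(mixed[:, 0], mixed[:, 1], fs)   # with real data, use the denoised channels here
theta = np.degrees(np.arcsin(np.clip(tau * c / d, -1.0, 1.0)))
print(theta)                               # estimated DOA (degrees) for a two-microphone pair
```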

Keywords: aeroacoustics, acoustic source detection, time difference of arrival, direction of arrival, blind source separation, independent component analysis, drone

Procedia PDF Downloads 144
14098 Parametric Investigation of Wire-Cut Electric Discharge Machining on Steel ST-37

Authors: Mearg Berhe Gebregziabher

Abstract:

Wire-cut electric discharge machining (WEDM) is one of the advanced machining processes. Despite the development of the current manufacturing sector, no research work had previously been done on the optimization of the process parameters for the Steel St-37 workpiece material available in Ethiopia. The Material Removal Rate (MRR) is considered as the experimental response of the WEDM process. The main objective of this work is to investigate and optimize the process parameters affecting machining quality that give a high MRR when machining Steel St-37. Throughout the investigation, Pulse-on Time (TON), Pulse-off Time (TOFF), and Wire Feed velocity (WR) are used as variable parameters at three different levels, while wire tension, dielectric flow rate, type of dielectric fluid, and the workpiece and wire materials are kept constant for each experiment. The Taguchi methodology, as per Taguchi's standard L9 (3^3) Orthogonal Array (OA), has been used to investigate the parameters' effects and to predict the optimal combination of process parameters for MRR. The Signal-to-Noise ratio (S/N) and Analysis of Variance (ANOVA) were used to analyze the effects of the parameters and to identify the optimum cutting parameters for MRR. The MRR was measured using an Electronic Balance Model SI-32. The results indicated that the most significant factors for MRR are TOFF, then TON, and lastly WR. The Taguchi analysis shows that the optimal process parameter combination is A2B2C2, i.e., TON 6 μs, TOFF 29 μs, and WR 2 m/min. At this level, an MRR of 0.414 g/min was achieved.
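
An illustrative main-effects analysis (not the authors' code) for an L9 design like the one above: average the MRR over the runs at each level of TON, TOFF, and WR, then pick the level with the highest mean for each factor. The level assignments follow the standard L9 (3^3) array; the MRR values are hypothetical.

```python
import numpy as np

# Standard L9 (3^3): columns are the levels (1-3) of TON, TOFF, WR for the nine runs
L9 = np.array([[1, 1, 1], [1, 2, 2], [1, 3, 3],
               [2, 1, 2], [2, 2, 3], [2, 3, 1],
               [3, 1, 3], [3, 2, 1], [3, 3, 2]])
mrr = np.array([0.21, 0.35, 0.28, 0.30, 0.41, 0.33, 0.25, 0.38, 0.29])  # g/min, hypothetical

for j, name in enumerate(["TON", "TOFF", "WR"]):
    means = [mrr[L9[:, j] == lvl].mean() for lvl in (1, 2, 3)]
    print(name, np.round(means, 3), "-> best level:", int(np.argmax(means)) + 1)
```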

Keywords: ANOVA, MRR, parameter, Taguchi method

Procedia PDF Downloads 12
14097 Jewish Law in Israel: State, Law, and Religion

Authors: Yuval Sinai

Abstract:

As part of the historical, religious and cultural heritage of the Jewish people, Jewish law is part of the legal system in Israel, which is a Jewish and democratic state. The proper degree of use of Jewish law in judicial decisions is an issue that crops up in Israeli law from time to time. This was a burning question in the 1980s in the wake of the enactment of the Foundations of Law Act 1980, which declared Jewish heritage a supplementary legal method to Israeli law. The enactment of the Basic Law: Human Dignity and Liberty 1992, which decreed that the basic Israeli legal principles must be interpreted in light of the values of a Jewish and democratic state, marks a significant change in the impact of Judaism in the law created and applied by the courts. Both of these legislative developments revived the initiative to grant a central status to Jewish law within the state law. How should Jewish law be applied in Israel’s secular courts? This is not a simple question. It is not merely a question of identifying the relevant rule of Jewish law or tracing its development from the Talmud to modern times. Nor is it the same as asking how a rabbinic court would handle the issue. It is a matter of delicate judgment to distill out of the often conflicting Jewish law sources a rule that will fit into the existing framework of Israeli law so as to advance a policy that will best promote the interests of Israel’s society. We shall point out the occasional tensions between Jewish religious law and secular law, and introduce opinions as to how reconciliation of the two can best be achieved in light of Jewish legal tradition and in light of the reality in the modern State of Israel.

Keywords: law and politics, law and religion, comparative law, law and society

Procedia PDF Downloads 54
14096 Evaluation of Oxidative Changes in Soybean Oil During Shelf-Life by Physico-Chemical Methods and Headspace-Liquid Phase Microextraction (HS-LPME) Technique

Authors: Maryam Enteshari, Kooshan Nayebzadeh, Abdorreza Mohammadi

Abstract:

In this study, the oxidative stability of soybean oil under different storage temperatures (4 and 25 °C) during a 6-month shelf-life was investigated by various analytical methods and by headspace-liquid phase microextraction (HS-LPME) coupled to gas chromatography-mass spectrometry (GC-MS). Oxidative changes were monitored by analytical parameters consisting of acid value (AV), peroxide value (PV), p-anisidine value (p-AV), thiobarbituric acid value (TBA), fatty acid profile, iodine value (IV), and oxidative stability index (OSI). In addition, the concentrations of hexanal and heptanal, as secondary volatile oxidation compounds, were determined by the HS-LPME/GC-MS technique. The rate of oxidation in the soybean oil stored at 25 °C was much higher. The AV, p-AV, and TBA gradually increased during the 6 months, while the amount of unsaturated fatty acids, the IV, and the OSI decreased. The other parameters, namely the concentrations of hexanal and heptanal and the PV, exhibited an increasing trend during the early months of storage; then, at the end of the third and fourth months, a sudden decrease was observed simultaneously in the concentrations of hexanal and heptanal and in the PV. The latter parameters increased again towards the end of the shelf-life. As a result, temperature and time were effective factors in the oxidative stability of soybean oil. Strong correlations were also found for soybean oil at 4 °C between AV and TBA (r² = 0.96), PV and p-AV (r² = 0.9), and IV and TBA (inverse, r² = 0.9), and for soybean oil stored at 4 °C between p-AV and TBA (r² = 0.99).

Keywords: headspace-liquid phase microextraction, oxidation, shelf-life, soybean oil

Procedia PDF Downloads 381
14095 A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network

Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman

Abstract:

We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, weather, etc., are integrated and processed in real-time to infer a suspect’s intended destination, chosen from a list of pre-determined high-value targets. Previously, we presented our work on the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect’s behavior. The network of cameras is represented by a directional graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of Caltrans’ “Performance Measurement System” (PeMS) dataset. We propose a Bayesian approach in which a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of ‘soft interventions’, inspired by the field of causal inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect’s movements; rather, a soft intervention may induce the suspect into making a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect’s current location, which may require the suspect to change their current course. The objective of these interventions is to gain the maximum amount of information about the suspect’s intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode where, at each step, a set of recommendations is presented to the operator to aid in decision-making. In principle, the system could operate autonomously, only prompting the operator for critical decisions, allowing the system to scale up significantly to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities to take further action. Other recommendations include a selection of road closures, i.e., soft interventions, or continued monitoring. We evaluate the performance of the proposed system using simulated scenarios where the suspect, starting at random locations, takes a noisy shortest path to their intended target. In all scenarios, the suspect’s intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect’s intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach to motivate a machine learning approach based on reinforcement learning, in order to relax some of the current limiting assumptions.
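
A hedged sketch of the core posterior update described above (not the deployed system): each new camera detection re-weights the probability of every candidate target by how consistent the observed movement is with travel toward that target. The targets, likelihood values, and detection sequence are hypothetical.

```python
def update_posterior(prior, likelihoods):
    """Bayes rule: posterior(target) is proportional to likelihood(detection | target) * prior(target)."""
    unnorm = {t: likelihoods[t] * p for t, p in prior.items()}
    total = sum(unnorm.values())
    return {t: v / total for t, v in unnorm.items()}

posterior = {"stadium": 1/3, "airport": 1/3, "port": 1/3}     # uniform prior over candidate targets

# Likelihood of each detection given each intended target (e.g., derived from shortest-path
# travel times on the road graph); values below are made up for illustration
detections = [{"stadium": 0.7, "airport": 0.2, "port": 0.1},
              {"stadium": 0.6, "airport": 0.3, "port": 0.1}]

for lik in detections:
    posterior = update_posterior(posterior, lik)
print(posterior)   # report the target once its probability exceeds a decision threshold
```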

Keywords: autonomous surveillance, Bayesian reasoning, decision support, interventions, patterns of life, predictive analytics, predictive insights

Procedia PDF Downloads 102