Search results for: real time kinematics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20497

13807 The Effect of Isokinetic Fatigue of Ankle, Knee, and Hip Muscles on the Dynamic Postural Stability Index

Authors: Masoumeh Shojaei, Natalie Gedayloo, Amir Sarshin

Abstract:

The purpose of the present study was to investigate the effect of isokinetic fatigue of the muscles around the ankle, knee, and hip on indicators of dynamic postural stability. Fifteen female university students (age 19.7 ± 0.6 years, weight 54.6 ± 9.4 kg, and height 163.9 ± 5.6 cm) participated in a within-subjects design over 5 different days. In the first session, the postural stability index (time to stabilization after jump-landing) was assessed without fatigue using a force plate; in each subsequent session, one of the lower-limb muscle groups (the muscles around the ankles, knees, or hips) was fatigued in randomized order with a Biodex isokinetic dynamometer, and the index was assessed immediately after fatigue of that muscle group. The protocol involved landing on a force plate from a dynamic state and transitioning to static balance. Results of repeated-measures ANOVA indicated no significant difference between the time to stabilization (TTS) before and after isokinetic fatigue of the muscles around the ankle, knee, and hip in the medial-lateral direction (p > 0.05), but in the anterior-posterior (AP) direction the difference was statistically significant (p < 0.05). Least Significant Difference (LSD) post hoc tests also showed a significant difference between TTS before and after isokinetic fatigue of the knee and hip muscles in the AP direction. In other words, the knee and hip muscle groups were affected by isokinetic fatigue only in the AP direction (p < 0.05).
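
In practice, TTS can be operationalized as the first instant after touchdown at which the ground-reaction-force signal remains within a tolerance band around its static baseline. The following is a minimal Python sketch of that computation; the sampling rate, tolerance, window length, and synthetic data are illustrative assumptions, not the authors' exact protocol.

```python
import numpy as np

def time_to_stabilization(force, fs, baseline, tol, window_s=1.0):
    """First time (s) after landing at which the force signal stays
    within +/- tol of the static baseline for a full window."""
    win = int(window_s * fs)
    inside = np.abs(force - baseline) <= tol
    for i in range(len(force) - win):
        if inside[i:i + win].all():
            return i / fs
    return np.nan  # never stabilized within the trial

# Synthetic AP ground-reaction-force trace (arbitrary units)
fs = 1000                                   # force-plate sampling rate, Hz
t = np.arange(0, 5, 1 / fs)
ap = 50 * np.exp(-2 * t) * np.sin(20 * t) + np.random.normal(0, 0.5, t.size)
print(time_to_stabilization(ap, fs, baseline=0.0, tol=2.0))
```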

Keywords: dynamic balance, fatigue, lower limb muscles, postural control

Procedia PDF Downloads 230
13806 A Hybrid Feature Selection Algorithm with Neural Network for Software Fault Prediction

Authors: Khalaf Khatatneh, Nabeel Al-Milli, Amjad Hudaib, Monther Ali Tarawneh

Abstract:

Software fault prediction identifies potential faults in software modules during the development process. In this paper, we present a novel approach for software fault prediction by combining a feedforward neural network with particle swarm optimization (PSO). The PSO algorithm is employed as a feature selection technique to identify the most relevant metrics to use as inputs to the neural network, which enhances the quality of feature selection and subsequently improves the performance of the neural network model. Through comprehensive experiments on software fault prediction datasets, the proposed hybrid approach achieves better results, outperforming traditional classification methods. The integration of PSO-based feature selection with the neural network enables the identification of critical metrics that provide more accurate fault prediction. The results show the effectiveness of the proposed approach and its potential for reducing development costs and effort by detecting faults early in the software development lifecycle. Further research and validation on diverse datasets will help solidify the practical applicability of the new approach in real-world software engineering scenarios.
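
A common way to realize this hybrid is a binary PSO in which each particle encodes a 0/1 mask over the candidate metrics and fitness is the cross-validated accuracy of the network trained on the selected subset. The Python sketch below illustrates this wrapper pattern with scikit-learn; the swarm size, coefficients, and network topology are assumptions for illustration, not the paper's reported settings.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def pso_select_features(X, y, n_particles=10, n_iter=20, seed=0):
    """Binary PSO: each particle is a 0/1 mask over candidate software
    metrics; fitness is cross-validated accuracy of an MLP on that subset."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    pos = (rng.random((n_particles, d)) < 0.5).astype(float)
    vel = rng.normal(0.0, 1.0, (n_particles, d))

    def fitness(mask):
        sel = mask.astype(bool)
        if not sel.any():
            return 0.0
        net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                            random_state=seed)
        return cross_val_score(net, X[:, sel], y, cv=3).mean()

    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, d))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = (rng.random((n_particles, d)) < 1 / (1 + np.exp(-vel))).astype(float)
        fit = np.array([fitness(p) for p in pos])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest.astype(bool)  # best metric subset found

# Demo on synthetic "software metrics" data
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 12))
y = (X[:, 0] - X[:, 3] + rng.normal(0, 0.5, 200) > 0).astype(int)
print(pso_select_features(X, y, n_particles=6, n_iter=5).nonzero()[0])
```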

Keywords: feature selection, neural network, particle swarm optimization, software fault prediction

Procedia PDF Downloads 80
13805 Overview of Multi-Chip Alternatives for 2.5D and 3D Integrated Circuit Packaging

Authors: Ching-Feng Chen, Ching-Chih Tsai

Abstract:

With the size of the transistor gradually approaching the physical limit, the persistence of Moore’s Law is challenged by the difficulty of developing high numerical aperture (high-NA) lithography equipment and by other issues such as short-channel effects. In the context of the ever-increasing technical requirements of portable devices and high-performance computing, relying on the continuation of the law to increase chip density will no longer support the prospects of the electronics industry. Weighing a chip’s power consumption-performance-area-cost-cycle time to market (PPACC) is an updated benchmark for driving the evolution of advanced wafer nanometer (nm) technology. The advent of two-and-a-half- and three-dimensional (2.5D and 3D) Very-Large-Scale Integration (VLSI) packaging based on Through Silicon Via (TSV) technology has updated traditional die assembly methods and provided a solution. This overview investigates up-to-date, cutting-edge packaging technologies for 2.5D and 3D integrated circuits (ICs) based on current transistor structures and technology nodes. The authors conclude that multi-chip solutions for 2.5D and 3D IC packaging are feasible means of prolonging Moore’s Law.

Keywords: moore’s law, high numerical aperture, power consumption-performance-area-cost-cycle time to market, 2.5 and 3D- very-large-scale integration, packaging, through silicon via

Procedia PDF Downloads 112
13804 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in “weight space”, where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences: two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and also for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN and remains ~93% at 0 dB SNR.
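
A bare-bones version of the decomposition step is sketched below in Python: matching pursuit greedily projects the residual onto a unit-norm Gabor dictionary and accumulates the resulting amplitude weights, and the normalized distribution of selected atom indices serves as the classification feature. The dictionary parameters and the stand-in signal are illustrative assumptions, not the learned sparse-autoencoder dictionary used in the paper.

```python
import numpy as np

def gabor_dictionary(n, freqs, centers, width):
    """Unit-norm Gabor atoms on an n-sample grid (illustrative parameters)."""
    t = np.arange(n)
    atoms = [np.exp(-0.5 * ((t - c) / width) ** 2) * np.cos(2 * np.pi * f * t)
             for f in freqs for c in centers]
    D = np.array(atoms)
    return D / np.linalg.norm(D, axis=1, keepdims=True)

def matching_pursuit(x, D, n_atoms=30):
    """Greedy atomic decomposition: sparse amplitude weights over D."""
    residual = x.astype(float).copy()
    weights = np.zeros(len(D))
    for _ in range(n_atoms):
        corr = D @ residual                 # projections onto unit-norm atoms
        k = int(np.argmax(np.abs(corr)))
        weights[k] += corr[k]
        residual -= corr[k] * D[k]
    return weights                          # the sparse "weight space" vector

n = 1024
D = gabor_dictionary(n, freqs=np.linspace(0.01, 0.4, 16),
                     centers=np.arange(0, n, 64), width=32.0)
x = np.sin(2 * np.pi * 0.05 * np.arange(n))    # stand-in speech envelope
w = matching_pursuit(x, D)
profile = np.abs(w) / np.abs(w).sum()          # atom-index probabilities
# Speakers are then compared by Euclidean distance between such profiles.
```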

Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder

Procedia PDF Downloads 285
13803 The Impact of Distributed Epistemologies on Software Engineering

Authors: Thomas Smith

Abstract:

Many hackers worldwide would agree that, had it not been for linear-time theory, the refinement of Byzantine fault tolerance might never have occurred. After years of significant research into extreme programming, we validate the refinement of simulated annealing. Maw, our new framework for unstable theory, is the solution to all of these issues.

Keywords: distributed, software engineering, DNS, DHCP

Procedia PDF Downloads 346
13802 Application of Mobile Aluminium Light Structure Housing System in Sustainable Building Process

Authors: Wang Haining, Zhang Hong

Abstract:

In China, rapid urbanization requires more and more buildings to be constructed for the growing population in cities. Using a methodology comprising investigation, contrastive analysis, component-based design with BIM, and experimentation before real construction, this research on a mobile light-structure system attempts to address part of the sustainability problem in present-day China through systematic study. The system cannot completely replace permanent heavy structures; rather, the goal is the improvement of the whole building system by the addition of light structures. This housing system uses modularized envelopes and standardized connections, which are prefabricated and assembled in factories and transported like containers. Aluminum is used as the structural material in this system, and inorganic thermal insulation material is used in the envelope, both of which have high fireproof properties. The relationship between manufacturing and construction of the system is a progressive hierarchy, consisting of First Industrial, Second Industrial, and Third Industrial stages followed by a Site Assembly stage. The system can maximize land usage by fully exploiting areas that normal permanent architecture cannot take advantage of. Not only can the building system itself save energy, especially through the thermal insulation materials used and the active solar photovoltaic system equipped, but the way of product development is also sustainable.

Keywords: aluminum house, light structure, rapid assembly, repeat construction

Procedia PDF Downloads 484
13801 An Analysis of the Temporal Aspects of Visual Attention Processing Using Rapid Serial Visual Presentation (RSVP) Data

Authors: Shreya Borthakur, Aastha Vartak

Abstract:

This Electroencephalogram (EEG) study of the Rapid Serial Visual Presentation (RSVP) paradigm explores the temporal dynamics of visual attention processing in response to rapidly presented visual stimuli. The study builds upon previous research that used real-world images in RSVP tasks to understand the emergence of object representations in the human brain. The objectives of the research include investigating the differences in accuracy and reaction times between 5 Hz and 20 Hz presentation rates, as well as examining the prominent brain waves, particularly alpha and beta waves, associated with the attention task. The pre-processing and data analysis involve filtering the EEG data, creating epochs around target stimuli, and conducting statistical tests using MATLAB, the EEGLAB and Chronux toolboxes, and R. The results support the hypotheses, revealing higher accuracy at the slower presentation rate, faster reaction times for less complex targets, and the involvement of alpha and beta waves in attention and cognitive processing. This research sheds light on how short-term memory and cognitive control affect visual processing and could have practical implications in fields like education.
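
The filtering and epoching steps follow a standard pattern; the Python sketch below reproduces them with SciPy in place of the MATLAB/EEGLAB pipeline the authors used. The sampling rate, band edges, epoch window, and synthetic data are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250  # assumed EEG sampling rate, Hz

def bandpass(data, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, data, axis=-1)

def make_epochs(data, onsets, fs, tmin=-0.2, tmax=0.8):
    """Cut fixed-length windows around target-stimulus onsets (in samples)."""
    i0, i1 = int(tmin * fs), int(tmax * fs)
    return np.stack([data[..., s + i0:s + i1] for s in onsets])

eeg = np.random.randn(32, 60 * fs)            # 32 channels, 60 s synthetic data
alpha = bandpass(eeg, 8, 13, fs)              # alpha band (8-13 Hz)
beta = bandpass(eeg, 13, 30, fs)              # beta band (13-30 Hz)
epochs = make_epochs(alpha, onsets=[1000, 3000, 5000], fs=fs)
print(epochs.shape)                           # (n_targets, n_channels, n_samples)
```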

Keywords: RSVP, attention, visual processing, attentional blink, EEG

Procedia PDF Downloads 64
13800 Self-Tuning Dead-Beat PD Controller for Pitch Angle Control of a Bench-Top Helicopter

Authors: H. Mansor, S.B. Mohd-Noor, N. I. Othman, N. Tazali, R. I. Boby

Abstract:

This paper presents an improved robust Proportional-Derivative (PD) controller for a 3-Degree-of-Freedom (3-DOF) bench-top helicopter using an adaptive methodology. A bench-top helicopter is a laboratory-scale helicopter used for experimental purposes, widely used in teaching laboratories and research. A PD controller has been developed for the 3-DOF bench-top helicopter by Quanser. Experiments showed that the transient response of the designed PD controller has a very large steady-state error, i.e., 50%, which is very serious. The objective of this research is to improve the performance of the existing pitch-angle PD control on the bench-top helicopter by integrating the PD controller with an adaptive controller. A standard adaptive controller will usually produce zero steady-state error; however, the response time to reach the desired set point is large. Therefore, this paper proposes an adaptive controller with a deadbeat algorithm to overcome these limitations. An output response that is fast, robust, and updated online is expected. Performance comparisons have been made between the proposed self-tuning deadbeat PD controller and the standard PD controller. The efficiency of the self-tuning deadbeat controller is proven by the test results in terms of faster settling time, zero steady-state error, and the capability of the controller to be updated online.
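
The self-tuning deadbeat idea can be illustrated on a first-order discrete plant: recursive least squares (RLS) identifies the plant parameters online, and the control input is chosen so that the one-step-ahead model output equals the set point. The Python sketch below is a deliberately simplified stand-in; the real 3-DOF pitch dynamics are higher order and the paper's controller retains a PD structure, so the plant, initial estimates, and forgetting factor here are illustrative assumptions.

```python
import numpy as np

# Unknown first-order discrete plant: y[k+1] = a*y[k] + b*u[k]
a_true, b_true = 0.9, 0.2
theta = np.array([0.5, 0.5])            # initial estimates [a_hat, b_hat]
P = np.eye(2) * 100.0                   # RLS covariance
lam = 0.98                              # forgetting factor

y, r = 0.0, 1.0                         # output and pitch-angle set point
for k in range(30):
    a_hat, b_hat = theta
    u = (r - a_hat * y) / b_hat         # deadbeat law: model hits r in one step
    y_next = a_true * y + b_true * u    # plant response
    phi = np.array([y, u])              # regressor
    K = P @ phi / (lam + phi @ P @ phi) # RLS gain (the "self-tuning" part)
    theta = theta + K * (y_next - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam
    y = y_next
print(theta, y)  # estimates approach (0.9, 0.2); y tracks the set point
```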

Keywords: adaptive control, deadbeat control, bench-top helicopter, self-tuning control

Procedia PDF Downloads 316
13799 Medicompills Architecture: A Mathematically Precise Tool to Reduce the Risk of Diagnosis Errors in Precise Medicine

Authors: Adriana Haulica

Abstract:

Powered by Machine Learning, Precise medicine is by now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefits for cohorts of patients. As the majority of Machine Learning algorithms come from heuristics, the outputs have contextual validity. This is not very restrictive, in the sense that medicine itself is not an exact science. Meanwhile, the progress made in Molecular Biology, Bioinformatics, Computational Biology, and Precise Medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatments, by processing as much as possible of the available information. The purpose of this paper is to present a deeper vision for the future of Artificial Intelligence in Precise medicine. In fact, current Machine Learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from the classical methods prevents obtaining 100% evidence in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept tool for information processing in Precise medicine that delivers diagnosis and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine in a direct or indirect manner, but also technical databases, Natural Language Processing algorithms, and strong class optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known “needle in a haystack” approach usually used when Machine Learning algorithms have to process differential genomic or molecular data to find biomarkers. Also, even if the input is seized from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool. This approach deciphers the biological meaning of input data down to the metabolic and physiologic mechanisms, based on a compiler with grammars issued from bio-algebra-inspired mathematics. It translates input data into bio-semantic units with the help of contextual information, iteratively, until Bio-Logical operations can be performed on the basis of the “common denominator” rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical “proofs”. The major impact of this architecture is expressed by the high accuracy of the diagnosis. Delivered as a multiple-conditions diagnostic, constituted by some main diseases along with unhealthy biological states, this format is highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture would be highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in Precise medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to better design of clinical trials and speed them up.

Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics

Procedia PDF Downloads 63
13798 Optimal Evaluation of Weather Risk Insurance for Wheat

Authors: Slim Amami

Abstract:

A model is developed to prevent risks related to climate conditions in the agricultural sector. It determines the yearly optimum premium to be paid by a farmer in order to reach his required turnover. The model is mainly based on both climatic stability and the 'soft' responses of commonly grown species to average climate variations at the same place, within a safety ball that can be determined from past meteorological data. This allows the use of a linear regression expression for the dependence of the production result on the driving meteorological parameters, the main ones being daily average sunlight, rainfall, and temperature. By a simple best-parameter fit to the expert table drawn up with professionals, an optimal representation of yearly production is deduced from the records of previous years, and the yearly payback is evaluated from the minimum yearly produced turnover. The optimal premium is then deduced, giving the producer a useful bound for negotiating offers by insurance companies to effectively protect the harvest. The application to wheat production in the French Oise department illustrates the reliability of the present model, with as little as 6% difference between predicted and real data. The model can be adapted to almost every agricultural field by changing the state parameters and calibrating their associated coefficients.
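
The core of such a model is the regression of yearly yield on the driving weather variables, followed by a payback rule against the required turnover. A minimal Python sketch under those assumptions follows; all figures are invented placeholders, not the Oise data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented yearly records: mean sunlight (h/day), rainfall (mm), temperature (C)
weather = np.array([[6.1, 620, 11.2], [5.8, 710, 10.9], [6.4, 580, 11.6],
                    [5.5, 690, 10.7], [6.0, 640, 11.1], [6.2, 600, 11.4]])
yield_t_ha = np.array([7.1, 6.6, 7.4, 6.3, 7.0, 7.2])   # wheat yield, t/ha

model = LinearRegression().fit(weather, yield_t_ha)     # linear dependence
predicted = model.predict(weather)

price, area = 200.0, 100.0               # EUR/t and ha, both hypothetical
required_turnover = 135_000.0            # farmer's minimum yearly turnover, EUR
turnover = predicted * area * price
shortfall = np.maximum(required_turnover - turnover, 0.0)
premium = shortfall.mean()               # naive fair premium over past years
print(f"optimal premium estimate: {premium:.0f} EUR/year")
```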

Keywords: agriculture, database, meteorological factors, production model, optimal price

Procedia PDF Downloads 217
13797 Development and Implementation of an "Electric Island" Monitoring Infrastructure for Promoting Energy Efficiency in Schools

Authors: Vladislav Grigorovitch, Marina Grigorovitch, David Pearlmutter, Erez Gal

Abstract:

The concept of an “electric island” involves achieving a balance between the self-generation capability of each educational institution and its energy consumption demand. A photovoltaic (PV) solar system installed on the roofs of educational buildings is a common way to absorb the available solar energy and generate electricity for self-consumption and even for return to the grid. The main objective of this research is to develop and implement an “electric island” monitoring infrastructure for promoting energy efficiency in educational buildings. A microscale monitoring methodology will be developed to provide a platform to estimate energy consumption performance classified by rooms and subspaces, rather than the more common macroscale monitoring of the whole building. The monitoring platform will be established on the experimental sites, enabling estimation and further analysis of a variety of environmental and physical conditions. For each building, a separate measurement configuration will be applied, taking into account the specific requirements, restrictions, location, and infrastructure issues. The direct results of the measurements will be analyzed to provide a deeper understanding of the impact of environmental conditions and sustainable construction standards, not only on the energy demand of public buildings but also on the energy consumption habits of the children who study in those schools and of the educational and administrative staff responsible for providing thermally comfortable conditions and a healthy studying atmosphere for the children. The monitoring methodology being developed in this research provides online access to real-time monitoring data from any mobile phone or computer by simply browsing the dedicated website, offering powerful tools for policy makers for better decision making while developing PV production infrastructure to achieve “electric islands” in educational buildings. A detailed measurement configuration was technically designed based on the specific conditions and restrictions of each of the pilot buildings. The monitoring and analysis methodology covers a large variety of environmental parameters inside and outside the schools to investigate the impact of environmental conditions both on the energy performance of the school and on the educational abilities of the children. Indoor measurements are mandatory to acquire energy consumption data, temperature, humidity, carbon dioxide, and other air quality conditions in different parts of the building. In addition, we aim to study users' awareness of energy considerations and thus the impact on their energy consumption habits. The monitoring of outdoor conditions is vital for the proper design of the off-grid energy supply system and validation of its sufficient capacity. The intended outcomes of this research include: 1. both experimental sites designed to have PV production and storage capabilities; 2. an online information feedback platform providing consumer-dedicated information to academic researchers, municipality officials, and educational staff and students; 3. an environmental work path for educational staff regarding optimal conditions and efficient hours for operating air conditioning, natural ventilation, closing of blinds, etc.

Keywords: sustainability, electric island, IoT, smart building

Procedia PDF Downloads 169
13796 Evaluation of the Cities' Specific Characteristics in the Formation of the Safavid Period Mints

Authors: Mahmood Seyyed, Akram Salehi Heykoei, Hamidreza Safakish Kashani

Abstract:

Among the surviving resources of the past, coins are considered authentic documents and are among the most important documentary sources. Coins were minted in a place called a mint. The number and position of the mints in each period reflect the degree of economic power, political security, and business growth, and their standing always fluctuated with changing political and economic conditions. Considering that trade grew more during the Safavid period than in previous ones, the mint also took on greater importance. It seems that, on the one hand, the growth of the economy in the Safavid period is directly linked with the number and places of the mints at that time, and on the other hand, mints formed in certain places because of the specific characteristics of those cities and regions. The increase in the number of mints in the north of the country due to the growth of the silk trade, and in the west and northwest due to political and commercial relations with the Ottoman Empire, as well as characteristics such as the existence of mines and location on the Silk Road and communication routes, are all results of this investigation. Accordingly, in this article the researcher tries to examine the characteristics that gave a city priority for having a mint. Considering that in the various historical periods the mints were based in the cities that were most important politically and socially at the time, this article examines the cities' specific characteristics in the formation of the mints in the Safavid period.

Keywords: documentary sources, coins, mint, city, Safavid

Procedia PDF Downloads 259
13795 A Feasibility Study of Producing Biofuels from Textile Sludge by Torrefaction Technology

Authors: Hua-Shan Tai, Yu-Ting Zeng

Abstract:

In modern industrial society, enormous amounts of sludge are constantly produced by various industries; currently, most of this sludge is treated by landfill and incineration. However, both treatments are not ideal because of the limited land available for landfill and the secondary pollution caused by incineration. Consequently, treating industrial sludge appropriately has become an urgent issue of environmental protection. In order to address this massive sludge problem, this study uses textile sludge, which is the major source of waste sludge in Taiwan, as the raw material for torrefaction treatments. To investigate the feasibility of producing biofuels from textile sludge by torrefaction, experiments were conducted at temperatures of 150, 200, 250, 300, and 350 °C, with heating rates of 15, 20, 25 and 30 °C/min, and with residence times of 30 and 60 minutes. The results revealed that the mass yields after torrefaction were approximately in the range of 54.9 to 93.4%. The energy densification ratios were approximately in the range of 0.84 to 1.10, and the energy yields were approximately in the range of 45.9 to 98.3%. The volumetric densities were approximately in the range of 0.78 to 1.14, and the volumetric energy densities were approximately in the range of 0.65 to 1.18. In summary, the optimum energy yield (98.3%) was reached with a terminal temperature of 150 °C, a heating rate of 20 °C/min, and a residence time of 30 minutes; the corresponding mass yield, energy densification ratio, and volumetric energy density were 92.2%, 1.07, and 1.15, respectively. These results indicate that the solid products after torrefaction are easy to preserve, which not only enhances the quality of the product but also achieves the purpose of developing the material into fuel.
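
These quantities are linked by the standard torrefaction relation: energy yield equals mass yield times the energy densification ratio (the ratio of torrefied to raw heating values). A quick Python check against the reported optimum run confirms internal consistency to within rounding of the inputs:

```python
# Standard torrefaction relation, checked on the reported optimum run
mass_yield = 0.922              # m_torrefied / m_raw
energy_densification = 1.07     # HHV_torrefied / HHV_raw
energy_yield = mass_yield * energy_densification
print(f"energy yield ~ {energy_yield:.1%}")   # ~98.7%, vs. the reported ~98.3%
```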

Keywords: biofuel, biomass energy, textile sludge, torrefaction

Procedia PDF Downloads 316
13794 Production and Characterization of Ce3+: Si2N2O Phosphors for White Light-Emitting Diodes

Authors: Alparslan A. Balta, Hilmi Yurdakul, Orkun Tunckan, Servet Turan, Arife Yurdakul

Abstract:

Si2N2O (sinoite) is an inorganic oxynitride material that is a promising phosphor candidate for white light-emitting diodes (WLEDs). However, there is currently limited knowledge on the synthesis of Si2N2O for this purpose. Here, to the best of the authors' knowledge, we report for the first time the production of Si2N2O-based phosphors from CeO2, SiO2, and Si3N4 main starting powders with a Li2O sintering additive, through a spark plasma sintering (SPS) route. The processing parameters, e.g., pressure, temperature, and sintering time, were optimized to obtain monophase Si2N2O-containing samples. The lattice parameters, crystallite sizes, and amounts of the phases formed were characterized in detail by X-ray diffraction (XRD). Grain morphology, particle size, and distribution were analyzed by scanning and transmission electron microscopy (SEM and TEM). Cathodoluminescence (CL) in SEM and photoluminescence (PL) analyses were conducted on the samples to determine the excitation and emission characteristics of Ce3+-activated Si2N2O. Results showed that the Si2N2O phase was obtained at a maximum ratio of 90% by sintering for 15 minutes at 1650 °C under 30 MPa pressure. Based on the SEM-CL and PL measurements, the Ce3+:Si2N2O phosphor shows a broad emission band between 400-700 nm that corresponds to white light. The present research was supported by TUBITAK under project number 217M667.

Keywords: cerium, oxynitride, phosphors, sinoite, Si₂N₂O

Procedia PDF Downloads 102
13793 Optimization of the Fabrication Process for Particleboards Made from Oil Palm Fronds Blended with Empty Fruit Bunch Using Response Surface Methodology

Authors: Ghazi Faisal Najmuldeen, Wahida Amat-Fadzil, Zulkafli Hassan, Jinan B. Al-Dabbagh

Abstract:

The objective of this study was to evaluate the optimum fabrication process variables to produce particleboards from oil palm frond (OPF) particles and empty fruit bunch fiber (EFB). Response surface methodology was employed to analyse the effects of hot press temperature (150–190 °C), press time (3–7 minutes), and EFB blending ratio (0–40%) on the particleboards' modulus of rupture, modulus of elasticity, internal bonding, water absorption, and thickness swelling. A Box-Behnken experimental design was carried out to develop the statistical models used for the optimisation of the fabrication process variables. All factors were found to have statistically significant effects on particleboard properties. The statistical analysis indicated that all models showed significant fits with the experimental results. The optimum particleboard properties were obtained at the optimal fabrication process conditions: press temperature of 186 °C, press time of 5.7 min, and EFB/OPF ratio of 30.4%. Incorporating oil palm fronds and empty fruit bunches to produce particleboards improved the particleboard properties. The OPF-EFB particleboards fabricated at the optimized conditions satisfied the ANSI A208.1-1999 specification for general purpose particleboards.
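
The RSM workflow is: fit a second-order polynomial to the Box-Behnken runs, then search the fitted surface within the factor bounds. A compact Python sketch of this follows; the runs and response values below are invented placeholders, not the study's data.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

# Invented Box-Behnken runs: [temperature C, time min, EFB ratio %] -> MOR (MPa)
X = np.array([[150, 5, 20], [190, 5, 20], [150, 7, 0], [190, 3, 40],
              [170, 3, 0], [170, 7, 40], [170, 5, 20], [190, 7, 20],
              [150, 3, 20], [170, 3, 40], [170, 7, 0], [150, 5, 40]])
mor = np.array([11.2, 13.0, 10.1, 12.4, 10.8, 12.9, 13.4, 12.8,
                10.5, 12.0, 11.5, 11.9])

poly = PolynomialFeatures(degree=2, include_bias=False)   # second-order model
model = LinearRegression().fit(poly.fit_transform(X), mor)

def neg_mor(x):                      # search the fitted response surface
    return -model.predict(poly.transform(x.reshape(1, -1)))[0]

res = minimize(neg_mor, x0=np.array([170.0, 5.0, 20.0]),
               bounds=[(150, 190), (3, 7), (0, 40)])
print(res.x, -res.fun)               # setting that maximizes the fitted MOR
```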

Keywords: empty fruit bunch fiber, oil palm fronds, particleboards, response surface methodology

Procedia PDF Downloads 216
13792 Reinforcement of Local Law into Government Policy to Address Conflict of Utilization of Sea among Small Fishermen

Authors: Ema Septaria, Muhammad Yamani, N. S. B. Ambarini

Abstract:

The problem began with the imposition of fines by Ipuh's small fishermen on fishing vessels encroaching on the customary catchment area of Ipuh, a village in Muko-Muko, Bengkulu, Indonesia. The two main reasons are that fishermen from outside Ipuh came and fished in Ipuh waters using trawls as gear, and that the number of fish has decreased over time as a result of irresponsible fishing practices. Such conflict has existed for a long time. Indonesia's governing laws do not regulate the utilization of sea territory by small fishermen, so when conflict appears there is a legal vacuum (rechtsvacuum) on how to solve it, and this leads to chaos in society. In Ipuh itself, there has long been a local law on fisheries to which the community still adheres to the present day, because they believe that holding to the law will keep the fishery sustainable. This is empirical legal research with a socio-legal approach. The results of this study show that even though the laws do not regulate the utilization of sea territory by small fishermen in detail, there is an article in the Fisheries Act stating that fisheries activities have to pay attention to local law and community participation. Furthermore, the Constitution provides that the land, the waters, and the natural resources within shall be under the powers of the State and shall be used to the greatest benefit of the people. With this power, the Government has to make a policy that reinforces what has been ruled in Ipuh local law. In addition, the Bengkulu Governor has to involve the Ipuh community directly in managing their fisheries to ensure the sustainability of the fisheries therein.

Keywords: local law, reinforcement, conflict, sea utilization, small fishermen

Procedia PDF Downloads 307
13791 Combination of Modelling and Environmental Life Cycle Assessment Approach for Demand Driven Biogas Production

Authors: Juan A. Arzate, Funda C. Ertem, M. Nicolas Cruz-Bournazou, Peter Neubauer, Stefan Junne

Abstract:

One of the biggest challenges the world faces today is global warming, caused by greenhouse gases (GHGs) from the combustion of fossil fuels for energy generation. In order to mitigate climate change, the European Union has committed to reducing GHG emissions to 80–95% below 1990 levels by the year 2050. Renewable technologies are vital to diminishing energy-related GHG emissions. Since water and biomass are limited resources, the largest contributions to renewable energy (RE) systems will have to come from wind and solar power. Nevertheless, high proportions of fluctuating RE will present a number of challenges, especially regarding the need to balance the variable energy demand with the weather-dependent fluctuation of energy supply. Biogas plants would therefore play an important role in this context, since they are easily adaptable. Feedstock availability varies locally and seasonally; however, there is a lack of knowledge of how biogas plants can be operated stably on local feedstock. This problem may be prevented through suitable control strategies. Such strategies require the development of convenient mathematical models which fairly describe the main processes. Modelling allows us to predict the system behavior of biogas plants when different feedstocks are used at different loading rates. Life cycle assessment (LCA) is a technique for analyzing a product's environmental impact over its entire life, from creation to disposal, and is highly recommended as a decision-making tool. In order to achieve suitable strategies, flexible energy generation provided by biogas plants, a secure production process, and the maximization of the environmental benefits can be obtained by combining process modelling and LCA approaches. For this reason, this study focuses on a biogas plant which flexibly generates the required energy from the co-digestion of maize, grass, and cattle manure, while emitting the lowest amount of GHGs. To achieve this goal, the AMOCO model was combined with LCA. The program was structured in Matlab to simulate any biogas process based on the AMOCO model, combined with the equations necessary to obtain the climate change, acidification, and eutrophication potentials of the whole production system based on the ReCiPe midpoint v1.06 methodology. The developed simulation was calibrated with real data from operating biogas plants and existing literature research. The results prove that the AMOCO model can successfully imitate the system behavior of biogas plants and the time required for the process to adapt in order to generate the demanded energy from the available feedstock. Combination with the LCA approach provided the opportunity to keep the resulting emissions from operation at the lowest possible level. This would allow for a prediction of the process when the feedstock utilization supports the establishment of closed material circles within a smart bio-production grid, under the constraint of minimal drawbacks for the environment and maximal sustainability.
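
The AMOCO model describes anaerobic digestion as two biological steps, acidogenesis and methanogenesis, via a small set of ODEs. A minimal Python sketch of that structure is given below; the kinetic constants and inflow values are order-of-magnitude placeholders, not the calibrated plant parameters used in the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-step AMOCO-type structure: acidogenesis (X1) and methanogenesis (X2).
mu1m, Ks1 = 1.2, 7.1                 # 1/d, g COD/L
mu2m, Ks2, Ki2 = 0.74, 9.3, 256.0    # 1/d, mmol/L, mmol/L
k1, k2, k3, k6 = 42.1, 116.5, 268.0, 453.0
alpha, D = 0.5, 0.4                  # biomass retention factor, dilution rate 1/d
S1in, S2in = 10.0, 80.0              # substrate concentrations in the feed

def amoco(t, x):
    X1, X2, S1, S2 = x
    mu1 = mu1m * S1 / (Ks1 + S1)                      # Monod growth
    mu2 = mu2m * S2 / (Ks2 + S2 + S2**2 / Ki2)        # Haldane (VFA inhibition)
    return [(mu1 - alpha * D) * X1,
            (mu2 - alpha * D) * X2,
            D * (S1in - S1) - k1 * mu1 * X1,
            D * (S2in - S2) + k2 * mu1 * X1 - k3 * mu2 * X2]

sol = solve_ivp(amoco, [0.0, 60.0], [0.5, 0.1, 5.0, 10.0], max_step=0.1)
X2e, S2e = sol.y[1, -1], sol.y[3, -1]
mu2e = mu2m * S2e / (Ks2 + S2e + S2e**2 / Ki2)
print("methane flow ~", k6 * mu2e * X2e, "mmol/L/d")   # qM = k6*mu2*X2
```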

Keywords: AMOCO model, GHG emissions, life cycle assessment, modelling

Procedia PDF Downloads 184
13790 Application of Intelligent City and Hierarchy Intelligent Buildings in Kuala Lumpur

Authors: Jalalludin Abdul Malek, Zurinah Tahir

Abstract:

When the Multimedia Super Corridor (MSC) was launched in 1995, it became the catalyst for the implementation of the intelligent city concept in an area of about 15 × 50 kilometres encompassing Kuala Lumpur City Centre (KLCC), Putrajaya, and Kuala Lumpur International Airport (KLIA). The intelligent city concept means that the city has advanced infrastructure and infostructure, such as information technology, advanced telecommunication systems, electronic technology, and mechanical technology, to be utilized for the development of urban elements such as industry, health, services, transportation, and communications. For example, the Golden Triangle of Kuala Lumpur also has many intelligent buildings developed by the private sector, such as the KLCC Tower, implementing the intelligent city concept. Consequently, the intelligent buildings in the Golden Triangle can be linked directly to the Putrajaya and Cyberjaya intelligent cities within the confines of the MSC. The reality of the situation, however, is that not many intelligent buildings within the Golden Triangle of Kuala Lumpur can be considered high-standard intelligent buildings as defined by the Intelligence Quotient (IQ) building standard. This increases the need to implement a real ‘intelligent city’ concept. This paper aims to show the strengths and weaknesses of the intelligent buildings in the Golden Triangle by taking into account aspects of ‘intelligence’ in the technology and infrastructure of the buildings.

Keywords: intelligent city concepts, intelligent building, Golden Triangle, Kuala Lumpur

Procedia PDF Downloads 289
13789 Determinants of Aggregate Electricity Consumption in Ghana: A Multivariate Time Series Analysis

Authors: Renata Konadu

Abstract:

In Ghana, electricity has become the main form of energy on which all sectors of the economy rely for their business. Therefore, as the economy grows, the demand for and consumption of electricity grow alongside it due to this heavy dependence. However, since the supply of electricity has not increased to match demand, there have been frequent power outages and load shedding affecting business performance. To solve this problem and advance policies to secure electricity in Ghana, it is imperative that the factors that cause consumption to increase be analysed, considering the three classes of consumers: residential, industrial, and non-residential. The main argument, however, is that exports of electricity to neighbouring countries should be included in the electricity consumption model and considered one of the significant factors which can decrease or increase consumption. The author used multivariate time series data from 1980-2010 and econometric models such as Ordinary Least Squares (OLS) and a Vector Error Correction Model. Findings show that GDP growth, urban population growth, electricity exports, and industry value added to GDP were cointegrated. The results also showed that there is unidirectional causality from electricity exports, GDP growth, and industry value added to GDP to electricity consumption in the long run. In the short run, however, causality was found between all the variables and electricity consumption. The results have useful implications for energy policy makers, especially with regard to electricity consumption, demand, and supply.
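
A cointegration-and-VECM workflow of this kind can be reproduced with statsmodels in Python. The sketch below uses synthetic stand-in series, since the study's data are not included here; the variable names and lag order are assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# Synthetic stand-ins for the 1980-2010 annual series used in the study
rng = np.random.default_rng(1)
n = 31
trend = np.cumsum(rng.normal(0.5, 1.0, n))         # shared stochastic trend
df = pd.DataFrame({
    "elec_consumption": trend + rng.normal(0, 0.3, n),
    "gdp_growth": 0.8 * trend + rng.normal(0, 0.3, n),
    "urban_pop_growth": 0.5 * trend + rng.normal(0, 0.3, n),
    "elec_exports": 0.6 * trend + rng.normal(0, 0.3, n),
})

rank = select_coint_rank(df, det_order=0, k_ar_diff=1, signif=0.05)  # Johansen
res = VECM(df, k_ar_diff=1, coint_rank=max(rank.rank, 1),
           deterministic="co").fit()
print(res.alpha)   # error-correction loadings: speed of long-run adjustment
```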

Keywords: electricity consumption, energy policy, GDP growth, vector error correction model

Procedia PDF Downloads 429
13788 Tempo-Spatial Pattern of Progress and Disparity in Child Health in Uttar Pradesh, India

Authors: Gudakesh Yadav

Abstract:

Uttar Pradesh is one of the poorest-performing states of India in terms of child health. Using data from the three rounds of the NFHS and two rounds of the DLHS, this paper attempts to examine tempo-spatial change in child health and care practices in Uttar Pradesh and its regions. Rate-ratio, confidence interval, multivariate, and decomposition analyses have been used for the study. Findings demonstrate that child health care practices have improved over time in all regions of the state; however, the western and southern regions registered the lowest progress in child immunization. Moreover, there was no decline in the prevalence of diarrhea and ARI over the period, and it remains critically high in the western and southern regions. These regions also performed poorly in providing ORS and treatment for diarrhea and ARI. Public health services are the least preferred for diarrhea and ARI treatment. Results from the decomposition analysis reveal that rural residence, mothers' illiteracy, and wealth contributed most to the low utilization of child health care practices consistently over the period. The study calls for targeted interventions for vulnerable children to accelerate the utilization of child health care services. Poorly performing regions should be targeted and routinely monitored on poor child health indicators.

Keywords: Acute Respiratory Infection (ARI), decomposition, diarrhea, inequality, immunization

Procedia PDF Downloads 293
13787 Optimization of Two Quality Characteristics in Injection Molding Processes via Taguchi Methodology

Authors: Joseph C. Chen, Venkata Karthik Jakka

Abstract:

The main objective of this research is to optimize tensile strength and dimensional accuracy in injection molding processes using Taguchi Parameter Design. An L16 orthogonal array (OA) is used in the Taguchi experimental design, with five control factors at four levels each and with vibration as a non-controllable noise factor. A total of 32 experiments were designed to obtain the optimal parameter settings for the process. The optimal parameters identified for shrinkage are a shot volume of 1.7 cubic inches (A4); mold temperature of 130 °F (B1); hold pressure of 3200 psi (C4); injection speed of 0.61 in³/sec (D2); and hold time of 14 seconds (E2). The optimal parameters identified for tensile strength are a shot volume of 1.7 cubic inches (A4); mold temperature of 160 °F (B4); hold pressure of 3100 psi (C3); injection speed of 0.69 in³/sec (D4); and hold time of 14 seconds (E2). The Taguchi-based optimization framework was systematically and successfully implemented to obtain an adjusted optimal setting in this research. The mean shrinkage of the confirmation runs is 0.0031%, and the tensile strength value was found to be 3148.1 psi. Both outcomes are far better than the baseline, and defects have been further reduced in the injection molding processes.
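
Taguchi analysis scores each run with a signal-to-noise (S/N) ratio: "larger is better" for tensile strength and "smaller is better" for shrinkage, and the optimum level of each factor is chosen by averaging S/N over the runs at that level. A minimal Python sketch of the two S/N formulas (the replicate values are invented):

```python
import numpy as np

def sn_larger_is_better(y):    # e.g., tensile strength
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_is_better(y):   # e.g., shrinkage
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Invented replicate results for one L16 run (two vibration/noise conditions)
print(sn_larger_is_better([3120.0, 3155.0]))   # dB; higher is better
print(sn_smaller_is_better([0.0032, 0.0030]))  # dB; higher is still better
# The optimum level of each factor is the one with the highest mean S/N
# across the runs in which that level appears.
```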

Keywords: injection molding processes, Taguchi parameter design, tensile strength, high-density polyethylene (HDPE)

Procedia PDF Downloads 189
13786 A 1H NMR-Linked PCR Modelling Strategy for Tracking the Fatty Acid Sources of Aldehydic Lipid Oxidation Products in Culinary Oils Exposed to Simulated Shallow-Frying Episodes

Authors: Martin Grootveld, Benita Percival, Sarah Moumtaz, Kerry L. Grootveld

Abstract:

Objectives/Hypotheses: The adverse health effect potential of dietary lipid oxidation products (LOPs) has evoked much clinical interest. Therefore, we employed a 1H NMR-linked Principal Component Regression (PCR) chemometrics modelling strategy to explore relationships between data matrices comprising (1) aldehydic LOP concentrations generated in culinary oils/fats when exposed to laboratory-simulated shallow-frying practices, and (2) the prior saturated (SFA), monounsaturated (MUFA) and polyunsaturated fatty acid (PUFA) contents of such frying media (FM), together with their heating time-points at a standard frying temperature (180 °C). Methods: Corn, sunflower, extra virgin olive, rapeseed, linseed, canola, coconut and MUFA-rich algae frying oils, together with butter and lard, were heated according to laboratory-simulated shallow-frying episodes at 180 °C, and FM samples were collected at time-points of 0, 5, 10, 20, 30, 60, and 90 min (n = 6 replicates per sample). Aldehydes were determined by 1H NMR analysis (Bruker AV 400 MHz spectrometer). The first (dependent output variable) PCR data matrix comprised aldehyde concentration scores vectors (PC1* and PC2*), whilst the second (predictor) one incorporated those from the fatty acid content/heating time variables (PC1-PC4) and their first-order interactions. Results: Structurally complex trans,trans- and cis,trans-alka-2,4-dienals, 4,5-epoxy-trans-2-alkenals and 4-hydroxy-/4-hydroperoxy-trans-2-alkenals (group I aldehydes, predominantly arising from PUFA peroxidation) loaded strongly and positively on PC1*, whereas n-alkanals and trans-2-alkenals (group II aldehydes, derived from both MUFA and PUFA hydroperoxides) loaded strongly and positively on PC2*. PCR analysis of these scores vectors (SVs) demonstrated that PCs 1 (positively-loaded linoleoylglycerols and [linoleoylglycerol]:[SFA] content ratio), 2 (positively-loaded oleoylglycerols and negatively-loaded SFAs), 3 (positively-loaded linolenoylglycerols and [PUFA]:[SFA] content ratios), and 4 (exclusively orthogonal sampling time-points) all contributed powerfully to the aldehydic PC1* SVs (p < 10⁻³ to < 10⁻⁹), as did all PC1-3 × PC4 interactions (p < 10⁻⁵ to < 10⁻⁹). PC2* was also markedly dependent on all the above PC SVs (PC2 > PC1 and PC3), and on the interactions of PC1 and PC2 with PC4 (p < 10⁻⁹ in each case), but not on the PC3 × PC4 contribution. Conclusions: NMR-linked PCR analysis is a valuable strategy for (1) modelling the generation of aldehydic LOPs in heated cooking oils and other FM, and (2) tracking their unsaturated fatty acid (UFA) triacylglycerol sources therein.
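
Principal component regression in this setting means compressing the predictor matrix to a few orthogonal components and regressing the aldehyde scores on them. A minimal Python sketch of that pipeline follows, using synthetic stand-in matrices rather than the study's NMR data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic stand-ins: rows = heated-oil samples; columns = scaled predictor
# variables (fatty acid contents, heating time, first-order interactions)
predictors = rng.normal(size=(42, 5))
aldehyde_pc1 = (predictors @ np.array([1.2, 0.4, 0.9, 0.1, 0.3])
                + rng.normal(0, 0.2, 42))   # stand-in for PC1* aldehyde scores

pcr = make_pipeline(PCA(n_components=4), LinearRegression())
pcr.fit(predictors, aldehyde_pc1)           # regress aldehyde SVs on predictor PCs
print(pcr.score(predictors, aldehyde_pc1))  # R^2 of the PCR model
```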

Keywords: frying oils, lipid oxidation products, frying episodes, chemometrics, principal component regression, NMR analysis, cytotoxic/genotoxic aldehydes

Procedia PDF Downloads 162
13785 Segmentation of Liver Using Random Forest Classifier

Authors: Gajendra Kumar Mourya, Dinesh Bhatia, Akash Handique, Sunita Warjri, Syed Achaab Amir

Abstract:

Nowadays, medical imaging has become an integral part of modern healthcare. Abdominal CT images are an invaluable means for abdominal organ investigation and have been widely studied in recent years. Diagnosis of liver pathologies is one of the major areas of current interest in the field of medical image processing and is still an open problem. To study and diagnose the liver in depth, segmentation of the liver is performed to identify which part of the liver is most affected. Manual segmentation of the liver in CT images is time-consuming and suffers from inter- and intra-observer differences. However, automatic or semi-automatic computer-aided segmentation of the liver is a challenging task due to inter-patient variability in liver shape and size. In this paper, we present a technique for automatically segmenting the liver from CT images using a Random Forest classifier. Random forests, or random decision forests, are an ensemble learning method for classification that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes of the individual trees. After comparison with various other techniques, it was found that the Random Forest classifier provides better segmentation results with respect to accuracy and speed. We have validated our results using various techniques, and the method shows above 89% accuracy in all cases.
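
In practice this amounts to training a forest on per-voxel features from annotated CT slices and then classifying every voxel as liver or background. A minimal scikit-learn sketch under those assumptions (the features and labels below are synthetic placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic per-voxel features from abdominal CT: e.g., intensity, local mean,
# local variance, and (x, y, z) position; labels: 1 = liver, 0 = background
rng = np.random.default_rng(0)
features = rng.normal(size=(20000, 6))
labels = (features[:, 0] + 0.5 * features[:, 1] > 0).astype(int)  # toy truth

rf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
rf.fit(features[:15000], labels[:15000])      # train on annotated voxels
pred = rf.predict(features[15000:])           # classify the remaining voxels
print("accuracy:", (pred == labels[15000:]).mean())
# The predicted voxel mask is then post-processed (e.g., keeping the largest
# connected component) to yield the final liver segmentation.
```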

Keywords: CT images, image validation, random forest, segmentation

Procedia PDF Downloads 306
13784 Estimation and Comparison of Delay at Signalized Intersections Based on Existing Methods

Authors: Arpita Saha, Satish Chandra, Indrajit Ghosh

Abstract:

Delay represents the time loss experienced by a traveler while crossing an intersection. The efficiency of traffic operation at signalized intersections is assessed in terms of the delay caused to an individual vehicle. The Highway Capacity Manual (HCM) method and Webster's method are the most widely used in India for delay estimation. However, traffic in India is highly heterogeneous in nature, with extremely poor lane discipline. Therefore, to find the best delay estimation technique for Indian conditions, a comparison was made. In this study, seven signalized intersections from three different cities were chosen. Data were collected during both morning and evening peak hours, and only undersaturated cycles were considered. Delay was estimated from the field data: with the help of Simpson's 1/3rd rule, the delay of undersaturated cycles was estimated by measuring the area under the curve of queue length versus cycle time. The field-observed delay was then compared with the delay estimated using the HCM, Webster, probabilistic, Taylor's expansion, and regression methods. The drawbacks of the existing delay estimation methods for use in Indian heterogeneous traffic conditions were identified, and the best method was proposed. It was observed that direct estimation of delay using field-measured data is more accurate than the existing conventional and modified methods.
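
The field estimate rests on a simple identity: total delay in a cycle is the area under the queue-length-versus-time curve, which Simpson's 1/3rd rule integrates from sampled counts. A short Python illustration with invented queue counts:

```python
import numpy as np
from scipy.integrate import simpson

# Queue length (vehicles) sampled every 5 s over one undersaturated cycle
t = np.arange(0, 65, 5)                       # s; 13 sample points
queue = np.array([0, 2, 5, 8, 10, 9, 7, 5, 3, 1, 0, 0, 0])
total_delay = simpson(queue, x=t)             # veh-s: area under the curve
vehicles_served = 12                          # invented count for the cycle
print("average delay:", total_delay / vehicles_served, "s/veh")
```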

Keywords: delay estimation technique, field delay, heterogeneous traffic, signalised intersection

Procedia PDF Downloads 294
13783 Deformation Severity Prediction in Sewer Pipelines

Authors: Khalid Kaddoura, Ahmed Assad, Tarek Zayed

Abstract:

Sewer pipelines are prone to deterioration over time, and their deterioration does not follow a fixed downward pattern; it is driven by the defects that propagate through their service life. Sewer pipeline defects are categorized into distinct groups, the main two being structural and operational defects. By definition, structural defects influence the structural integrity of the sewer pipeline, such as deformation, cracks, fractures, holes, etc., whereas operational defects are the ones that affect the flow of the sewer medium in the pipeline, such as roots, debris, attached deposits, infiltration, etc. The process by which each defect emerges follows a cause-and-effect relationship. Deformation, which is a change in the sewer pipeline's geometry, is one influencing defect found in many sewer pipelines due to many surrounding factors. This defect can lead to collapse if the deformation percentage exceeds 15%. Therefore, it is essential to predict the deformation percentage before confronting such a situation. Accordingly, this study predicts the percentage of the deformation defect in sewer pipelines using multiple regression analysis. Several factors expected to influence deformation defect severity are considered in establishing the model. In addition, this study constructs a time-based curve to understand how the defect would evolve over time. This study is thus expected to be an asset for decision-makers, as it will provide informative conclusions about deformation defect severity; as a result, inspections will be minimized, and so will the associated budgets.
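
A minimal version of such a model is an ordinary-least-squares fit of observed deformation percentage on candidate factors, from which a time-based severity curve is read off by varying age while holding the other factors fixed. The Python sketch below uses invented inspection records and hypothetical factors, purely for illustration:

```python
import numpy as np
import statsmodels.api as sm

# Invented inspection records: pipe age (yr), burial depth (m), diameter (mm),
# soil aggressiveness score -> observed deformation (%)
X = np.array([[10, 2.1, 300, 1], [25, 3.0, 450, 2], [40, 2.5, 300, 3],
              [15, 1.8, 375, 1], [33, 2.9, 450, 2], [48, 3.2, 525, 3]],
             dtype=float)
deformation = np.array([2.0, 6.5, 11.8, 3.1, 8.9, 14.6])

model = sm.OLS(deformation, sm.add_constant(X)).fit()     # multiple regression

# Time-based severity curve: vary age, hold the other factors at fixed values
ages = np.arange(0, 61, 5, dtype=float)
profile = np.column_stack([ages,
                           np.full_like(ages, 2.5),       # depth
                           np.full_like(ages, 450.0),     # diameter
                           np.full_like(ages, 2.0)])      # soil score
curve = model.predict(sm.add_constant(profile, has_constant="add"))
print(ages[curve >= 15.0])   # ages at which the 15% collapse threshold is hit
```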

Keywords: deformation, prediction, regression analysis, sewer pipelines

Procedia PDF Downloads 180
13782 Simulation IDM for Schedule Generation of Slip-Form Operations

Authors: Hesham A. Khalek, Shafik S. Khoury, Remon F. Aziz, Mohamed A. Hakam

Abstract:

The linearity of slip-forming operations is a source of planning complications, and the operation can be subject to bottlenecks at any point, so careful planning is required in order to achieve success. On the other hand, discrete-event simulation (DES) concepts can be applied to simulate and analyze construction operations and to efficiently support construction scheduling. Nevertheless, the preparation of input data for construction simulation is very challenging, time-consuming, and error-prone. Therefore, to enhance the benefits of using DES in construction scheduling, this study proposes an integrated module that establishes a framework for automating schedule generation and decision support for slip-form construction projects, particularly during the project feasibility study phase, by exchanging data between project data stored in an intermediate database, the DES engine, and scheduling software. Using the stored information, the proposed system creates construction task attributes (e.g., activity durations, material quantities, and resource amounts); the DES engine then uses this information to create a proposed construction schedule automatically. This research demonstrates a flexible slip-form project modeling, rapid scenario-based planning, and schedule generation approach that may be of interest to both practitioners and researchers.
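
The scheduling core of such a module can be expressed as a discrete-event model in which each slip-form lift consumes shared resources and its finish time feeds the generated schedule. Below is a minimal Python/SimPy sketch of that idea (the study's environment, per its keywords, is EZstrobe; the activities, durations, and crane resource here are invented for illustration):

```python
import simpy

# Each slip-form lift: raise the form (needs the crane), fix rebar, place concrete
DURATIONS = {"raise_form": 0.5, "fix_rebar": 1.5, "place_concrete": 2.0}  # hours

def slipform_lift(env, lift_id, crane, log):
    with crane.request() as req:              # the crane is a shared bottleneck
        yield req
        yield env.timeout(DURATIONS["raise_form"])
    yield env.timeout(DURATIONS["fix_rebar"])
    yield env.timeout(DURATIONS["place_concrete"])
    log.append((lift_id, env.now))            # finish time feeds the schedule

def run(env, crane, log):
    for lift in range(1, 11):                 # ten consecutive lifts (linearity)
        yield env.process(slipform_lift(env, lift, crane, log))

env = simpy.Environment()
crane = simpy.Resource(env, capacity=1)
log = []
env.process(run(env, crane, log))
env.run()
print(log[-1])   # (last lift, total simulated duration in hours)
```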

Keywords: discrete-event simulation, modeling, construction planning, data exchange, scheduling generation, EZstrobe

Procedia PDF Downloads 369
13781 Local Revenue Generation: Its Contribution to the Development of the Municipality of Bacolod, Lanao Del Norte

Authors: Louvill Manangan Ozarraga

Abstract:

This study was designed to ascertain the revenue generation system of Bacolod, Lanao del Norte, through a complete enumeration of elected officials and permanent employees as sample respondents. The pertinent data were obtained through a structured questionnaire and with the help of key informants. The study utilized a cross-sectional survey design, and the data were analyzed and interpreted using frequency counts, percentage distributions, and weighted means. Among the major findings, the local revenue generation of the Municipality increased by PHP 4,465,394.21, roughly 73.52%, from 2018 to 2020. Administrative activities helped the Municipality cope with development, namely the issuance of ordinances, personnel augmentation, and collection strategies. Moreover, respondents were undecided as to whether revenue generation contributed to infrastructure and purchases of assets. The majority of the respondents agreed that the Municipality's local revenue generation contributes to the social welfare of its constituents, while the respondents disagreed that locally generated revenue augments the 20% development fund. The study revealed that there is a big difference between the 2018 and 2020 Real Property Tax (RPT) collections. No committee was created to monitor and supervise the municipal revenue generation system. The Municipality, through a partnership with TESDA, provides skilled-job opportunities to its constituents and participants.

Keywords: contribution, development, Bacolod Lanao del Norte, revenue generation system

Procedia PDF Downloads 72
13780 Evaluation of Hancornia speciosa Gomes Lyophilization at Different Stages of Maturation

Authors: D. C. Soares, J. T. S. Santos, D. G. Costa, A. K. S. Abud, T. P. Nunes, A. V. D. Figueiredo, A. M. de Oliveira Junior

Abstract:

Mangabeira (Hancornia speciosa Gomes), a plant native to Brazil, is found growing spontaneously in various regions of the country. The high perishability of tropical fruits such as mangaba makes it necessary to use technologies that promote conservation, aiming to increase the shelf life of the fruit and add value. The objective of this study was to compare the behaviour of the lyophilization curves of mangabas of different sizes and maturation stages. The fruits were freeze-dried for approximately 45 hours in a Liotop model L-108 lyophilizer. Fruits between 38 and 58 mm in diameter were considered large and those between 23 and 28 mm in diameter small, at two maturation states, intermediate and mature. The drying curves of large mangabas at both maturation states exhibited linear behaviour throughout the process, while the drying kinetics of the small fruits, independent of maturation state, exhibited typical drying behaviour, with all steps well defined. These results indicate that the lyophilization time was suitable for small mangabas but not for the large ones. This may indicate that large mangabas require a longer time to reach the equilibrium level, unlike the small fruits, which attained constant moisture at the end of the process. For both types of fruit, water activity, acidity, protein, lipid, and vitamin C were analysed before and after the process.

Keywords: freeze dryer, mangaba, conservation, chemical characteristics

Procedia PDF Downloads 294
13779 Transient Level in the Surge Chamber at the Robert-Bourassa Generating Station

Authors: Maryam Kamali Nezhad

Abstract:

The Robert-Bourassa development (LG-2), the first to be built on the Grande Rivière, comprises two sets of eight turbine-generator units each, in the East and West powerhouses. Each powerhouse has two tailrace tunnels with an average length of about 1178 m. The LG-2A powerhouse houses 6 turbine-generator units, with water discharged through two tailrace tunnels about 1330 m long. The objectives of this work at RB (LG-2) are: 1) to establish a new maximum transient level in the surge chamber, 2) to define the new maximum equipment flow rate for the future turbine-generator units, and 3) to ensure safe access to various intervention locations in the surge chamber. The transient levels under normal operating conditions at the RB plant were determined in 2001 by the Hydraulics Unit of HQE using the "Chamber" software. This is a one-dimensional mass oscillation calculation program; it is used to determine the variation of the water level in the surge chamber located downstream of a power plant during load rejection of the plant's units, and it can also be used in the case of a surge chamber upstream of a power plant. The RB (LG-2) plant study is based on the theoretical nominal geometry of the chamber and the tailrace tunnels and on the flow-level relationship at the outlet of the galleries established during design. The software is used in such a way that the results have an acceptable margin of safety, especially with respect to the maximum transient level (e.g., resumption of flow at an inopportune time), to take into account the turbulent and three-dimensional aspects of the actual flow in the chamber. Note that the transient levels depend on the water levels in the river and in the steady-state surge chamber. These data are established in the HQP CRP database and updated from time to time. The maximum transient levels in the RB-East and RB-West powerhouse surge chambers were revised based on the latest update (set 4) of the in-river rating curves and the steady-state surge chamber water levels. The results of the revision were also used to update the technical advice on the operating conditions for access to the aforementioned surge chamber, taking the revised calculated water levels into account.
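
The one-dimensional mass oscillation computed by such software reduces, in its simplest rigid-column form, to two coupled ODEs: continuity at the chamber and momentum along the tunnel. A minimal Python sketch of that textbook model follows; the geometry, loss coefficient, and initial flow are illustrative values, not LG-2 design data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rigid-column mass oscillation after full load rejection (illustrative values)
g = 9.81
L, A_t = 1200.0, 80.0    # tunnel length (m) and cross-sectional area (m^2)
A_s = 2000.0             # surge chamber plan area (m^2)
k = 5e-6                 # head-loss coefficient, s^2/m^5

def oscillation(t, x):
    z, Q = x             # chamber level above static level (m), tunnel flow (m^3/s)
    dz = Q / A_s                                   # continuity at the chamber
    dQ = -(g * A_t / L) * (z + k * Q * abs(Q))     # momentum along the tunnel
    return [dz, dQ]

sol = solve_ivp(oscillation, [0.0, 600.0], [0.0, 1000.0], max_step=0.5)
print("maximum transient level: %.1f m" % sol.y[0].max())
```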

Keywords: generating station, surge chamber, maximum transient level, hydroelectric power station, turbine-generator, reservoir

Procedia PDF Downloads 77
13778 Study on Acoustic Source Detection Performance Improvement of Microphone Array Installed on Drones Using Blind Source Separation

Authors: Youngsun Moon, Yeong-Ju Go, Jong-Soo Choi

Abstract:

Most drones that currently carry out surveillance/reconnaissance missions are equipped with optical equipment, but a microphone array is also needed to estimate the location of an acoustic source; this can provide additional information in the absence of optical equipment. The purpose of this study is to estimate the Direction of Arrival (DOA) of the acoustic source at the drone based on Time Difference of Arrival (TDOA) estimation. The problem is that it is impossible to measure the target acoustic source clearly because of the drone's own noise. The way to overcome this problem is to separate the drone noise and the target acoustic source using Blind Source Separation (BSS) based on Independent Component Analysis (ICA). ICA can be performed under the assumptions that the drone noise and the target acoustic source are independent and that each signal is non-Gaussian. To maximize the non-Gaussianity of each signal, we use negentropy and kurtosis measures from probability theory. As a result, TDOA estimation and DOA estimation of the target source can be improved in the noisy environment. We simulated the performance of the DOA algorithm with the BSS algorithm applied, and demonstrated the simulation through an experiment in an anechoic wind tunnel.
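
The separation and TDOA steps can be prototyped in a few lines of Python: FastICA (with a kurtosis-type contrast) unmixes the microphone channels, and GCC-PHAT then estimates the inter-microphone delay of the recovered target. Everything below is a synthetic stand-in for the real array recordings; the signals, mixing matrix, and delay are assumptions for illustration.

```python
import numpy as np
from scipy.signal import chirp
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

fs = 16000
n = 4 * fs
rng = np.random.default_rng(0)
tt = np.arange(n) / fs
target = chirp(tt, 300.0, 4.0, 3000.0)                 # stand-in acoustic source
drone = rng.laplace(size=n)                            # non-Gaussian rotor noise
mixes = np.c_[0.8 * target + 0.6 * drone,              # two microphone channels
              0.3 * target + 0.9 * drone]

ica = FastICA(n_components=2, fun="cube", random_state=0)  # kurtosis contrast
sources = ica.fit_transform(mixes)
tgt = sources[:, np.argmin(kurtosis(sources, axis=0))]  # chirp: low kurtosis

def gcc_phat_tdoa(x, y, fs, max_lag=40):
    """TDOA of x relative to y via the phase transform (GCC-PHAT)."""
    nfft = len(x) + len(y)
    X, Y = np.fft.rfft(x, nfft), np.fft.rfft(y, nfft)
    R = X * np.conj(Y)
    cc = np.fft.irfft(R / (np.abs(R) + 1e-12), nfft)
    cc = np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))
    return (np.argmax(np.abs(cc)) - max_lag) / fs

# Simulate a 5-sample inter-microphone delay of the separated target signal
print(gcc_phat_tdoa(np.roll(tgt, 5), tgt, fs))   # ~ +5/fs = 312.5 microseconds
```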

Keywords: aeroacoustics, acoustic source detection, time difference of arrival, direction of arrival, blind source separation, independent component analysis, drone

Procedia PDF Downloads 153