Search results for: wireless channel modelling
565 Study of Morning-Glory Spillway Structure in Hydraulic Characteristics by CFD Model
Authors: Mostafa Zandi, Ramin Mansouri
Abstract:
Spillways are among the most important hydraulic structures of dams, providing stability to the dam and to downstream areas in times of flood. The morning-glory spillway is a common type for discharging overflow water from behind dams; these spillways are constructed at dams with small reservoirs. In this research, the hydraulic flow characteristics of a morning-glory spillway are investigated with a CFD model. The two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method, and the PISO scheme was applied for velocity-pressure coupling. The widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power-law scheme was used for the discretization of the momentum, k, and ε equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. The results show that a fine computational grid, a velocity-inlet condition at the flow inlet boundary, and a pressure-outlet condition at the boundaries in contact with air provide the best possible results. The standard wall function was chosen to account for near-wall effects, and the standard k-ε turbulence model gave the results most consistent with the experimental data. As the jet approaches the end of the basin, the difference between the computational and experimental results grows. The lower profile of the water jet was less sensitive to the hydraulic conditions than the upper profile. The pressure tests also showed that the numerical pressure values at the lower landing number differ greatly from the experimental results. In summary, the characteristics of the complex flow over a morning-glory spillway were studied numerically using a RANS solver. A grid study showed that the numerical results of a 57512-node grid had the best agreement with the experimental values, the preferred downstream channel length was 1.5 m, and the standard k-ε turbulence model produced the best results for the morning-glory spillway. The numerical free-surface profiles followed the theoretical equations very well.
Keywords: morning-glory spillway, CFD model, hydraulic characteristics, wall function
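For reference, the standard k-ε model named above solves two transport equations alongside the RANS momentum equations. A brief sketch in its common incompressible textbook form (not quoted from the paper itself):

```latex
\frac{\partial k}{\partial t} + \bar{u}_j \frac{\partial k}{\partial x_j}
  = \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)
    \frac{\partial k}{\partial x_j}\right] + P_k - \varepsilon, \qquad
\frac{\partial \varepsilon}{\partial t} + \bar{u}_j \frac{\partial \varepsilon}{\partial x_j}
  = \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)
    \frac{\partial \varepsilon}{\partial x_j}\right]
    + C_{1\varepsilon}\frac{\varepsilon}{k}\,P_k - C_{2\varepsilon}\frac{\varepsilon^2}{k}
```

with eddy viscosity ν_t = C_μ k²/ε, turbulence production P_k, and the usual constants C_μ = 0.09, C_{1ε} = 1.44, C_{2ε} = 1.92, σ_k = 1.0, σ_ε = 1.3.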
Procedia PDF Downloads 77
564 Modelling and Optimization of a Combined Sorption Enhanced Biomass Gasification with Hydrothermal Carbonization, Hot Gas Cleaning and Dielectric Barrier Discharge Plasma Reactor to Produce Pure H₂ and Methanol Synthesis
Authors: Vera Marcantonio, Marcello De Falco, Mauro Capocelli, Álvaro Amado-Fierro, Teresa A. Centeno, Enrico Bocci
Abstract:
Concerns about energy security, energy prices, and climate change have led scientific research towards sustainable alternatives to fossil fuels: renewable energy sources coupled with hydrogen as an energy vector and with carbon capture and conversion technologies. Among the technologies investigated in the last decades, biomass gasification has acquired great interest owing to the possibility of low-cost, CO₂-negative hydrogen production from a large variety of widely available organic wastes. Upstream and downstream treatments have been studied in order to maximize hydrogen yield, reduce the content of organic and inorganic contaminants below the admissible levels for the technologies they are coupled with, and capture and convert carbon dioxide. However, studies analysing a whole process chain combining all of these technologies are still missing. To fill this gap, the present paper investigates the coexistence of hydrothermal carbonization (HTC), sorption-enhanced gasification (SEG), hot gas cleaning (HGC), and CO₂ conversion by a dielectric barrier discharge (DBD) plasma reactor for H₂ production from biomass waste, by means of Aspen Plus software. The proposed model aims to identify and optimise the performance of the plant by varying the operating parameters (such as temperature, CaO/biomass ratio, separation efficiency, etc.). The carbon footprint of the global plant is 2.3 kg CO₂/kg H₂, lower than the latest limit value imposed by the European Commission for hydrogen to be considered "clean", which was set at 3 kg CO₂/kg H₂. The hydrogen yield referred to the whole plant is 250 g H₂/kg biomass.
Keywords: biomass gasification, hydrogen, Aspen Plus, sorption-enhanced gasification
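As a quick plausibility check on the figures quoted in this abstract, a back-of-envelope sketch in Python; the two input values are taken from the abstract, while the implied CO₂-per-biomass figure is derived here for illustration and is not reported by the authors:

```python
# Reported plant figures (from the abstract above).
h2_yield = 0.250   # kg H2 produced per kg biomass (250 g H2/kg biomass)
footprint = 2.3    # kg CO2 emitted per kg H2
eu_limit = 3.0     # kg CO2/kg H2 threshold cited for "clean" hydrogen

# Derived, illustrative only: implied emissions per kg of biomass fed.
co2_per_biomass = footprint * h2_yield
print(f"implied emissions: {co2_per_biomass:.3f} kg CO2 per kg biomass")
print(f"margin below the EU threshold: {1 - footprint / eu_limit:.0%}")
```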
Procedia PDF Downloads 79
563 FMCW Doppler Radar Measurements with Microstrip Tx-Rx Antennas
Authors: Yusuf Ulaş Kabukçu, Sinan Çelik, Onur Salan, Maide Altuntaş, Mert Can Dalkiran, Göksenin Bozdağ, Metehan Bulut, Fatih Yaman
Abstract:
This study presents a more compact implementation of the 2.4 GHz MIT Coffee Can Doppler Radar at a 2.6 GHz operating frequency. The main difference in our prototype lies in the use of microstrip antennas, which make it possible to transport the system on a small robotic vehicle. We designed our radar system with two different channels: Tx and Rx. The system mainly consists of a Voltage Controlled Oscillator (VCO) source, low-noise amplifiers, microstrip antennas, a splitter, a mixer, a low-pass filter, and the necessary RF connectors and cables. The two microstrip antennas, a single element for the transmitter and an array for the receiver channel, were designed, fabricated and verified by experiments. The system has two operation modes: speed detection and range detection. If the operation-mode switch is 'off', only a CW signal is transmitted for speed measurement. When the switch is 'on', the CW signal is frequency-modulated and range detection becomes possible. In speed detection mode, a high-frequency (2.6 GHz) signal is generated by the VCO and then amplified to reach a reasonable level of transmit power. Before the amplified signal is transmitted through a microstrip patch antenna, a splitter is used in order to compare the frequencies of the transmitted and received signals. Half of the amplified signal (LO) is forwarded to a mixer, which compares the frequencies of the transmitted and received (RF) signals and yields the IF output, in other words the Doppler frequency information. The IF output is then filtered and amplified so the signal can be processed digitally. The filtered and amplified signal carrying the Doppler frequency is fed into the audio input of a computer, and the Doppler frequency is displayed as a speed reading in a figure via a Matlab script. According to experimental field measurements, the accuracy of the speed measurement is approximately 90%. In range detection mode, the transmitted signal is linearly frequency-modulated to form an FM chirp. This chirp makes it possible to determine the range of the target, since the Doppler frequency measured with CW alone is not enough for range detection. Such an FMCW Doppler radar may be used in national border security, since it is capable of both speed and range detection.
Keywords: Doppler radar, FMCW, range detection, speed detection
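The two operating modes rest on standard CW-Doppler and FMCW relations; a generic sketch of both follows. The 2.6 GHz carrier is from the abstract, while the chirp bandwidth, chirp duration and example frequencies are assumed for illustration and are not the authors' parameters:

```python
C = 3.0e8    # speed of light, m/s
F0 = 2.6e9   # carrier frequency, Hz (as in the prototype)

def speed_from_doppler(f_doppler_hz: float) -> float:
    """CW mode: target radial speed from the measured Doppler shift."""
    return f_doppler_hz * C / (2 * F0)

def range_from_beat(f_beat_hz: float, bandwidth_hz: float, chirp_s: float) -> float:
    """FMCW mode: target range from the beat frequency of a linear chirp."""
    slope = bandwidth_hz / chirp_s            # chirp slope, Hz/s
    return C * f_beat_hz / (2 * slope)

# A 173 Hz Doppler shift at 2.6 GHz corresponds to ~10 m/s (36 km/h).
print(speed_from_doppler(173.0))              # ≈ 9.98 m/s
# Assumed 50 MHz bandwidth, 1 ms chirp: a 20 kHz beat maps to 60 m range.
print(range_from_beat(20e3, 50e6, 1e-3))      # = 60.0 m
```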
Procedia PDF Downloads 398
562 Finite Element Modelling for the Development of a Planar Ultrasonic Dental Scaler for Prophylactic and Periodontal Care
Authors: Martin Hofmann, Diego Stutzer, Thomas Niederhauser, Juergen Burger
Abstract:
Dental biofilm is the main etiologic factor for caries and for periodontal and peri-implant infections. In addition to the risk of tooth loss, periodontitis is also associated with an increased risk of systemic diseases such as atherosclerotic cardiovascular disease and diabetes. For this reason, dental hygienists use ultrasonic scalers for prophylactic and periodontal care of the teeth. However, current instruments are limited in their dimensions and operating frequencies. The innovative design of a planar ultrasonic transducer introduces a new type of dental scaler. The flat, titanium-based design allows the mass to be significantly reduced compared to a conventional screw-mounted Langevin transducer, resulting in a more efficient and controllable scaler. For the development of the novel device, multi-physics finite element analysis was used to simulate and optimise various design concepts. This process was supported by prototyping and electromechanical characterisation. The feasibility and potential of a planar ultrasonic transducer have already been confirmed by our current prototypes, which achieve higher performance compared to commercial devices. Operating at the desired resonance frequency of 28 kHz with a driving voltage of 40 Vrms results in an in-plane tip oscillation with a displacement amplitude of up to 75 μm, with less than 8% out-of-plane movement and an energy transformation factor of 1.07 μm/mA. In a further step, we will adapt the design to two additional resonance frequencies (20 and 40 kHz) to obtain information about the most suitable mode of operation. In addition to the already integrated characterization methods, we will evaluate the clinical efficiency of the different devices in an in vitro setup with an artificial biofilm pocket model.
Keywords: ultrasonic instrumentation, ultrasonic scaling, piezoelectric transducer, finite element simulation, dental biofilm, dental calculus
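For context, the reported operating point (28 kHz resonance, 75 μm in-plane amplitude) implies the following peak tip kinematics under simple harmonic motion; a minimal sketch using only the abstract's figures:

```python
import math

f = 28e3    # resonance frequency, Hz (reported)
A = 75e-6   # displacement amplitude, m (reported 75 um)

omega = 2 * math.pi * f
v_peak = omega * A       # peak tip velocity for sinusoidal motion
a_peak = omega**2 * A    # peak tip acceleration

print(f"peak tip velocity: {v_peak:.1f} m/s")        # ~13.2 m/s
print(f"peak acceleration: {a_peak / 9.81:.2e} g")   # ~2.4e5 g
```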
Procedia PDF Downloads 123
561 Drug and Poison Information Centers: An Emergent Need of Health Care Professionals in Pakistan
Authors: Asif Khaliq, Sayeeda A. Sayed
Abstract:
Drug information centers provide drug-related information to requesters including physicians, pharmacists, nurses and other allied health care professionals. The International Pharmaceutical Federation (FIP) describes the basic functions of a drug and poison information center as drug evaluation, therapeutic counseling, pharmaceutical advice, research, pharmacovigilance and toxicology. Continuous advancement in the field of medicine has expanded the medical literature, which has increased the demand for drug and poison information centers for the guidance, support and facilitation of physicians. The objective of the study is to determine the need for drug and poison information centers in public and private hospitals of Karachi, Pakistan. A cross-sectional study was conducted from July 2013 to April 2014 using a self-administered, multi-item questionnaire. Non-probability convenience sampling was used to select the study participants. A total of 307 physicians from public and private hospitals of Karachi participated in the study. The need for a 24/7 drug and poison information center was highlighted by 92% of physicians, and 67% of physicians suggested opening a drug information center at their hospital. It was reported that 70% of physicians take at least 15 minutes to search for information about a drug while managing a case. Regarding the management of poisoning cases, 52% of physicians complained about the unavailability of medicines in hospitals and mentioned the importance of medicines for safe and timely management of patients. Although 73% of physicians attended continued medical education (CME) sessions, 92% of physicians insisted on the need for a 24/7 drug and poison information center. The scarcity of organized channels for obtaining information about drugs and poisons is one of the most crucial problems for healthcare workers in Pakistan. A drug and poison information center is an advisory body that assists health care professionals and patients in the provision of appropriate information on drugs and hazardous substances, and is one of the integral needs for running an effective health care system. Provision of 24/7 drug and poison information centers with specialized staff offers multiple benefits to hospitals, reducing treatment delays, addressing awareness gaps among all stakeholders and ensuring the provision of quality health care.
Keywords: drug and poison information centers, Pakistan, physicians, public and private hospitals
Procedia PDF Downloads 328
560 Electronic Six-Minute Walk Test (E-6MWT): Less Manpower, Higher Efficiency, and Better Data Management
Authors: C. M. Choi, H. C. Tsang, W. K. Fong, Y. K. Cheng, T. K. Chui, L. Y. Chan, K. W. Lee, C. K. Yuen, P. W. Lau, Y. L. To, K. C. Chow
Abstract:
The six-minute walk test (6MWT) is a sub-maximal exercise test to assess the aerobic capacity and exercise tolerance of patients with chronic respiratory disease and heart failure. It has been proven a reliable and valid tool and is commonly used in clinical situations. The traditional 6MWT is labour-intensive and time-consuming, especially for patients who require assistance in ambulation and oxygen use. When performing the test with these patients, one staff member assists the patient in walking (with or without aids) while another manually records the patient's oxygen saturation, heart rate and walking distance at every minute and/or carries the oxygen cylinder at the same time. The physiotherapist then has to document the test results in the bed notes in detail. With the electronic 6MWT (E-6MWT), patients wear a wireless oximeter that transfers data to a tablet PC via Bluetooth. Oxygen saturation, heart rate, and distance are recorded and displayed in real time, so no manual recording is needed. The tablet generates a comprehensive report which can be directly attached to the patient's bed notes for documentation, and the data can also be saved for later patient follow-up. This study was carried out in North District Hospital. Patients who followed commands and required 6MWT assessment were included and assigned to either the study or the control group. In the study group, patients adopted the E-6MWT, while those in the control group adopted the traditional 6MWT. Manpower and time consumed were recorded, and physiotherapists completed a questionnaire about the use of the E-6MWT. In total, 12 subjects (study = 6; control = 6) were recruited during 11-12/2017. The average number of staff required and time consumed in the traditional 6MWT were 1.67 and 949.33 seconds, respectively; in the E-6MWT, the figures were 1.00 and 630.00 seconds. Compared to the traditional 6MWT, the E-6MWT required 67.00% less manpower and 50.10% less time. Physiotherapists (n = 7) found the E-6MWT convenient to use (mean = 5.14; satisfied to very satisfied), requiring less manpower and time to complete the test (mean = 4.71; rather satisfied to satisfied), offering better data management (mean = 5.86; satisfied to very satisfied), and recommended its clinical use (mean = 5.29; satisfied to very satisfied). The E-6MWT is thus shown to require less manpower input while offering higher efficiency and better data management, and it is welcomed by frontline clinical staff.
Keywords: electronic, physiotherapy, six-minute walk test, 6MWT
Procedia PDF Downloads 154
559 GC-MS Data Integrated Chemometrics for the Authentication of Vegetable Oil Brands in Minna, Niger State, Nigeria
Authors: Rasaq Bolakale Salau, Maimuna Muhammad Abubakar, Jonathan Yisa, Muhammad Tauheed Bisiriyu, Jimoh Oladejo Tijani, Alexander Ifeanyi Ajai
Abstract:
Vegetable oils are widely consumed in Nigeria, which has led to the competitive manufacture of various oil brands and, with it, increasing tendencies toward fraud, labelling misinformation and other unwholesome practices. A total of thirty samples, including raw and corresponding branded samples of vegetable oils, were collected; the oils were extracted from raw groundnut, soya bean and oil palm fruits. The GC-MS data were subjected to the chemometric techniques of PCA and HCA, using version 8.7 of SOLO, the standalone chemometrics software developed by Eigenvector Research Incorporated and powered by PLS Toolbox. The GC-MS fingerprint gave a basis for discrimination, as it revealed four predominant but unevenly distributed fatty acid methyl esters: hexadecanoic acid methyl ester (10.27-45.21% PA), 9,12-octadecadienoic acid methyl ester (10.9-45.94% PA), 9-octadecenoic acid methyl ester (18.75-45.65% PA), and eicosanoic acid methyl ester (1.19-6.29% PA). In the PCA modelling, two PCs were retained, capturing a cumulative variance of 73.15%. The score plots indicated that the palm oil brands are most aligned with raw palm oil. The PCA loading plot reveals the signature retention times between 4.0 and 6.0 needed for quality assurance and authentication of the oil samples; these correspond to aromatic hydrocarbon, alcohol and aldehyde functional groups. The HCA dendrogram, modelled using Euclidean distance with Ward's method, indicated co-equivalent samples. HCA revealed the pair of raw palm oil and the corresponding palm oil brand to be the closest neighbours (±1.62% A difference) based on variance-weighted distance, and showed the palm olein brand to be the most authentic. In conclusion, based on the GC-MS data with chemometrics, the authenticity of the branded samples is ranked as: palm oil > soya oil > groundnut oil.
Keywords: vegetable oil, authenticity, chemometrics, PCA, HCA, GC-MS
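A minimal sketch of the chemometric workflow described above (autoscaling, PCA, then Ward-linkage HCA with Euclidean distance), implemented with scikit-learn and SciPy on a placeholder peak-area matrix rather than the authors' SOLO/PLS Toolbox data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

# Placeholder matrix: 30 oil samples x 4 FAME peak areas (%PA), drawn within
# the ranges reported in the abstract purely to make the example runnable.
rng = np.random.default_rng(0)
X = rng.uniform([10.27, 10.9, 18.75, 1.19], [45.21, 45.94, 45.65, 6.29], (30, 4))

# PCA on autoscaled data; the paper retains 2 PCs (73.15% cumulative variance).
pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print("cumulative variance captured:", round(pca.explained_variance_ratio_.sum(), 3))

# HCA with Euclidean distance and Ward's method, as in the paper.
Z = linkage(scores, method="ward")
print("cluster labels:", fcluster(Z, t=3, criterion="maxclust"))
```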
Procedia PDF Downloads 31
558 Optimization and Kinetic Analysis of the Enzymatic Hydrolysis of Oil Palm Empty Fruit Bunch To Xylose Using Crude Xylanase from Trichoderma Viride ITB CC L.67
Authors: Efri Mardawati, Ronny Purwadi, Made Tri Ari Penia Kresnowati, Tjandra Setiadi
Abstract:
Palm oil empty fruit bunches (EFB) are a lignocellulosic waste from the crude palm oil industry, mainly composed of cellulose (≈43%), hemicellulose (≈23%) and lignin (≈20%). Xylan, a polymer of the pentose sugar xylose, is the most abundant component of hemicellulose in the plant cell wall. Xylose can further be used as a raw material for the production of a wide variety of chemicals, such as xylitol, which is extensively used in food, pharmaceutical and thin-coating applications. Currently, xylose is mostly produced from xylan via chemical hydrolysis processes. However, these processes are normally conducted at high temperature and pressure, which is costly, and the required downstream processes are relatively complex. As an alternative, enzymatic hydrolysis of xylan to xylose offers an environmentally friendly biotechnological process, performed at ambient temperature and pressure with high specificity and at low cost. This process is catalysed by xylanolytic enzymes that can be produced by fungal species such as Aspergillus niger, Penicillium chrysogenum, Trichoderma reesei, etc. The fungus used to produce the crude xylanase enzyme in this study is T. viride ITB CC L.67. The purposes of this research are to study the influence of EFB pretreatment on the enzymatic hydrolysis process, to optimise the temperature and pH of the hydrolysis, to study the influence of substrate and enzyme concentrations and the dynamics of the hydrolysis process, and subsequently to study the kinetics of this process. Xylose, the product of the enzymatic hydrolysis, was analyzed by HPLC. The results show that thermal pretreatment of EFB enhances the enzymatic hydrolysis. The enzymatic hydrolysis is well approximated by the Michaelis-Menten kinetic model, and the kinetic parameters were obtained from experimental data.
Keywords: oil palm empty fruit bunches (EFB), xylose, enzymatic hydrolysis, kinetic modelling
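Since the hydrolysis is approximated by the Michaelis-Menten model, the parameter-estimation step can be sketched as a nonlinear least-squares fit; the rate data below are invented for illustration and are not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Initial hydrolysis rate v as a function of substrate concentration s."""
    return vmax * s / (km + s)

# Illustrative xylan concentrations (g/L) and measured initial rates (g/L/h).
s = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
v = np.array([0.40, 0.83, 1.25, 1.67, 2.00, 2.22])

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=[2.0, 10.0])
print(f"Vmax ≈ {vmax:.2f} g/L/h, Km ≈ {km:.1f} g/L")
```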
Procedia PDF Downloads 389
557 Finite Element Modeling of Aortic Intramural Haematoma Shows Size Matters
Authors: Aihong Zhao, Priya Sastry, Mark L Field, Mohamad Bashir, Arvind Singh, David Richens
Abstract:
Objectives: Intramural haematoma (IMH) is one of the pathologies, along with acute aortic dissection, that present as Acute Aortic Syndrome (AAS). Evidence suggests that, unlike aortic dissection, some intramural haematomas may regress with medical management; nevertheless, intramural haematomas have traditionally been managed like acute aortic dissections. Given that some of these pathologies may regress with conservative management, it would be useful to be able to identify which of them may not need high-risk emergency intervention. A computational aortic model was used in this study to try to identify intramural haematomas at risk of progression to aortic dissection. Methods: We created a computational model of the aorta with luminal blood flow. Reports in the literature have identified 11 mm as the radial clot thickness associated with a heightened risk of progression of intramural haematoma. Accordingly, haematomas of varying sizes were implanted in the modelled aortic wall to test this hypothesis. The model was exposed to physiological blood flows, and the stresses and strains in each layer of the aortic wall were recorded. Results: The size and shape of the clot were seen to affect the magnitude of aortic stresses. The greatest stresses and strains were recorded in the intima of the model. When the haematoma exceeded 10 mm in all dimensions, the stress on the intima reached breaking point. Conclusion: Intramural clot size appears to be a contributory factor affecting aortic wall stress. Our computer simulation corroborates clinical evidence in the literature proposing that an IMH diameter greater than 11 mm may be predictive of progression. This preliminary report suggests that finite element modelling of the aortic wall may be a useful way to examine putative variables important in predicting progression or regression of intramural haematoma.
Keywords: intramural haematoma, acute aortic syndrome, finite element analysis
Procedia PDF Downloads 431
556 Factors Impacting Geostatistical Modeling Accuracy and Modeling Strategy of Fluvial Facies Models
Authors: Benbiao Song, Yan Gao, Zhuo Liu
Abstract:
Geostatistical modeling is the key technique for reservoir characterization, and the quality of geological models greatly influences the prediction of reservoir performance, but few studies have quantified the factors impacting geostatistical reservoir modeling accuracy. In this study, 16 fluvial prototype models were established to represent different levels of geological complexity, and 6 cases ranging from 16 to 361 wells were defined to reproduce all 16 prototype models by different methodologies, including SIS, object-based and MPFS algorithms, accompanied by different constraint parameters. A modeling accuracy ratio was defined to quantify the influence of each factor, and ten realizations were averaged to represent each accuracy ratio under the same modeling conditions and parameter association. In total, 5760 simulations were run to quantify the relative contribution of each factor to the simulation accuracy, and the results can be used as a strategy guide for facies modeling under similar conditions. It was found that data density, geological trend and geological complexity have a great impact on modeling accuracy. Modeling accuracy may reach up to 90% when the channel sand width is up to 1.5 times the well spacing, under whatever condition, for the SIS and MPFS methods. When well density is low, the contribution of the geological trend may increase the modeling accuracy from 40% to 70%, while the use of a proper variogram may contribute very little for the SIS method. This implies that when well data are dense enough to cover simple geobodies, little effort is needed to construct an acceptable model, while when geobodies are complex and the data are insufficient, it is better to construct a set of robust geological trends than to rely on a variogram function alone. For the object-based method, modeling accuracy does not increase with data density as clearly as for the SIS method, but it maintains a reasonable appearance when data density is low. The MPFS method shows a similar trend to the SIS method, but the use of a proper geological trend together with a rational variogram may give better modeling accuracy than the MPFS method alone. This implies that the geological modeling strategy for a real reservoir case needs to be optimized by evaluating the dataset, the geological complexity, the geological constraint information and the modeling objective.
Keywords: fluvial facies, geostatistics, geological trend, modeling strategy, modeling accuracy, variogram
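The paper does not spell out the exact formula of its modeling accuracy ratio; one plausible reading, sketched below, is the cell-wise facies agreement between a realization and the prototype model, averaged over the ten realizations:

```python
import numpy as np

def accuracy_ratio(prototype: np.ndarray, realizations: list[np.ndarray]) -> float:
    """Fraction of grid cells whose facies code matches the prototype,
    averaged over a set of realizations (ten in the paper)."""
    matches = [np.mean(r == prototype) for r in realizations]
    return float(np.mean(matches))

# Toy example: a two-facies prototype grid and ten noisy reproductions.
rng = np.random.default_rng(1)
proto = (rng.random((100, 100)) < 0.3).astype(int)   # 30% channel sand
reals = [np.where(rng.random(proto.shape) < 0.9, proto, 1 - proto)
         for _ in range(10)]
print(f"accuracy ratio: {accuracy_ratio(proto, reals):.2f}")   # ~0.90
```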
Procedia PDF Downloads 264
555 Partnering with Stakeholders to Secure Digitization of Water
Authors: Sindhu Govardhan, Kenneth G. Crowther
Abstract:
Modernisation of the water sector is leading to increased connectivity and the integration of emerging technologies with traditional ones, creating new security risks. The convergence of Information Technology (IT) with Operational Technology (OT) results in solutions that are spread across larger geographic areas, increasingly consist of interconnected Industrial Internet of Things (IIoT) devices and software, rely on the integration of legacy with modern technologies, and use complex supply chain components, leading to complex architectures and communication paths. The result is that multiple parties collectively own and operate these emergent technologies, threat actors find new paths to exploit, and traditional cybersecurity controls are inadequate. Our approach is to explicitly identify and draw the data flows that cross trust boundaries between owners and operators of the various aspects of these emerging and interconnected technologies. On these data flows, we layer potential attack vectors to create a frame of reference for evaluating possible risks against connected technologies. Finally, we identify where existing controls, mitigations, and other remediations exist across industry partners (e.g., suppliers, product vendors, integrators, water utilities, and regulators). From these, we can understand the potential gaps in security, the roles in the supply chain that are most likely to remediate those security gaps effectively, and test cases to evaluate and strengthen security across these partners. This informs a "shared responsibility" solution that recognises that security is multi-layered and requires collaboration to be successful. This shared-responsibility security framework improves visibility, understanding, and control across the entire supply chain, particularly for those water utilities that are accountable for safe and continuous operations.
Keywords: cyber security, shared responsibility, IIOT, threat modelling
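The core bookkeeping of the approach, enumerating data flows and flagging those that cross an ownership/trust boundary for threat review, can be sketched as a simple data model; all component names and flows below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Flow:
    src: str
    dst: str
    src_owner: str
    dst_owner: str
    channel: str

flows = [
    Flow("pump PLC", "SCADA historian", "water utility", "water utility", "fieldbus"),
    Flow("SCADA historian", "vendor cloud", "water utility", "product vendor", "TLS over WAN"),
    Flow("IIoT level sensor", "integrator gateway", "supplier", "integrator", "LoRaWAN"),
]

# Flows that cross a trust boundary (different owners) get threat review first.
for f in flows:
    if f.src_owner != f.dst_owner:
        print(f"boundary crossing: {f.src} -> {f.dst} via {f.channel}")
```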
Procedia PDF Downloads 77
554 Analysis of Socio-Economics of Tuna Fisheries Management (Thunnus Albacares Marcellus Decapterus) in Makassar Waters Strait and Its Effect on Human Health and Policy Implications in Central Sulawesi-Indonesia
Authors: Siti Rahmawati
Abstract:
Indonesia has experienced a long period of monetary and economic crisis, followed by an upward trend in the price of fuel oil. This situation impacts all aspects of the tuna fishing community: the basic needs of fishing communities increase while purchasing power falls, leading to economic and social instability and affecting the health of fishermen's households. To understand this, the AHP method is applied to model tuna fisheries management priorities, the cold-chain marketing channel, and the utilization levels that impact human health. The study is designed as developmental research with 180 respondents, and the data were analyzed by the Analytical Hierarchy Process (AHP) method. The development of the tuna fishery business can improve productivity through economic empowerment activities for coastal communities, improving the competitiveness of products, developing fish processing centers and providing internal capital for the optimal development of the fishery business. From the economic aspect, the fishery business is attractive because its benefit-cost ratio is 2.86; over the project's 10-year economic life it performs well, and since B/C > 1 the rate of investment is economically viable. From the health aspect, tuna can reduce the risk of dying from heart disease by 50% because of the selenium it supplies to the human body. Consumption of 100 g of tuna meets 52.9% of the body's selenium requirement and activates the antioxidant enzyme glutathione peroxidase, which can protect the body from free radicals and various cancers. The results of the analytic hierarchy process show that the quality of tuna products is the top priority for export, along with quality control, in order to compete in the global market. Implementation of the policy can increase the income of fishermen, reduce poverty among fishermen's households, and have an impact on human health for those at high risk of disease.
Keywords: management of tuna, social, economic, health
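The AHP step reduces to extracting a priority vector from a pairwise comparison matrix and checking its consistency; a minimal sketch with an invented 3x3 judgment matrix (not the study's actual criteria or judgments):

```python
import numpy as np

# Invented pairwise comparisons of three management criteria (Saaty 1-9 scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # priority vector of the criteria

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)  # consistency index
cr = ci / 0.58                        # random index RI = 0.58 for n = 3
print("priorities:", np.round(w, 3))
print("consistency ratio:", round(cr, 3))  # CR < 0.1 is conventionally acceptable
```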
Procedia PDF Downloads 316
553 Earth Observations and Hydrodynamic Modeling to Monitor and Simulate the Oil Pollution in the Gulf of Suez, Red Sea, Egypt
Authors: Islam Abou El-Magd, Elham Ali, Mohamed Zakzouk, Nesreen Khairy, Naglaa Zanaty
Abstract:
The marine environment and coastal zone are rich in natural resources that contribute to the local economy of Egypt. The Gulf of Suez and Red Sea area accommodates diverse human activities that contribute to the local economy, including oil exploration and production, touristic activities, and export and import harbours; however, it is constantly under threat of pollution due to human interaction and activities. This research aimed at integrating in-situ measurements and remotely sensed data with a hydrodynamic model to map and simulate the oil pollution. High-resolution satellite sensors, including Sentinel-2 and Planet Labs imagery, were used to trace the oil pollution. The spectral band ratio of band 4 (infrared) over band 3 (red) underpinned the mapping of the point-source pollution from the oil industrial estates; this ratio is supported by the absorption windows detected in the hyperspectral profiles. An ASD in-situ hyperspectral device was used to measure the oil pollution in the marine environment experimentally. The experiment measured the spectral behaviour of water in three cases: a) clear water without oil, b) water covered with crude oil, and c) water some time after the crude oil was released. The spectral curves clearly identified absorption windows for oil pollution, particularly at 600-700 nm. The MIKE 21 model was applied to simulate the dispersion of the oil contamination and to create scenarios for crisis management. The model requires precise preparation of the bathymetry, tide, wave and atmospheric data, which were obtained partly from online modelled data and partly from historical in-situ stations. The simulation made it possible to project the movement of the oil spill and could underpin a warning system for mitigation. Details of the research results are described in the paper.
Keywords: oil pollution, remote sensing, modelling, Red Sea, Egypt
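The band-ratio mapping described above (band 4 over band 3) is straightforward to reproduce on any co-registered reflectance arrays; a generic NumPy sketch, with the slick threshold chosen arbitrarily for illustration:

```python
import numpy as np

def oil_ratio_mask(band4: np.ndarray, band3: np.ndarray, threshold: float = 1.4):
    """Ratio of infrared (band 4) to red (band 3) reflectance; pixels above
    the threshold are flagged as potential oil slick. Threshold is illustrative."""
    ratio = band4 / np.clip(band3, 1e-6, None)   # guard against division by zero
    return ratio, ratio > threshold

# Toy 3x3 reflectance arrays standing in for co-registered satellite tiles.
b4 = np.array([[0.30, 0.42, 0.28], [0.55, 0.31, 0.29], [0.27, 0.44, 0.30]])
b3 = np.array([[0.25, 0.26, 0.24], [0.27, 0.25, 0.26], [0.23, 0.28, 0.25]])
ratio, mask = oil_ratio_mask(b4, b3)
print(mask.astype(int))   # 1 marks pixels flagged as potential slick
```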
Procedia PDF Downloads 347
552 Characterising the Dynamic Friction in the Staking of Plain Spherical Bearings
Authors: Jacob Hatherell, Jason Matthews, Arnaud Marmier
Abstract:
Anvil staking is a cold-forming process used in the assembly of plain spherical bearings into a rod-end housing. The process ensures that the bearing's outer lip conforms to the chamfer in the matching rod end, producing a lightweight mechanical joint with sufficient strength to meet the push-out load requirement of the assembly. Finite element (FE) analysis is used extensively to predict the behaviour of metal flow in cold-forming processes in support of industrial manufacturing and product development. Ongoing research aims to validate FE models across a wide range of bearing and rod-end geometries by systematically isolating and understanding the uncertainties caused by variations in material properties, load-dependent friction coefficients and strain-rate sensitivity. The improved confidence in these models aims to eliminate the costly and time-consuming experimental trials in the introduction of new bearing designs. Previous literature has shown that friction coefficients do not remain constant during cold-forming operations; however, the understanding of this phenomenon varies significantly, and it is rarely implemented in FE models. In this paper, a new approach to evaluating the relationship between normal contact pressure and friction coefficient is outlined, using friction calibration charts generated via iterative FE models and ring compression tests. When compared to previous research, this new approach greatly improves the prediction of the formed geometry and the forming load during the staking operation. This paper also aims to standardise the FE approach to modelling ring compression tests and determining friction calibration charts.
Keywords: anvil staking, finite element analysis, friction coefficient, spherical plain bearing, ring compression tests
Procedia PDF Downloads 205
551 Changes in Textural Properties of Zucchini Slices Under Effects of Partial Predrying and Deep-Fat-Frying
Authors: E. Karacabey, Ş. G. Özçelik, M. S. Turan, C. Baltacıoğlu, E. Küçüköner
Abstract:
Changes in the textural properties of any food material during processing are significant for consumers' subsequent evaluation and directly affect their decisions; thus the textural properties of any food material should be assessed after processing. In the present study, zucchini slices were partially predried to control and reduce the product's final oil content. A conventional oven was used for the partial dehydration of the zucchini slices, and the subsequent frying was carried out in an industrial fryer with a temperature controller. This study focused on the effect of the predrying process on the textural properties of fried zucchini slices. Texture profile analysis was performed, with hardness, elasticity, chewiness and cohesiveness as the studied texture parameters. Temperature and weight loss were the monitored parameters of the predrying process, whereas in frying, oil temperature and process time were controlled. Optimization of the two successive processes was carried out by response surface methodology, one of the most commonly used statistical tools for process optimization. The models developed for each texture parameter predicted their values well as a function of the studied process conditions. Process optimization was performed against target values for each property, determined from directly fried zucchini slices that received the highest score in sensory evaluation. The results indicated that the textural properties of predried and then fried zucchini slices can be controlled by well-established equations. This is thought to be significant for the fried-food industry, where control of sensorial properties is crucial in shaping consumer perception, and texture-related properties are the leading ones. This project (113R015) has been supported by TUBITAK.
Keywords: optimization, response surface methodology, texture profile analysis, conventional oven, modelling
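The response-surface step can be sketched as an ordinary least-squares fit of a full second-order polynomial in two process factors; the factor names and all data points below are invented for illustration:

```python
import numpy as np

# Invented design points: x1 = predrying time (min), x2 = frying time (s),
# y = measured hardness from texture profile analysis (arbitrary units).
x1 = np.array([5, 5, 10, 10, 15, 15, 10, 5, 15])
x2 = np.array([60, 120, 60, 120, 60, 120, 90, 90, 90])
y = np.array([8.1, 9.4, 7.6, 8.8, 7.9, 9.7, 8.2, 8.6, 8.9])

# Full quadratic model: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients:", np.round(coef, 4))
```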
Procedia PDF Downloads 433
550 Comparison of EMG Normalization Techniques Recommended for Back Muscles Used in Ergonomics Research
Authors: Saif Al-Qaisi, Alif Saba
Abstract:
Normalization of electromyography (EMG) data in ergonomics research is a prerequisite for interpreting the data. Normalizing accounts for variability in the data due to differences in participants' physical characteristics, electrode placement protocols, time of day, and other nuisance factors. Typically, normalized data are reported as a percentage of the muscle's isometric maximum voluntary contraction (%MVC). Various MVC techniques have been recommended in the literature for normalizing the EMG activity of back muscles. This research tests and compares the recommended MVC techniques for three back muscles commonly used in ergonomics research: the lumbar erector spinae (LES), the latissimus dorsi (LD), and the thoracic erector spinae (TES). Six healthy males from a university population participated in this research. Five different MVC exercises were compared for each muscle using the Trigno wireless EMG system (Delsys Inc.). Since the LES and TES share similar functions in controlling trunk movements, their MVC exercises were the same: trunk extension at -60°, trunk extension at 0°, trunk extension while standing, hip extension, and the arch test. The MVC exercises identified in the literature for the LD were chest-supported shoulder extension, prone shoulder extension, lat pull-down, internal shoulder rotation, and abducted shoulder flexion. The maximum EMG signal was recorded during each MVC trial, and the averages were computed across participants. A one-way analysis of variance (ANOVA) was used to determine the effect of MVC technique on muscle activity, with post-hoc analyses performed using the Tukey test. The MVC technique effect was statistically significant for each of the muscles (p < 0.05); however, a larger sample of participants would be needed to detect significant differences in the Tukey tests. The arch test was associated with the highest EMG average for the LES, and it also resulted in the maximum EMG activity more often than the other techniques (three out of six participants). For the TES, trunk extension at 0° was associated with the largest EMG average and resulted in the maximum EMG activity most often (three out of six participants). For the LD, participants obtained their maximum EMG either from chest-supported shoulder extension (three out of six participants) or prone shoulder extension (three out of six participants); chest-supported shoulder extension, however, had a larger average than prone shoulder extension (0.263 and 0.240, respectively). Although all the aforementioned techniques were superior in their averages, they did not always elicit the maximum EMG activity. If an accurate estimate of the true MVC is desired, more than one technique may have to be performed. This research provides additional MVC techniques for each muscle that may elicit the maximum EMG activity.
Keywords: electromyography, maximum voluntary contraction, normalization, physical ergonomics
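Once a task recording and the muscle's MVC reference trial are available, the normalization itself is a short computation; a sketch using a moving-RMS envelope, with the window length and test signals chosen arbitrarily for illustration:

```python
import numpy as np

def rms_envelope(emg: np.ndarray, window: int = 250) -> np.ndarray:
    """Moving RMS of a raw EMG signal (window length in samples)."""
    padded = np.pad(emg**2, (window // 2, window - window // 2 - 1), mode="edge")
    return np.sqrt(np.convolve(padded, np.ones(window) / window, mode="valid"))

def normalize_to_mvc(task_emg: np.ndarray, mvc_emg: np.ndarray) -> np.ndarray:
    """Express task EMG amplitude as %MVC, using the peak RMS of the MVC trial."""
    mvc_peak = rms_envelope(mvc_emg).max()
    return 100.0 * rms_envelope(task_emg) / mvc_peak

rng = np.random.default_rng(2)
mvc_trial = rng.normal(0, 1.0, 5000)    # stand-in for an MVC recording
task_trial = rng.normal(0, 0.4, 5000)   # stand-in for a task recording
print(f"mean task activity: {normalize_to_mvc(task_trial, mvc_trial).mean():.1f} %MVC")
```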
Procedia PDF Downloads 193
549 Extending Theory of Planned Behavior to Modelling Chronic Patients' Acceptance of Health Information: An Information Overload Perspective
Authors: Shu-Lien Chou, Chung-Feng Liu
Abstract:
Self-management of chronic illnesses plays an important part in their treatment. However, the various kinds of health information (health education materials) that governments or healthcare institutions provide for patients may not achieve the expected outcome. One of the critical reasons affecting patients' intention to use such information could be their perceived information overload. This study proposes an extended Theory of Planned Behavior model that integrates perceived information overload as an additional construct to explore patients' intention to use health information for self-health management. The independent variables are attitude, subjective norm, perceived behavioral control and perceived information overload, while the dependent variable is the behavioral intention to use the health information. This cross-sectional study used a structured questionnaire for data collection, focusing on chronic patients with coronary artery disease (CAD), who are the potential users of the health information, in a medical center in Taiwan. Data were analyzed using descriptive statistics of the respondents' basic information, and a Partial Least Squares (PLS) structural equation model was used to assess reliability and construct validity and to test our hypotheses. A total of 110 patients were enrolled in this study, and 106 valid questionnaires were collected. The PLS analysis indicates that patients' perceived information overload is the most critical factor influencing behavioral intention. The subjective norm and perceived behavioral control constructs of TPB also had significant effects on patients' intention to use health information, whereas the attitude construct did not. This study demonstrates a comprehensive framework extending the TPB model with perceived information overload to predict patients' behavioral intention to use health information. We expect that the results of this study will provide useful insights for studying health information from the perspectives of academia, governments, and healthcare providers.
Keywords: chronic patients, health information, information overload, theory of planned behavior
Procedia PDF Downloads 436
548 Optimized Cropping Calendar and Land Suitability for Maize through GIS and Crop Modelling
Authors: Marilyn S. Painagan, Willie Jones B. Saliling
Abstract:
This paper reports an optimized cropping calendar and land suitability assessment for maize in North Cotabato, derived from modelling crop productivity over time and space. Using Quantum GIS, shapefiles of eight representative soil types and 0.3° x 0.3° climate grids were intersected to form thirty-two pedoclimatic zones within the boundaries of the province. Surveys were done to ascertain crop performance and phenological properties in the field, and based on these surveys, crop parameters were calibrated for a specific maize variety. Soil properties and climatic data (daily precipitation, maximum and minimum temperatures) from the pedoclimatic zones were loaded into the FAO AquaCrop water productivity model, along with the crop properties from the field surveys, to simulate yield from 1980 to 2010. The average yield per month was computed to identify the planting months with the highest and lowest probable yield in a year, assuming that all lands were planted with maize, and the yield attributes were visualized in the Quantum GIS environment. The study revealed that optimal cropping patterns vary across North Cotabato. The highest probable yield (8000 kg/ha) can be obtained when maize is planted in May or September (sandy clay-loam soils) in the northern part of the province, while the lowest probable yield (1000 kg/ha) is obtained when maize is planted in January, February or March (clay-loam soils), also in the northern part of the province. Yields are simulated on the basis of varieties currently planted by the farmers of North Cotabato. The resulting maps suggest where and when maize is most suitable for achieving high yields. There is a need to ground-truth and validate the cropping calendar in the field.
Keywords: aquacrop, quantum GIS, maize, cropping calendar, water productivity
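Selecting the optimal planting month from the simulated series reduces to averaging yields by planting month within each pedoclimatic zone and taking the maximum; a pandas sketch on invented AquaCrop-style output:

```python
import pandas as pd

# Invented AquaCrop-style output: simulated yield per pedoclimatic zone,
# planting month and simulation year (the study spans 1980-2010).
df = pd.DataFrame({
    "zone":  ["A", "A", "A", "A", "B", "B", "B", "B"],
    "month": [1, 5, 9, 1, 1, 5, 9, 5],
    "yield_kg_ha": [1800, 7600, 7900, 2100, 1200, 5400, 5100, 5600],
})

monthly = df.groupby(["zone", "month"])["yield_kg_ha"].mean()
best = monthly.groupby("zone").idxmax()   # (zone, month) with the highest mean yield
print(monthly)
print("best planting month per zone:", list(best))
```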
Procedia PDF Downloads 256
547 Role of Spatial Variability in the Service Life Prediction of Reinforced Concrete Bridges Affected by Corrosion
Authors: Omran M. Kenshel, Alan J. O'Connor
Abstract:
Estimating the service life of reinforced concrete (RC) bridge structures located in corrosive marine environments is of great importance to their owners/engineers. Traditionally, bridge owners/engineers have relied largely on subjective engineering judgment, e.g. visual inspection, in their estimation approach. However, because financial resources are often limited, rational calculation methods are needed to aid in making reliable and more accurate predictions of the service life of RC structures, so that funds can be directed to the bridges found to be most critical. The criticality of a structure can be considered either from the structural capacity (i.e. ultimate limit state) or from the serviceability viewpoint, whichever is adopted; this paper considers the service life of the structure only from the structural capacity viewpoint. Considering the great variability associated with the parameters involved in the estimation process, a probabilistic approach is most suitable. The probabilistic modelling adopted here used the Monte Carlo simulation technique to estimate the reliability (i.e. probability of failure) of the structure under consideration. The authors used their own experimental data for the correlation length (CL) of the most important deterioration parameters; the CL is a parameter of the correlation function (CF), which describes the spatial fluctuation of a given deterioration parameter. The CL data used here were produced by analyzing 45 chloride profiles obtained from a 30-year-old RC bridge located in a marine environment. The service life of the structure was predicted in terms of the load-carrying capacity of an RC bridge beam girder. The analysis showed that the influence of spatial variability is only evident if the reliability of the structure is governed by flexural failure rather than by shear failure.
Keywords: chloride-induced corrosion, Monte Carlo simulation, reinforced concrete, spatial variability
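The reliability estimate itself follows the usual Monte Carlo pattern: sample resistance and load effect, then count limit-state violations. A bare-bones sketch with assumed distributions (the paper's actual deterioration and spatial-correlation modelling is far richer):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
N = 1_000_000

# Assumed, illustrative distributions: girder flexural resistance and the
# applied load effect, both in kNm.
resistance = rng.lognormal(mean=np.log(3000.0), sigma=0.10, size=N)
load_effect = rng.normal(loc=2000.0, scale=300.0, size=N)

g = resistance - load_effect   # limit state function: failure when g < 0
pf = np.mean(g < 0.0)          # Monte Carlo estimate of probability of failure
beta = -norm.ppf(pf)           # corresponding reliability index
print(f"Pf ≈ {pf:.2e}, beta ≈ {beta:.2f}")
```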
Procedia PDF Downloads 473
546 Calculating Asphaltenes Precipitation Onset Pressure by Using Cardanol as Precipitation Inhibitor: A Strategy to Increment the Oil Well Production
Authors: Camilo A. Guerrero-Martin, Erik Montes Paez, Marcia C. K. Oliveira, Jonathan Campos, Elizabete F. Lucas
Abstract:
Asphaltene precipitation is considered a formation damage problem, which can reduce the oil recovery factor. It fouls piping and surface installations, causes serious flow assurance complications and reduces oil well production; therefore, researchers have taken an interest in chemical treatments to control this phenomenon. The aim of this paper is to assess the asphaltene precipitation onset of crude oils in the presence of cardanol by titrating the crude with n-heptane. Moreover, based on the results obtained at atmospheric pressure, the asphaltene precipitation onset pressures were calculated to predict asphaltene precipitation in the reservoir, using differential liberation and refractive index data of the oils. The influence of cardanol concentration on the asphaltene stabilization of three Brazilian crude oil samples (with similar API densities) was studied. Five formulations of cardanol in toluene were prepared (0, 3, 5, 10 and 15 m/m%) and added to the crude at a 2:98 ratio. The petroleum samples were characterized by API density, elemental analysis and differential liberation tests. The asphaltene precipitation onset (APO) was determined by titrating with n-heptane and monitoring with near-infrared (NIR) spectroscopy, and UV-Vis spectroscopy experiments were also performed to assess the precipitated asphaltene content. The asphaltene precipitation envelopes (APE) were also determined by numerical simulation (Multiflash). In addition, adequate artificial lift systems (ALS) for the oils were selected, based on the downhole well profile and a screening methodology. Finally, the oil flow rates were modelled by NODAL production system analysis in the PIPESIM software. The results of this study show that the asphaltene precipitation onsets of the crude oils were 2.2, 2.3 and 6.0 mL of n-heptane/g of oil. Cardanol was an effective inhibitor of asphaltene precipitation for the crude oils used in this study, since it displaces the precipitation pressure of the oil to lower values, indicating that cardanol can increase oil well productivity.
Keywords: asphaltenes, NODAL analysis production system, precipitation pressure onset, inhibitory molecule
Procedia PDF Downloads 176
545 Characterization of Transmembrane Proteins with Five Alpha-Helical Regions
Authors: Misty Attwood, Helgi Schioth
Abstract:
Transmembrane proteins are important components in many essential cell processes such as signal transduction, cell-cell signalling, transport of solutes, structural adhesion activities, and protein trafficking. Due to their involvement in such diverse critical activities, transmembrane proteins are implicated in different disease pathways and hence are the focus of intense interest for understanding their functional activities, their pathogenesis in disease, and their potential as pharmaceutical targets. Further, as the structure and function of proteins are correlated, investigating a group of proteins with the same tertiary structure, i.e., the same number of transmembrane regions, may give insight into their functional roles and potential as therapeutic targets. In this in silico bioinformatics analysis, we identify and comprehensively characterize the previously unstudied group of proteins with five transmembrane-spanning regions (5TM). We classify nearly 60 5TM proteins, of which 31 are members of ten families that contain two or more members, all predicted to contain the 5TM architecture. Furthermore, nine singlet proteins that contain the 5TM architecture without paralogues detected in humans were also identified, indicating the evolution of single unique proteins with the 5TM structure. Interestingly, more than half of these proteins function in localization activities through movement or tethering of cell components, and more than one-third are involved in transport activities, particularly in the mitochondria. Surprisingly, no receptor activity was identified within this group, in sharp contrast with other TM families. Three major 5TM families were identified: the Tweety family, which comprises pore-forming subunits of the swelling-dependent volume-regulated anion channel in astrocytes; the sideroflexin family, which acts as mitochondrial amino acid transporters; and the Yip1 domain family, engaged in vesicle budding and intra-Golgi transport. About 30% of the proteins have enhanced expression in the brain, liver, or testis. Importantly, 60% of these proteins are identified as cancer prognostic markers, being associated with the clinical outcomes of various tumour types, which indicates that further investigation into the function and expression of these proteins is important. This study provides the first comprehensive analysis of proteins with 5TM regions and details their unique characteristics and applications in pharmaceutical development.
Keywords: 5TM, cancer prognostic marker, drug targets, transmembrane protein
Procedia PDF Downloads 111
544 Validation of the Formula for Air Attenuation Coefficient for Acoustic Scale Models
Authors: Katarzyna Baruch, Agata Szelag, Aleksandra Majchrzak, Tadeusz Kamisinski
Abstract:
The methodology for measuring the sound absorption coefficient in scaled models is based on the ISO 354 standard. The measurement is realised indirectly: the coefficient is calculated from the reverberation time of an empty chamber as well as of the chamber with an inserted sample. It is crucial to maintain stable atmospheric conditions during both measurements; possible differences may be corrected based on the formulas for the atmospheric attenuation coefficient α given in ISO 9613-1. Model studies require scaling particular factors in compliance with specified characteristic numbers; for absorption coefficient measurements, these are, for example, the frequency range and the value of the attenuation coefficient m. Thanks to the capabilities of modern electroacoustic transducers, it is no longer a problem to scale the frequencies, which have to be proportionally higher. However, it may be problematic to reduce the values of the attenuation coefficient, which in practice is achieved by drying the air down to a defined relative humidity. Despite the change of frequency range and relative air humidity, the ISO 9613-1 standard still allows calculation of the correction for small differences in the atmospheric conditions in the chamber between measurements. The paper discusses a number of theoretical analyses and experimental measurements performed in order to obtain consistency between the values of the attenuation coefficient calculated from the formulas given in the standard and those obtained by measurement. The authors performed measurements of reverberation time in a chamber built at 1/8 scale, in the corresponding frequency range of 800 Hz - 40 kHz and at different values of relative air humidity (40% ± 5%). Based on the measurements, empirical values of the attenuation coefficient were calculated and compared with theoretical ones. In general, the values correspond with each other, but for high frequencies and low values of relative air humidity the differences are significant. These discrepancies may directly influence the measured values of the sound absorption coefficient and cause errors. Therefore, the authors made an effort to determine a correction minimizing the described inaccuracy.
Keywords: air absorption correction, attenuation coefficient, dimensional analysis, model study, scaled modelling
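For context, the link between reverberation time and the air attenuation coefficient m exploited here is the Sabine relation with the air-absorption term, in its standard room-acoustics form (not a formula quoted from the paper):

```latex
T_{60} = \frac{55.3\,V}{c\,(A + 4mV)}
\quad\Longrightarrow\quad
m = \frac{1}{4V}\left(\frac{55.3\,V}{c\,T_{60}} - A\right)
```

where V is the chamber volume, c the speed of sound, A the total absorption area and T_{60} the reverberation time; the right-hand expression recovers m from a measured reverberation time once A is known.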
Procedia PDF Downloads 421
543 Advancing Customer Service Management Platform: Case Study of Social Media Applications
Authors: Iseoluwa Bukunmi Kolawole, Omowunmi Precious Isreal
Abstract:
Social media has completely revolutionized the way communication took place even a decade ago. It makes use of computer-mediated technologies that help in the creation and sharing of information; social media may be defined as the production, consumption and exchange of information across platforms for social interaction. Social media has become a forum in which customers look for information about companies to do business with and request answers to questions about their products and services. Customer service may be defined as the process of ensuring customers' satisfaction by meeting and exceeding their wants; in delivering excellent customer service, knowing customers' expectations and where they are reaching out is important in meeting and exceeding those wants. Facebook is one of the most used social media platforms, alongside Twitter, Instagram, WhatsApp and LinkedIn. This indicates that customers are spending more time on social media platforms, which calls for improvements in customer service delivery on social media pages. Millions of people channel their issues, complaints, compliments and inquiries through social media. This study has been able to identify what social media customers want, their expectations, and how they want brands and companies to respond to them. The applied research methodology used in this paper was a mixed-methods approach. The authors used qualitative methods, gathering through interviews the critical views of experts on social media and customer relationship management, to analyse the impacts of social media on customer satisfaction. The authors also used quantitative methods, such as online surveys, to address issues at different stages and to gain insight into different aspects of the platforms, i.e., customers' and companies' perceptions of the effects of social media, thereby exploring and gaining a better understanding of how brands make use of social media as a customer relationship management tool. An exploratory research strategy was applied to analyse how companies need to create good customer support using social media in order to improve customer service delivery, customer retention and referrals. Many companies have therefore preferred social media platforms as a medium for handling customers' queries and ensuring their satisfaction, because social media tools are considered more transparent and effective in their operations when dealing with customer relationship management.
Keywords: brands, customer service, information, social media
Procedia PDF Downloads 268
542 Recreation and Environmental Quality of Tropical Wetlands: A Social Media Based Spatial Analysis
Authors: Michael Sinclair, Andrea Ghermandi, Sheela A. Moses, Joseph Sabu
Abstract:
Passively crowdsourced data, such as geotagged photographs from social media, represent an opportunistic source of location-based and time-specific behavioural data for ecosystem services analysis. Such data have innovative applications for environmental management and protection, which are replicable at wide spatial scales and in the context of both developed and developing countries. Here we test one such innovation, based on the analysis of the metadata of online geotagged photographs, to investigate the provision of recreational services by the entire network of wetland ecosystems in the state of Kerala, India. We estimate visitation to individual wetlands state-wide and extend, for the first time to a developing region, the emerging application of cultural ecosystem services modelling using data from social media. The impacts of restoring wetland areal extent and improving water quality are explored as a means to inform more sustainable management strategies. Findings show that improving water quality to a level suitable for the preservation of wildlife and fisheries could increase annual visits by 350,000, an increase of 13% in wetland visits state-wide, while restoring previously encroached wetland area could result in a 7% increase in annual visits, corresponding to 49,000 visitors, in the Ashtamudi and Vembanad lakes alone, two large coastal Ramsar wetlands in Kerala. We discuss how passive crowdsourcing of social media data has the potential to improve current ecosystem service analyses and environmental management practices, also in the context of developing countries.
Keywords: coastal wetlands, cultural ecosystem services, India, passive crowdsourcing, social media, wetland restoration
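Visitation from geotagged photographs is commonly aggregated as photo-user-days, i.e. unique (user, date) pairs per site; a pandas sketch of that aggregation on invented photo metadata:

```python
import pandas as pd

# Invented photo metadata: uploader, capture date, and the wetland whose
# boundary the geotag falls within.
photos = pd.DataFrame({
    "user":    ["u1", "u1", "u2", "u3", "u3", "u2"],
    "date":    ["2017-01-02", "2017-01-02", "2017-01-02",
                "2017-03-10", "2017-03-11", "2017-05-21"],
    "wetland": ["Vembanad", "Vembanad", "Vembanad",
                "Ashtamudi", "Ashtamudi", "Vembanad"],
})

# One photo-user-day per unique (user, date, wetland) triple.
pud = photos.drop_duplicates(["user", "date", "wetland"]).groupby("wetland").size()
print(pud)   # Ashtamudi: 2, Vembanad: 3
```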
Procedia PDF Downloads 156
541 Carbon Based Wearable Patch Devices for Real-Time Electrocardiography Monitoring
Authors: Hachul Jung, Ahee Kim, Sanghoon Lee, Dahye Kwon, Songwoo Yoon, Jinhee Moon
Abstract:
We fabricated a wearable patch device, including a novel patch-type flexible dry electrode based on carbon nanofibers (CNFs) and a silicone-based elastomer (MED 6215), for real-time ECG monitoring. There are many ways to make a flexible conductive polymer by mixing in metal or carbon-based nanoparticles. In this study, CNFs were selected as the conductive nanoparticles because carbon nanotubes (CNTs) are more difficult to disperse uniformly in the elastomer than CNFs, while silver nanowires are relatively costly and easily oxidized in air. The wearable patch is composed of two parts: a dry electrode part for recording biosignals and a sticky patch part for mounting on the skin. The dry electrode part was made by vortex mixing and baking in a prepared mold. To optimize electrical performance and dispersion uniformity, we developed a unique mixing and baking process. The sticky patch part was made by spin-coating a soft skin adhesive and then patterning and detaching it from a smooth-surface substrate. In this process, the attachment and detachment strengths of the sticky patch were measured and optimized using a monitoring system. The assembled patch is flexible, stretchable, easily mountable on the skin, and directly connectable to the system. To evaluate its electrical characteristics and ECG (electrocardiography) recording performance, the wearable patch was tested with varying CNF concentrations and dry electrode thicknesses. The results showed that CNF concentration and dry electrode thickness were important variables for obtaining high-quality ECG signals free of incidental distractions. A cytotoxicity test was conducted to prove biocompatibility, and a long-term wearing test showed no skin reactions such as itching or erythema. To minimize motion artifacts and line noise, we built a customized wireless, lightweight data acquisition system. ECG signals measured with this system are stable and were successfully monitored in real time. In summary, the fabricated wearable patch devices can readily be used for real-time ECG monitoring.
Keywords: carbon nanofibers, ECG monitoring, flexible dry electrode, wearable patch
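The abstract mentions a customized acquisition system built to minimize motion artifacts and line noise. As a hedged illustration only (the sampling rate, filter cutoffs, and mains frequency below are assumptions, not the authors' design), a typical digital clean-up of a raw ECG trace looks like this:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 250.0  # assumed sampling rate in Hz (not specified in the abstract)

def clean_ecg(raw, fs=FS):
    """Suppress baseline wander and mains interference in a raw ECG trace."""
    # 0.5-100 Hz band-pass removes baseline drift and out-of-band noise.
    b, a = butter(2, [0.5, 100.0], btype="band", fs=fs)
    x = filtfilt(b, a, raw)
    # Narrow notch at the mains frequency (50 Hz assumed; 60 Hz in some grids).
    bn, an = iirnotch(w0=50.0, Q=30.0, fs=fs)
    return filtfilt(bn, an, x)

# Synthetic example: 5 s of a crude cardiac component plus drift and hum.
t = np.arange(0, 5, 1 / FS)
raw = (np.sin(2 * np.pi * 1.2 * t)           # ~72 bpm fundamental
       + 0.5 * np.sin(2 * np.pi * 0.2 * t)   # baseline wander
       + 0.3 * np.sin(2 * np.pi * 50 * t))   # power-line interference
ecg = clean_ecg(raw)
```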
Procedia PDF Downloads 185
540 Energy System Analysis Using Data-Driven Modelling and Bayesian Methods
Authors: Paul Rowley, Adam Thirkill, Nick Doylend, Philip Leicester, Becky Gough
Abstract:
The dynamic performance of all energy generation technologies is impacted to varying degrees by the stochastic properties of the wider system within which the generation technology is located. This stochasticity can include the varying nature of ambient renewable energy resources such as wind or solar radiation, or unpredicted changes in energy demand which impact upon the operational behaviour of thermal generation technologies. An understanding of these stochastic impacts is especially important in contexts such as highly distributed (or embedded) generation, where the issues affecting the individual or aggregated performance of large numbers of relatively small generators must be understood, such as in ESCO projects. Probabilistic evaluation of monitored or simulated performance data is one technique which can provide insight into the dynamic performance characteristics of generating systems, both in a prognostic sense (such as the prediction of future performance at the project's design stage) and in a diagnostic sense (such as the real-time analysis of underperforming systems). In this work, we describe the development, application, and outcomes of a new approach to the acquisition of datasets suitable for use in subsequent performance and impact analysis (including the use of Bayesian approaches) for a number of distributed generation technologies. The application of the approach is illustrated using a number of case studies involving domestic and small commercial scale photovoltaic, solar thermal, and natural gas boiler installations, and the results show that the methodology offers significant advantages in terms of plant efficiency prediction and diagnosis, along with allied environmental and social impacts such as greenhouse gas emission reduction and fuel affordability.
Keywords: renewable energy, dynamic performance simulation, Bayesian analysis, distributed generation
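As a hedged illustration of the Bayesian flavour of such an analysis (the prior, likelihood, and data below are invented for the example; the paper's actual models are not described in the abstract), a conjugate normal-normal update of a belief about a plant's mean efficiency might look like:

```python
import numpy as np

# Hypothetical monitored daily efficiencies of a small boiler (fractions).
obs = np.array([0.82, 0.79, 0.85, 0.80, 0.78, 0.83, 0.81])

# Normal-normal conjugate model with known observation noise:
# prior belief about mean efficiency taken from design-stage specifications.
prior_mean, prior_var = 0.85, 0.02**2   # design estimate and its uncertainty
noise_var = 0.03**2                      # assumed day-to-day measurement noise

n = len(obs)
post_var = 1.0 / (1.0 / prior_var + n / noise_var)
post_mean = post_var * (prior_mean / prior_var + obs.sum() / noise_var)

print(f"posterior mean efficiency: {post_mean:.3f} "
      f"+/- {np.sqrt(post_var):.3f}")
# A posterior mean well below the design value flags underperformance and
# can trigger the diagnostic investigation described in the abstract.
```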
Procedia PDF Downloads 495
539 Algorithm for Predicting Cognitive Exertion and Cognitive Fatigue Using a Portable EEG Headset for Concussion Rehabilitation
Authors: Lou J. Pino, Mark Campbell, Matthew J. Kennedy, Ashleigh C. Kennedy
Abstract:
A concussion is complex and nuanced, with cognitive rest being a key component of recovery. Cognitive overexertion during rehabilitation from a concussion is associated with delayed recovery. However, daily living imposes cognitive demands that may be unavoidable and difficult to quantify. Therefore, a portable tool capable of alerting patients before cognitive overexertion occurs could allow patients to maintain their quality of life while preventing symptoms and recovery setbacks. EEG allows for a sensitive measure of cognitive exertion. Clinical 32-lead EEG headsets are not practical for day-to-day concussion rehabilitation management; however, commercially available and affordable portable EEG headsets now exist. These headsets can potentially be used to continuously monitor cognitive exertion during mental tasks and alert the wearer to overexertion, with the aim of preventing symptoms and speeding recovery. The objective of this study was to test an algorithm for predicting cognitive exertion from EEG data collected with a portable headset. EEG data were acquired from 10 participants (5 males, 5 females). Each participant wore a portable 4-channel EEG headband while completing 10 tasks: rest (eyes closed), rest (eyes open), logic puzzles at three increasing levels of difficulty, multiplication questions at three increasing levels of difficulty, rest (eyes open), and rest (eyes closed). After each task, the participant was asked to report their perceived level of cognitive exertion using the NASA Task Load Index (TLX). Each participant then completed a second session on a different day. A customized machine learning model was created for each participant using data from the first session, and the performance of each model was then tested using data from the second session. The mean correlation coefficient between TLX scores and predicted cognitive exertion was 0.75 ± 0.16. The results support the efficacy of the algorithm for predicting cognitive exertion and demonstrate that the algorithms developed in this study, used with portable EEG devices, have the potential to aid the concussion recovery process by monitoring and warning patients of cognitive overexertion. Preventing cognitive overexertion during recovery may reduce the number of symptoms a patient experiences and may help speed the recovery process.
Keywords: cognitive activity, EEG, machine learning, personalized recovery
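A minimal sketch of how such a per-participant model could be structured is shown below (the band-power features, ridge regressor, sampling rate, and synthetic data are all assumptions; the study's customized model is not described in the abstract):

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge

FS = 256  # assumed headset sampling rate in Hz

def band_powers(epoch, fs=FS):
    """Mean theta/alpha/beta band power across the 4 EEG channels."""
    f, psd = welch(epoch, fs=fs, axis=-1)
    bands = [(4, 8), (8, 13), (13, 30)]
    return np.array([psd[..., (f >= lo) & (f < hi)].mean()
                     for lo, hi in bands])

rng = np.random.default_rng(0)
# Hypothetical session-1 data: 10 tasks x 4 channels x 10 s of EEG,
# with one self-reported TLX score per task.
epochs_s1 = rng.standard_normal((10, 4, 10 * FS))
tlx_s1 = rng.uniform(10, 90, size=10)

X_train = np.stack([band_powers(e) for e in epochs_s1])
model = Ridge(alpha=1.0).fit(X_train, tlx_s1)

# Evaluate on a second session from the same participant, as in the study.
epochs_s2 = rng.standard_normal((10, 4, 10 * FS))
tlx_s2 = rng.uniform(10, 90, size=10)
X_test = np.stack([band_powers(e) for e in epochs_s2])
r, _ = pearsonr(model.predict(X_test), tlx_s2)
print(f"correlation with reported TLX: {r:.2f}")
```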
Procedia PDF Downloads 220
538 Economics of Sugandhakokila (Cinnamomum Glaucescens (Nees) Dury) in Dang District of Nepal: A Value Chain Perspective
Authors: Keshav Raj Acharya, Prabina Sharma
Abstract:
Sugandhakokila (Cinnamomum glaucescens Nees. Dury) is a large evergreen tree species native to Nepal, naturally confined mostly to the mid-hills of the Rapti Zone. The species has been prioritized for agro-technology development as well as for research and development by the Department of Plant Resources. The government of Nepal has banned the export of this species without processing, in order to encourage value addition within the country. The present study was carried out in Chillikot village of Dang district to assess the economic contribution of C. glaucescens to the local economy and to document the major conservation threats to this species. Participatory Rural Appraisal (PRA) tools such as household surveys, key informant interviews, and focus group discussions were used to collect the data. The present study reveals that the collection of C. glaucescens berries contributes about 1.7 million Nepalese rupees (NPR) annually to the local economy of 29 households in the study area. The average annual income of each family from the sale of berries was around NPR 67,165.38 (US$ 569.19), which constitutes about 53% of total household income. Six different value chain actors are involved in the C. glaucescens business. The maximum profit margin was taken by collectors, followed by producers, exporters, and processors; the margins of regional and village traders were the smallest. The total profit margin for producers was NPR 138.86/kg, and regional traders gained NPR 17/kg. However, producers' profit could be increased by a further NPR 8.00 per kg of berries through the initiation of community forest user groups and village cooperatives in the area. Open-access resource use, insect infestation of over-mature trees, and browsing by goats were identified as the major conservation threats to this species. Handing over national forest as community forest, linking producers with processors through an organized market channel, and replacing old trees through new plantation are recommended for the future.
Keywords: community forest, conservation threats, C. glaucescens, value chain analysis
Procedia PDF Downloads 140
537 Analytical Model of Multiphase Machines Under Electrical Faults: Application on Dual Stator Asynchronous Machine
Authors: Nacera Yassa, Abdelmalek Saidoune, Ghania Ouadfel, Hamza Houassine
Abstract:
The rapid advancement of electrical technologies has underscored the increasing importance of multiphase machines across various industrial sectors. These machines offer significant advantages in terms of efficiency, compactness, and reliability compared to their single-phase counterparts. However, early detection and diagnosis of electrical faults remain critical challenges for ensuring the durability and safety of these complex systems. This paper presents an advanced analytical model for multiphase machines, with a particular focus on dual stator asynchronous machines. The primary objective is to develop a robust diagnostic tool capable of effectively detecting and locating electrical faults in these machines, including short circuits, winding faults, and voltage imbalances. The proposed methodology relies on an analytical approach combining electrical machine theory, modelling of magnetic and electrical circuits, and advanced signal analysis techniques. By employing detailed analytical equations, the developed model accurately simulates the behavior of multiphase machines in the presence of electrical faults. The effectiveness of the proposed model is demonstrated through a series of case studies and numerical simulations. In particular, special attention is given to analyzing the dynamic behavior of machines under different types of faults, as well as to optimizing diagnostic and recovery strategies. The obtained results pave the way for new advancements in the field of multiphase machine diagnostics, with potential applications in sectors such as automotive, aerospace, and renewable energy. By providing precise and reliable tools for early fault detection, this research contributes to improving the reliability and durability of complex electrical systems while reducing maintenance and operating costs.
Keywords: faults, diagnosis, modelling, multiphase machine
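As one concrete example of the signal-level indicators used in this kind of diagnosis (a sketch under the assumption of a three-phase subsystem; the paper's own analytical model is not reproduced here), the negative-sequence component of the stator currents is a classic marker of winding asymmetries such as inter-turn short circuits:

```python
import numpy as np

A = np.exp(2j * np.pi / 3)  # 120-degree rotation operator

def sequence_components(ia, ib, ic):
    """Fortescue decomposition of three current phasors."""
    i0 = (ia + ib + ic) / 3                 # zero sequence
    i1 = (ia + A * ib + A**2 * ic) / 3      # positive sequence
    i2 = (ia + A**2 * ib + A * ic) / 3      # negative sequence
    return i0, i1, i2

# Healthy machine: balanced phasors -> negative sequence ~ 0.
ia = 10 * np.exp(1j * 0)
ib = 10 * np.exp(-2j * np.pi / 3)
ic = 10 * np.exp(2j * np.pi / 3)
print([abs(x) for x in sequence_components(ia, ib, ic)])

# Simulated inter-turn fault: phase "a" magnitude reduced by 15%.
_, i1, i2 = sequence_components(0.85 * ia, ib, ic)
print(f"negative/positive sequence ratio: {abs(i2) / abs(i1):.3f}")
# A ratio rising above a calibrated threshold flags a stator asymmetry.
```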
Procedia PDF Downloads 64
536 Count Data Regression Modeling: An Application to Spontaneous Abortion in India
Authors: Prashant Verma, Prafulla K. Swain, K. K. Singh, Mukti Khetan
Abstract:
Objective: In India, around 20,000 women die every year due to abortion-related complications. In the modelling of count variables, there is sometimes a preponderance of zero counts. This article concerns the estimation of various count regression models to predict the average number of spontaneous abortions among women in the Punjab state of India, and assesses the factors associated with the number of spontaneous abortions. Materials and methods: The study included 27,173 married women of Punjab, with data obtained from the DLHS-4 survey (2012-13). Poisson regression (PR), negative binomial (NB) regression, zero hurdle negative binomial (ZHNB), and zero-inflated negative binomial (ZINB) models were employed to predict the average number of spontaneous abortions and to identify its determinants. Results: Statistical comparisons among the four estimation methods revealed that the ZINB model provides the best prediction for the number of spontaneous abortions. Place of antenatal care (ANC), place of residence, total children born to a woman, and the woman's education and economic status were found to be the most significant factors affecting the occurrence of spontaneous abortion. Conclusions: The study offers a practical demonstration of techniques designed to handle count variables. Among the four models compared, the ZINB model provided the best fit and is recommended for predicting the number of spontaneous abortions. The study suggests that women receive institutional antenatal care to attain limited parity. It also advocates promoting higher education among women in Punjab, India.
Keywords: count data, spontaneous abortion, Poisson model, negative binomial model, zero hurdle negative binomial, zero-inflated negative binomial, regression
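A minimal sketch of fitting and comparing these count models on synthetic data is given below (the covariates, parameters, and data-generating process are invented for illustration; the DLHS-4 microdata are not reproduced here):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(42)
n = 5000

# Hypothetical covariates standing in for education and economic status.
X = sm.add_constant(rng.standard_normal((n, 2)))

# Zero-inflated negative binomial data: many structural zeros, as is
# typical for counts of spontaneous abortions.
never = rng.random(n) < 0.6                    # structural-zero group
mu = np.exp(X @ np.array([-0.5, 0.3, -0.2]))   # NB mean for the rest
y = np.where(never, 0, rng.negative_binomial(2, 2 / (2 + mu)))

poisson = sm.Poisson(y, X).fit(disp=False)
negbin = sm.NegativeBinomial(y, X).fit(disp=False)
zinb = ZeroInflatedNegativeBinomialP(y, X, exog_infl=X).fit(disp=False,
                                                            maxiter=200)

# Lower AIC indicates a better fit; with excess zeros the ZINB model
# typically wins, mirroring the paper's model comparison.
for name, m in [("Poisson", poisson), ("NB", negbin), ("ZINB", zinb)]:
    print(f"{name:8s} AIC = {m.aic:,.1f}")
```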
Procedia PDF Downloads 155