Search results for: electrical machine
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4592

212 An Efficient Process Analysis and Control Method for Tire Mixing Operation

Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park

Abstract:

Since the tire production process is very complicated, company-wide management of it is difficult and requires considerable capital and labor. Productivity should therefore be enhanced and kept competitive by developing and applying effective production plans. Among the major processes of tire manufacturing, which consist of mixing, component preparation, building, and curing, the mixing process is an essential step because the main component of a tire, called the compound, is formed there. The compound, a rubber synthesis with various characteristics, plays its own role in the finished tire. Scheduling the tire mixing process resembles the flexible job shop scheduling problem (FJSSP) because each kind of compound has its own order of operations, and a set of alternative machines can process each operation. In addition, the setup time required between operations may differ due to the alteration of additives. In other words, each operation of the mixing process requires a different setup time depending on the previous one; this feature, called sequence-dependent setup time (SDST), is a very important issue in traditional scheduling problems such as the FJSSP. Despite its importance, however, few research works deal with the tire mixing process. In this paper, we therefore consider the scheduling problem for the tire mixing process and propose an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all required jobs. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine-allocation information for each job operation, and a method for generating a tire mixing schedule from a given particle.
At each iteration, the coordinates and velocities of the particles are updated, and the current solution is compared with the new solution. This procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed integer linear programming (MILP) model developed in previous research. As a performance measure, we define an error rate that evaluates the difference between the two solutions. We show that the PSO algorithm proposed in this paper outperforms the MILP model in both effectiveness and efficiency. As future work, we plan to consider scheduling problems in other processes, such as building and curing. We can also extend the current work by considering other performance measures, such as weighted makespan, or processing times affected by aging or learning effects.
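The core of such an approach is decoding a real-valued particle into a feasible schedule with machine alternatives and sequence-dependent setups. The sketch below is an illustrative reconstruction, not the authors' scheme: job data, setup times, and the random-restart stand-in for the PSO velocity update are all invented.

```python
import random

# operations[i] = (job_id, {machine: processing_time}); data is hypothetical
operations = [
    (0, {0: 3, 1: 4}),
    (0, {0: 2, 1: 2}),
    (1, {0: 4, 1: 3}),
    (1, {1: 5}),
]
# sequence-dependent setup time on a machine: setup[(prev_job, next_job)]
setup = {(0, 1): 2, (1, 0): 2, (0, 0): 0, (1, 1): 0}

def decode(seq_keys, mach_keys):
    """Turn a particle (two real-valued vectors) into a schedule makespan."""
    order = sorted(range(len(operations)), key=lambda i: seq_keys[i])
    machine_ready = {}      # machine -> time it becomes free
    machine_last_job = {}   # machine -> job of the last operation it ran
    job_ready = {}          # job -> completion time of its previous operation
    for i in order:
        job, alts = operations[i]
        # map the continuous key onto one of the alternative machines
        choices = sorted(alts)
        m = choices[int(mach_keys[i] * len(choices)) % len(choices)]
        s = setup.get((machine_last_job.get(m, job), job), 0)
        start = max(machine_ready.get(m, 0) + s, job_ready.get(job, 0))
        finish = start + alts[m]
        machine_ready[m], machine_last_job[m] = finish, job
        job_ready[job] = finish
    return max(job_ready.values())

random.seed(1)
n = len(operations)
best = min(
    decode([random.random() for _ in range(n)],
           [random.random() for _ in range(n)])
    for _ in range(200)
)
print(best)
```

A full PSO would evolve the key vectors via velocity updates instead of resampling them, but the particle-to-schedule decoding step is the same.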

Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process

Procedia PDF Downloads 237
211 The Effectiveness of Exercise Therapy on Decreasing Pain in Women with Temporomandibular Disorders and How Their Brains Respond: A Pilot Randomized Controlled Trial

Authors: Zenah Gheblawi, Susan Armijo-Olivo, Elisa B. Pelai, Vaishali Sharma, Musa Tashfeen, Angela Fung, Francisca Claveria

Abstract:

Due to physiological differences between men and women, pain is experienced differently by the two sexes. Chronic pain disorders, notably temporomandibular disorders (TMDs), disproportionately affect women compared with men, both in diagnosis rates and in pain severity. TMDs are a type of musculoskeletal disorder affecting the masticatory muscles (including the temporalis) and the temporomandibular joints, causing considerable orofacial pain that is often referred to the neck and back. Therapeutic methods are scarce and are not TMD-centered; the latest research suggests that subjects with chronic musculoskeletal pain disorders have abnormal alterations in the grey matter of their brains, which can be remedied with exercise, thereby decreasing the pain experienced. The aim of the study is to investigate the effects of exercise therapy in female TMD patients experiencing chronic jaw pain and to assess the consequential effects on brain activity. In a randomized controlled trial, the effectiveness of an exercise program to improve brain alterations and clinical outcomes in women with TMD pain will be tested. Women with chronic TMD pain will be randomized to either an intervention arm or a placebo control group. Women in the intervention arm will receive 8 weeks of progressive exercise of motor control training using visual feedback (MCTF) of the cervical muscles, twice per week. Women in the placebo arm will receive innocuous transcutaneous electrical nerve stimulation for 8 weeks. The primary outcomes will be changes in 1) pain, measured with the Visual Analogue Scale, and 2) brain structure and networks, measured by fractional anisotropy (brain structure) and the blood-oxygen-level-dependent signal (brain networks). Outcomes will be measured at baseline, after 8 weeks of treatment, and 4 months after treatment ends, and will determine the effectiveness of MCTF in managing TMD through improved clinical outcomes.
Results will directly inform and guide clinicians in prescribing more effective interventions for women with TMD. This study is underway, and no results are available at this point. The results will have substantial implications for understanding the scope of the brain's plasticity with regard to pain, and how it can be used to improve the treatment and pain of women with TMD and, more generally, other musculoskeletal disorders.

Keywords: exercise therapy, musculoskeletal disorders, physical therapy, rehabilitation, temporomandibular disorders

Procedia PDF Downloads 264
210 Formulation of Lipid-Based Tableted Spray-Congealed Microparticles for Zero Order Release of Vildagliptin

Authors: Hend Ben Tkhayat, Khaled Al Zahabi, Husam Younes

Abstract:

Introduction: Vildagliptin (VG), a dipeptidyl peptidase-4 (DPP-4) inhibitor, is an established agent for the treatment of type 2 diabetes. VG works by enhancing and prolonging the activity of incretins, which improves insulin secretion and decreases glucagon release, thereby lowering the blood glucose level. It is usually used in combination with other drug classes, such as insulin sensitizers or metformin. VG is currently marketed only as an immediate-release tablet administered twice daily. In this project, we aim to formulate tableted lipid microparticles of VG with an extended-release, zero-order profile that could be administered once daily, ensuring the patient's convenience. Method: The spray-congealing technique was used to prepare the VG microparticles. Compritol® was heated to 10 °C above its melting point, and VG was dispersed in the molten carrier using a homogenizer (IKA T25, USA) set at 13000 rpm. The VG dispersed in molten Compritol® was added dropwise to molten Gelucire® 50/13 and PEG® (400, 6000, and 35000) in different ratios under manual stirring. The molten mixture was homogenized, and the Carbomer® was added. The melt was pumped through the two-fluid nozzle of a Buchi® Spray-Congealer (Buchi B-290, Switzerland) using a pump drive (Masterflex, USA) connected to silicone tubing wrapped with silicone heating tape heated to the same temperature as the pumped mix. The physicochemical properties of the produced VG-loaded microparticles were characterized using a Mastersizer, Scanning Electron Microscopy (SEM), Differential Scanning Calorimetry (DSC), and X-Ray Diffraction (XRD). The VG microparticles were then pressed into tablets using a single-punch tablet machine (YDP-12, Minhua Pharmaceutical Co., China), and an in vitro dissolution study was conducted using an Agilent Dissolution Tester (Agilent, USA). The dissolution test was carried out at 37±0.5 °C for 24 hours in three different dissolution media and time phases.
The quantitative analysis of VG in the samples was performed using a validated High-Performance Liquid Chromatography (HPLC-UV) method. Results: The microparticles were spherical in shape with a narrow size distribution and smooth surface. DSC and XRD analyses confirmed that the crystallinity of VG was lost after incorporation into the amorphous polymers. The total yields of the different formulas were between 70% and 80%. The VG content in the microparticles was found to be between 99% and 106%. The in vitro dissolution study showed that VG was released from the tableted particles in a controlled fashion. Adjusting the hydrophilic/hydrophobic ratio of the excipients, their concentrations, and the molecular weights of the carriers resulted in tablets with zero-order kinetics. Gelucire® 50/13, a hydrophilic polymer, was characterized by a time-dependent profile with an important burst effect, which was decreased by adding Compritol® as a lipophilic carrier to retard the release of VG, which is highly soluble in water. PEG® (400, 6000, and 35000) were used for their gelling effect, which led to a constant delivery rate and a zero-order profile. Conclusion: Tableted spray-congealed lipid microparticles for extended release of VG were successfully prepared, and a zero-order profile was achieved.
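A zero-order profile means the cumulative amount released grows linearly with time, Q(t) = k·t. A minimal sketch of how such a profile can be checked by fitting the release data to a line through the origin; the time points and release percentages below are illustrative, not the study's measurements.

```python
def zero_order_rate(times_h, released_pct):
    """Least-squares slope k for Q = k*t (zero-order kinetics)."""
    num = sum(t * q for t, q in zip(times_h, released_pct))
    den = sum(t * t for t in times_h)
    return num / den

def r_squared(times_h, released_pct, k):
    """Coefficient of determination for the fitted zero-order model."""
    mean_q = sum(released_pct) / len(released_pct)
    ss_res = sum((q - k * t) ** 2 for t, q in zip(times_h, released_pct))
    ss_tot = sum((q - mean_q) ** 2 for q in released_pct)
    return 1 - ss_res / ss_tot

times = [2, 4, 8, 12, 24]          # hours (illustrative)
released = [8, 17, 33, 49, 98]     # % cumulative release (illustrative)
k = zero_order_rate(times, released)
print(f"k = {k:.2f} %/h, R^2 = {r_squared(times, released, k):.3f}")
```

An R² close to 1 for the through-origin line supports zero-order behavior; a pronounced early burst would show up as systematic positive residuals at early time points.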

Keywords: vildagliptin, spray congealing, microparticles, controlled release

Procedia PDF Downloads 100
209 Analysis of Metamaterial Permeability on the Performance of Loosely Coupled Coils

Authors: Icaro V. Soares, Guilherme L. F. Brandao, Ursula D. C. Resende, Glaucio L. Siqueira

Abstract:

Electrical energy can be transmitted wirelessly through resonant coupled coils that operate in the near-field region. Because the field in this region is evanescent, the efficiency of Resonant Wireless Power Transfer (RWPT) systems decreases with the inverse cube of the distance between the transmitter and receiver coils. Commercially available RWPT systems are therefore restricted to short- and mid-range applications, in which the distance between the coils is less than or equal to the coil size. An alternative to overcome this limitation is to apply metamaterial structures to enhance the coupling between the coils, thus reducing the field decay over the distance between them. Metamaterials can be conceived as composite materials with a periodic or non-periodic structure whose unconventional electromagnetic behaviour is due to their unit-cell disposition and chemical composition. This kind of material has been used in frequency-selective surfaces, invisibility cloaks, and leaky-wave antennas, among other applications. For RWPT, however, it is mainly applied in superlenses, which are lenses that can overcome the diffraction limit and are made of left-handed media, that is, media with negative magnetic permeability and electric permittivity. As RWPT systems usually operate at wavelengths of hundreds of meters, the metamaterial unit-cell size is much smaller than the wavelength. In this case, the electric and magnetic fields are decoupled; therefore, the double-negative condition for superlenses is not required, and a negative magnetic permeability is enough to produce an artificial magnetic medium. In this work, the influence of the magnetic permeability of a metamaterial slab inserted between two loosely coupled coils is studied in order to find the condition that leads to the maximum transmission efficiency.
The metamaterial used is formed by a subwavelength unit cell that consists of a capacitor-loaded split ring with an inner spiral, designed and optimized using the software Computer Simulation Technology. The unit-cell permeability is experimentally characterized by the ratio of the transmission parameters between the coils measured with and without the metamaterial slab. Early measurement results show that the transmission coefficient at the resonant frequency after the inclusion of the metamaterial is about three times higher than with the two coils alone, which confirms the enhancement that this structure brings to RWPT systems.
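The inverse-cube efficiency roll-off mentioned above can be made concrete with a standard textbook model (not the paper's own analysis): the coupling coefficient k falls with the cube of distance, and the maximum link efficiency follows the figure-of-merit expression η = x / (1 + √(1 + x))² with x = k²·Q₁·Q₂. All numerical values below are assumed for illustration.

```python
import math

def coupling(d, k0=0.2, d0=0.1):
    """Coupling coefficient with inverse-cube decay beyond reference d0 (m)."""
    return k0 * (d0 / d) ** 3

def max_efficiency(k, q1=100.0, q2=100.0):
    """Maximum link efficiency for matched resonant coils (figure of merit)."""
    x = (k ** 2) * q1 * q2
    return x / (1 + math.sqrt(1 + x)) ** 2

for d in (0.1, 0.2, 0.3):
    print(f"d = {d:.1f} m  ->  eta = {max_efficiency(coupling(d)):.3f}")
```

Doubling the distance divides k by eight, which is why a slab that restores even part of the coupling yields a large efficiency gain.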

Keywords: electromagnetic lens, loosely coupled coils, magnetic permeability, metamaterials, resonant wireless power transfer, subwavelength unit cells

Procedia PDF Downloads 124
208 Polypeptide Modified Carbon Nanotubes – Mediated GFP Gene Transfection for H1299 Cells and Toxicity Assessment

Authors: Pei-Ying Lo, Jing-Hao Ciou, Kai-Cheng Yang, Jia-Huei Zheng, Shih-Hsiang Huang, Kuen-Chan Lee, Er-Chieh Cho

Abstract:

The insolubility of as-produced CNTs in organic solvents and aqueous solutions has imposed limitations on their use. Therefore, how to debundle carbon nanotubes and modify them for further use is an important issue. There are several methods for dispersing CNTs in water using covalent attachment of hydrophilic groups to the surface of the tubes. These methods, however, alter the electronic structure of the nanotubes by disrupting the network of sp2-hybridized carbons. In order to keep the nanotubes' intrinsic mechanical and electrical properties intact, non-covalent interactions are increasingly being explored as an alternative route for dispersion. Apart from conventional surfactants such as sodium dodecylsulfate (SDS) or sodium dodecylbenzenesulfonate (SDBS), which are highly effective in dispersing CNTs, biopolymers have received much attention as dispersing agents due to the anticipated biocompatibility of the dispersed CNTs. Moreover, the pyrenyl group is known to interact strongly with the basal plane of graphene via π-stacking. In this study, the synthesis of highly re-dispersible biopolymers, pyrene-modified poly-L-lysine (PBPL) and pyrene-modified poly(D-Glu, D-Lys) (PGLP), is reported. To provide evidence of the safety of the PBPL/CNT and PGLP/CNT materials used in this study, H1299 and HCT116 cells were incubated with the PBPL/CNT and PGLP/CNT materials for toxicity analysis using MTS assays. The results indicated no significant cellular toxicity in H1299 or HCT116 cells. Furthermore, the fluorescence marker fluorescein isothiocyanate (FITC) was added to the PBPL and PGLP dispersions. The fluorescence measurements showed that the chemical functionalisation of the PBPL/CNT and PGLP/CNT conjugates with the fluorescence marker was successful. The fluorescent PBPL/CNT and PGLP/CNT conjugates could find application in medical imaging. In the next step, the GFP gene was immobilized onto the PBPL/CNT conjugates through electrostatic interaction.
GFP-transfected cells that emitted fluorescence were imaged and counted under a fluorescence microscope. Due to the unique biocompatibility of the PBPL-modified CNTs, the GFP gene could be transported into H1299 cells without using antibodies. The applicability of such soluble and chemically functionalised polypeptide/CNT conjugates in biomedicine is currently being investigated. We expect that this polypeptide/CNT system will be a safe and multi-functional nanomedical delivery platform and will contribute to future medical therapy.

Keywords: carbon nanotube, nanotoxicology, GFP transfection, polypeptide/CNT hybrids

Procedia PDF Downloads 320
207 Water Quality in Buyuk Menderes Graben, Turkey

Authors: Tugbanur Ozen Balaban, Gultekin Tarcan, Unsal Gemici, Mumtaz Colak, I. Hakki Karamanderesi

Abstract:

The Buyuk Menderes Graben is located in Western Anatolia (Turkey). The graben has become the largest industrial and agricultural area in the region, with a total population exceeding 3,000,000. There are two big cities within the study area, Aydın in the west and Denizli in the east. The study area is very rich in both cold ground waters and thermal waters, and electricity production from its geothermal potential has become very popular in the last decades. The Buyuk Menderes Graben is a tectonically active extensional region undergoing a north-south extensional tectonic regime, which commenced at the latest during the Early-Middle Miocene. The basement of the study area consists of Menderes Massif rocks, made up of high- to low-grade metamorphics, which act as an aquifer for both cold ground waters and thermal waters depending on the location. Neogene terrestrial sediments, mainly alluvial-fan deposits in different facies, unconformably cover the basement rocks; they have very low permeability and may locally act as cap rocks for the geothermal systems. The youngest unit is the Quaternary alluvium, consisting of Holocene alluvial deposits, which forms the shallow regional aquifer in the study area. All the waters are of meteoric origin and reflect shallow or deep circulation according to their ¹⁸O, ²H, and ³H contents. Meteoric waters move to deep zones through the fractured system and rise to the surface along the faults. Water samples (drilling-well, spring, and surface waters) and local seawater were collected between 2010 and 2012. Geochemical modeling was performed to calculate the distribution of aqueous species and exchange processes using the PHREEQCi speciation code. The geochemical analyses show that the cold ground water types evolve from Ca–Mg–HCO3 to Na–Cl–SO4, and the geothermal aquifer waters in Aydın reflect the Na-Cl-HCO3 water type. The water types of Denizli are Ca-Mg-HCO3 and Ca-Mg-HCO3-SO4, while the thermal waters generally reflect the Na-HCO3-SO4 type.
The B/Cl ratios increase from east to west with the proportion of seawater introduced into the freshwater aquifers and geothermal reservoirs. The concentrations of some elements (As, B, Fe, and Ni) are higher than the tolerance limits of the drinking water standard of Turkey (TS 266) and international drinking water standards (WHO, FAO, etc.).
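The screening step described above amounts to comparing measured concentrations against guideline values. A small sketch of that comparison; the sample concentrations are invented, and the limits shown are WHO guideline values as recalled by the editor and should be verified against the cited standards (TS 266, WHO) before use.

```python
# Illustrative guideline values in mg/L (verify against WHO / TS 266);
# the Fe value is an aesthetic, not health-based, threshold.
WHO_LIMITS_MG_L = {"As": 0.01, "B": 2.4, "Fe": 0.3, "Ni": 0.07}

def exceedances(sample_mg_l, limits=WHO_LIMITS_MG_L):
    """Return the elements whose concentration exceeds the guideline value."""
    return {el: c for el, c in sample_mg_l.items()
            if el in limits and c > limits[el]}

sample = {"As": 0.045, "B": 6.1, "Fe": 0.12, "Ni": 0.02}  # hypothetical well
print(exceedances(sample))   # As and B exceed the limits in this example
```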

Keywords: Buyuk Menderes, isotope chemistry, geochemical modelling, water quality

Procedia PDF Downloads 509
206 Automation of Finite Element Simulations for the Design Space Exploration and Optimization of Type IV Pressure Vessel

Authors: Weili Jiang, Simon Cadavid Lopera, Klaus Drechsler

Abstract:

Fuel cell vehicles have become the most competitive solution for the transportation sector in the hydrogen economy. The Type IV pressure vessel is currently the most popular and most widely developed technology for on-board storage, based on its high reliability and relatively low cost. Due to the stringent requirements on mechanical performance, the pressure vessel requires a great amount of composite material, a major cost driver for hydrogen tanks. Evidently, the optimization of the composite layup design shows great potential for reducing the overall material usage, yet it requires a comprehensive understanding of the underlying mechanisms as well as of the influence of different design parameters on mechanical performance. Given the types of materials and manufacturing processes by which Type IV pressure vessels are made, their design and optimization are a nuanced subject. The manifold of possible stacking sequences and fiber-orientation variations has an outstanding effect on vessel strength due to the anisotropic properties of carbon fiber composites, which makes the design space high dimensional. Each variation of the design parameters requires computational resources. Using finite element analysis to evaluate different designs is the most common method; however, the modeling, setup, and simulation process can be very time consuming and result in a high computational cost. For this reason, it is necessary to build a reliable automation scheme to set up and analyze the diverse composite layups. In this research, the simulation process for different tank designs with respect to various parameters is conducted and automated in the commercial finite element analysis framework Abaqus. Notably, the model of the composite overwrap is generated automatically using the Abaqus-Python scripting interface.
The prediction of the winding angle of each layer and the corresponding thickness variation in the dome region is the most crucial step of the modeling; it is calculated and implemented using analytical methods. Subsequently, the different composite layups are simulated as axisymmetric models to reduce the computational complexity and calculation time. Finally, the results are evaluated and compared with respect to the ultimate tank strength. By automatically modeling, evaluating, and comparing various composite layups, this system is applicable to the optimization of tank structures. As mentioned above, the mechanical performance of the pressure vessel is highly dependent on the composite layup, which requires a large number of simulations. Automating the simulation process therefore provides a rapid way to compare the various designs and indicate the optimum one. Moreover, this automation process can also be used to create a data bank of layups and corresponding mechanical properties, with few preliminary configuration steps, for further case analysis. Machine learning could subsequently be used to identify the optimum directly from the data pool without running additional simulations.
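The overall automation loop has a simple shape: enumerate candidate layups, evaluate each one, keep the best. The sketch below is an assumed structure, not the authors' code; in the real workflow the evaluate() step would build and run an axisymmetric Abaqus model through its Python scripting interface, whereas here a placeholder scoring function stands in so the loop is runnable, and the candidate angles are invented.

```python
import itertools

ANGLES = (15, 30, 54.7, 90)    # candidate winding angles in degrees (assumed)
N_LAYERS = 3                    # layers per candidate stack

def evaluate(stack):
    """Placeholder for an FE simulation: score a stacking sequence.

    A real implementation would write the layup into an axisymmetric
    Abaqus model, submit the job, and read back the burst pressure."""
    # toy heuristic only: reward stacks whose mean angle is near 60 degrees
    return -abs(sum(stack) / len(stack) - 60.0)

def best_layup():
    """Exhaustively sweep the design space and return the best stack."""
    candidates = itertools.product(ANGLES, repeat=N_LAYERS)
    return max(candidates, key=evaluate)

print(best_layup())
```

The same loop, with evaluate() swapped for a real solver call, also populates the layup-versus-strength data bank mentioned above, since every (stack, score) pair can simply be logged as it is computed.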

Keywords: type IV pressure vessels, carbon composites, finite element analysis, automation of simulation process

Procedia PDF Downloads 93
205 Creating Renewable Energy Investment Portfolio in Turkey between 2018-2023: An Approach on Multi-Objective Linear Programming Method

Authors: Berker Bayazit, Gulgun Kayakutlu

Abstract:

The World Energy Outlook shows that energy markets will change substantially within the next few decades. First, the action plans determined under COP21 and the goal of CO₂ emission reduction already affect the policies of countries. Second, rapid technological developments in the field of renewable energy will influence the medium- and long-term energy generation and consumption behaviors of countries. Furthermore, the share of electricity in global energy consumption is expected to be as high as 40 percent in 2040. Electric vehicles, heat pumps, new electronic devices, and digital improvements will be the outstanding technologies, and such innovations will drive the market modifications. In order to meet the rapidly increasing electricity demand caused by these technologies, countries have to make new investments in electricity production, transmission, and distribution. In particular, the electricity generation mix becomes vital both for preventing CO₂ emissions and for reducing power prices. The majority of research and development investments are made in the field of electricity generation. Hence, primary-source diversity and source planning in electricity generation are crucial for improving citizens' quality of life. Approaches that consider CO₂ emissions and the total cost of generation are necessary but not sufficient to evaluate and construct the generation mix. On the other hand, employment and positive contributions to macroeconomic values are important factors that have to be taken into consideration. This study aims to constitute new investments in renewable energies (solar, wind, geothermal, biogas, and hydropower) between 2018 and 2023 under four different goals. Therefore, a multi-objective programming model is proposed to optimize the goals of minimizing CO₂ emissions, the investment amount, and the electricity sales price while maximizing total employment and the positive contribution to the current-account deficit.
In order to avoid user preference among the goals, Dinkelbach's algorithm and Guzel's approach have been combined. The achievements are discussed in comparison with current policies. Our study shows that new policies such as large capacity allotments are debatable, although the obligation for local production is positive. Improvements in grid infrastructure and redesigned support for biogas and geothermal can be recommended.
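Dinkelbach's algorithm, which the abstract combines with Guzel's approach, solves fractional objectives f(x)/g(x) by repeatedly solving the parametric problem max f(x) − λ·g(x) and updating λ. A toy sketch over a discrete candidate set (here, jobs created per unit of investment cost); the portfolios and their coefficients are invented for illustration and are not the study's data.

```python
# portfolio -> (jobs created, investment cost), in arbitrary units (assumed)
portfolios = {
    "solar-heavy": (120.0, 40.0),
    "wind-heavy":  (150.0, 60.0),
    "geo+biogas":  (90.0, 25.0),
    "hydro-heavy": (80.0, 30.0),
}

def dinkelbach(options, tol=1e-9):
    """Maximize f(x)/g(x) over a finite set via Dinkelbach iterations."""
    lam = 0.0
    while True:
        # solve the parametric subproblem: max_x f(x) - lam * g(x)
        x = max(options, key=lambda p: options[p][0] - lam * options[p][1])
        f, g = options[x]
        if abs(f - lam * g) < tol:   # optimality condition F(lam) = 0
            return x, lam
        lam = f / g                  # update the ratio estimate

best, ratio = dinkelbach(portfolios)
print(best, round(ratio, 3))
```

In the real model the inner argmax is a linear program rather than a dictionary lookup, but the outer iteration on λ is the same.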

Keywords: energy generation policies, multi-objective linear programming, portfolio planning, renewable energy

Procedia PDF Downloads 217
204 Partial M-Sequence Code Families Applied in Spectral Amplitude Coding Fiber-Optic Code-Division Multiple-Access Networks

Authors: Shin-Pin Tseng

Abstract:

Nowadays, numerous spectral amplitude coding (SAC) fiber-optic code-division multiple-access (FO-CDMA) techniques are appealing because they can provide moderate security and relieve the effects of multiuser interference (MUI). Nonetheless, the performance of previous networks is degraded by a fixed in-phase cross-correlation (IPCC) value. To address this problem, a new SAC FO-CDMA network using a partial M-sequence (PMS) code is presented in this study. Because the proposed PMS code originates from the M-sequence code, a system using the PMS code can effectively suppress the effects of MUI. In addition, a two-code keying (TCK) scheme can be applied in the proposed SAC FO-CDMA network to enhance the overall network performance. For system flexibility, simple optical encoders/decoders (codecs) using fiber Bragg gratings (FBGs) were also developed. First, we constructed a diagram of the SAC FO-CDMA network, including (N/2-1) optical transmitters, (N/2-1) optical receivers, and one N×N star coupler that broadcasts the transmitted optical signals to the input port of each optical receiver; the parameter N of the PMS code is the code length. In addition, the proposed SAC network uses superluminescent diodes (SLDs) as light sources, which saves considerable system cost compared with other FO-CDMA methods. Each optical transmitter is composed of an SLD, one optical switch, and two optical encoders according to the assigned PMS codewords. On the other hand, each optical receiver includes a 1×2 splitter, two optical decoders, and one balanced photodiode for mitigating the effect of MUI. In order to simplify the analysis, some assumptions were made. First, the unpolarized SLD has a flat power spectral density (PSD). Second, the received optical power at the input port of each optical receiver is the same.
Third, all photodiodes in the proposed network have the same electrical properties. Fourth, '1' and '0' are transmitted with equal probability. Subsequently, taking into account phase-induced intensity noise (PIIN) and thermal noise, the corresponding performance was evaluated and compared with that of previous SAC FO-CDMA networks. The numerical results show that the proposed network improves performance by about 25% compared with networks using other codes at a BER of 10⁻⁹. This is because the effect of PIIN is effectively mitigated and the received power is doubled. As a result, the SAC FO-CDMA network using PMS codes is a candidate for next-generation optical network applications.
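The correlation property these codes rely on can be illustrated with generic M-sequence math (this is standard theory, not the authors' PMS construction): an M-sequence of length N = 2^m − 1 is generated by a linear-feedback shift register, and its distinct cyclic shifts, used as 0/1 codewords, share a fixed in-phase cross-correlation of 2^(m−2) while the in-phase autocorrelation is 2^(m−1).

```python
def m_sequence(taps, m):
    """Binary m-sequence of length 2**m - 1 from a Fibonacci LFSR."""
    state = [1] * m
    seq = []
    for _ in range(2 ** m - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:          # XOR the tapped stages
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

def ipcc(a, b):
    """In-phase cross-correlation of two 0/1 codewords."""
    return sum(x & y for x, y in zip(a, b))

seq = m_sequence(taps=(3, 1), m=3)             # x^3 + x + 1, length N = 7
codes = [seq[k:] + seq[:k] for k in range(7)]  # cyclic shifts as codewords
auto = ipcc(codes[0], codes[0])
cross = {ipcc(codes[0], c) for c in codes[1:]}
print(auto, cross)
```

The single-valued cross-correlation set is what lets a balanced receiver cancel MUI by subtracting a scaled complementary-decoder output.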

Keywords: spectral amplitude coding, SAC, fiber-optic code-division multiple-access, FO-CDMA, partial M-sequence, PMS code, fiber Bragg grating, FBG

Procedia PDF Downloads 353
203 PbLi Activation Due to Corrosion Products in WCLL BB (EU-DEMO) and Its Impact on Reactor Design and Recycling

Authors: Nicole Virgili, Marco Utili

Abstract:

The design of the breeding blanket in tokamak fusion energy systems has to guarantee sufficient availability in addition to its functions, namely tritium breeding self-sufficiency, power extraction, and shielding (of the magnets and the vacuum vessel). All these functions must be fulfilled under extremely harsh operating conditions in terms of heat flux and neutron dose, as well as the chemical environment of the coolant and breeder, which challenge the structural materials (structural resistance and corrosion resistance). The movement and activation of fluids from the BB to the ex-vessel components in a fusion power plant are an important radiological consideration because flowing material can carry radioactivity to safety-critical areas. This includes gamma-ray emission from the activated fluid and activated corrosion products, and secondary activation resulting from neutron emission, with implications for the safety of maintenance personnel and damage to electrical and electronic equipment. In addition to the PbLi breeder activation, it is important to evaluate the contribution of the activated corrosion products (ACPs) dissolved in the lead-lithium eutectic alloy at different concentration levels. Therefore, the purpose of this study is to evaluate the PbLi activity utilizing the FISPACT-II inventory code. Emphasis is given to how the design of the EU-DEMO WCLL and the potential recycling of the breeder material will be affected by the activation of PbLi and the associated ACPs. For this scope, the following computational tools, data, and geometry have been considered: • Neutron source: EU-DEMO neutron flux < 10¹⁴ cm⁻² s⁻¹. • Neutron flux distribution in the equatorial breeding blanket module (BBM) #13 in the WCLL BB outboard central zone, which is the most activated zone, computed with MCNP6 with the aim of introducing a conservative component. • The recommended geometry model: the 2017 EU-DEMO CAD model.
• Blanket module material specifications (composition). • Activation calculations for different ACP concentration levels in the PbLi breeder, with a given chemistry in stationary equilibrium conditions, using the FISPACT-II code. The results suggest that a waiting time of about 10 years from shutdown (SD) is needed before the PbLi can be safely manipulated for recycling operations with simple shielding requirements. The dose rate is mainly given by the PbLi itself, and the ACP concentration (×1 or ×100) does not shift the result. In conclusion, the results show that ACP levels have no significant impact on PbLi activation.
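The waiting-time argument can be illustrated with a back-of-the-envelope decay calculation (illustrative only; the actual answer comes from the full FISPACT-II inventory calculation, since many nuclides with different half-lives contribute): if the contact dose rate after shutdown were dominated by a single decay constant, the time to reach a hands-on limit would follow from simple exponential decay. Every number below is assumed.

```python
import math

def waiting_time_years(d0_sv_h, limit_sv_h, half_life_years):
    """Years until d0 * exp(-lambda * t) falls below the dose-rate limit."""
    lam = math.log(2) / half_life_years
    return math.log(d0_sv_h / limit_sv_h) / lam

# hypothetical: 1 Sv/h at shutdown, a 10 uSv/h hands-on limit,
# and an effective half-life of 0.6 years for the dominant activity
t = waiting_time_years(1.0, 1e-5, 0.6)
print(f"{t:.1f} years")
```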

Keywords: activation, corrosion products, recycling, WCLL BB, PbLi

Procedia PDF Downloads 86
202 Model of Pharmacoresistant Blood-Brain Barrier In-vitro for Prediction of Transfer of Potential Antiepileptic Drugs

Authors: Emílie Kučerová, Tereza Veverková, Marina Morozovová, Eva Kudová, Jitka Viktorová

Abstract:

The blood-brain barrier (BBB) is a key element regulating the transport of substances between the blood and the central nervous system (CNS). The BBB protects the CNS from potentially harmful substances and maintains a suitable environment for nervous activity in the CNS, but at the same time, it represents a significant obstacle to the entry of drugs into the CNS. Pharmacoresistant epilepsy is a form of epilepsy that cannot be suppressed using two (or more) appropriately chosen antiepileptic drugs. In many cases, pharmacoresistant epilepsy is characterized by an increased concentration of efflux pumps on the luminal sides of the endothelial cells that form the BBB and an increased number of drug-metabolizing enzymes in the BBB cells, thereby preventing the effective transport of antiepileptic drugs into the CNS. Currently, a number of scientific groups are focusing on the preparation and improvement of BBB models in vitro in order to study cell interactions or transport mechanisms. However, in pathological conditions such as pharmacoresistant epilepsy, there are changes in BBB structure, and current BBB models are insufficient for related research. Our goal is to develop a suitable BBB model for pharmacoresistant epilepsy in vitro and use it to test the transfer of potential antiepileptic drugs. This model is created by co-culturing immortalized human cerebral microvascular endothelial cells, human vascular pericytes and immortalized human astrocytes. The BBB in vitro is cultivated in the form of a 2D transwell model and the integrity of the barrier is verified by measuring transendothelial electrical resistance (TEER). From the current results, a contact cell arrangement with the cultivation of endothelial cells on the upper side of the insert and the co-cultivation of astrocytes and pericytes on the lower side of the insert is selected as the most promising for BBB model cultivation. 
The pharmacoresistance of the BBB model is achieved by long-term cultivation of endothelial cells in an increasing concentration of selected antiepileptic drugs, which should lead to increased production of efflux pumps and drug-metabolizing enzymes. The pharmacoresistant BBB model in vitro will be further used for the screening of substances that could act both as antiepileptics and at the same time as inhibitors of efflux pumps in endothelial cells. This project was supported by the Technology Agency of the Czech Republic (TACR), Personalized Medicine: Translational research towards biomedical applications, No. TN02000109 and by the Academy of Sciences of the Czech Republic (AS CR) – grant RVO 61388963.
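The TEER verification mentioned above is usually normalised so values are comparable across insert sizes: the blank (cell-free) insert resistance is subtracted and the result is scaled by the membrane area. A minimal sketch of that calculation; the function name and the numbers are illustrative assumptions, not values from this study:

```python
def teer_ohm_cm2(r_measured_ohm, r_blank_ohm, membrane_area_cm2):
    """Unit-area TEER: subtract the blank (cell-free) insert resistance,
    then scale by membrane area (a larger membrane gives a lower raw
    resistance, so multiplying by area makes values comparable)."""
    return (r_measured_ohm - r_blank_ohm) * membrane_area_cm2

# Illustrative numbers only: a 12-well insert of 1.12 cm2,
# 120 ohm blank, 150 ohm with the co-culture present
print(teer_ohm_cm2(150, 120, 1.12))  # roughly 33.6 ohm*cm2
```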

Keywords: antiepileptic drugs, blood-brain barrier, efflux transporters, pharmacoresistance

Procedia PDF Downloads 31
201 Musculoskeletal Disorders among Employees of an Assembly Industrial Workshop: Biomechanical Constraints' Semi-Quantitative Analysis

Authors: Lamia Bouzgarrou, Amira Omrane, Haithem Kalel, Salma Kammoun

Abstract:

Background: During recent decades, the mechanical and electrical industrial sector has expanded greatly, with significant employability potential. However, this sector faces an increasing prevalence of musculoskeletal disorders, with heavy consequences in direct and indirect costs. Objective: The current intervention was motivated by the high frequency of musculoskeletal disorders of the upper limbs and back among the operators of an assembly workshop in a leading company specialized in sanitary equipment and water and gas connections. We aimed to identify biomechanical constraints among these operators through activity analysis and semi-quantitative analysis of biomechanical exposures based on video recordings and MUSKA-TMS software. Methods: We first conducted open observations and exploratory interviews in order to gain an overall understanding of the work situation. Then, we analyzed the operators' activity through systematic observations and interviews. Finally, we conducted a semi-quantitative analysis of biomechanical constraints with MUSKA-TMS software after video recording of a representative activity period. The assessment of biomechanical constraints was based on several criteria: biomechanical characteristics (work postures), aggravating factors (cold, vibration, stress, etc.) and exposure time (duration and frequency of solicitations, recovery phases), with a synthetic risk score varying from 1 to 4 (1: low risk of developing MSD; 4: high risk). Results: The semi-quantitative analysis identified many elementary operations with high biomechanical constraints, such as high repetitiveness, insufficient recovery time and constraining angulation of the shoulders, wrists and cervical spine. Among these risky elementary operations were the assembly of the sleeve with the body, the assembly of the axis, and the control of gas valves on the testing table.
Transformation of the work situations was recommended, covering both the redevelopment of industrial areas and the integration of new mechanical-handling tools and equipment that reduce operator exposure to vibration. Conclusion: Musculoskeletal disorders are complex and costly disorders. Moreover, an approach centered on the observation of work can promote interdisciplinary dialogue and exchange between actors, with the objective of maximizing the company's performance and improving the operators' quality of life.
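The abstract describes a synthetic 1-to-4 risk score combining postures, repetitiveness, recovery time and aggravating factors. The sketch below is a purely hypothetical illustration of how such criteria might be synthesised; the actual MUSKA-TMS scoring algorithm is proprietary and not described here:

```python
def risk_level(posture_severity, repetitive, recovery_adequate, aggravating_factors):
    """Toy synthesis of a 1-4 MSD risk score (illustrative only; NOT the
    MUSKA-TMS algorithm). posture_severity: 0 (neutral) to 3 (extreme
    angulation); aggravating_factors: count of cold/vibration/stress present."""
    score = 1 + min(posture_severity, 2)       # posture drives the base score
    if repetitive and not recovery_adequate:   # high repetition, no recovery
        score += 1
    if aggravating_factors >= 2:               # several aggravating factors
        score += 1
    return min(score, 4)                       # cap at the maximum level
```

For example, a neutral, non-repetitive task stays at level 1, while an extreme posture with high repetition, no recovery and several aggravating factors saturates at level 4.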

Keywords: musculoskeletal disorders, biomechanical constraints, semi-quantitative analysis, ergonomics

Procedia PDF Downloads 126
200 Photovoltaic-Driven Thermochemical Storage for Cooling Applications to Be Integrated in Polynesian Microgrids: Concept and Efficiency Study

Authors: Franco Ferrucci, Driss Stitou, Pascal Ortega, Franck Lucas

Abstract:

The energy situation in tropical insular regions, as found in the French Polynesian islands, presents a number of challenges, such as high dependence on imported fuel, high transport costs from the mainland and weak electricity grids. Alternatively, these regions have a variety of renewable energy resources, which favor the exploitation of smart microgrids and energy storage technologies. With regard to electrical energy demand, the high temperatures in these regions throughout the year mean that a large proportion of consumption is used for cooling buildings, even during the evening hours. In this context, this paper presents an air conditioning system driven by photovoltaic (PV) electricity that combines a refrigeration system and a thermochemical storage process. Thermochemical processes are able to store energy in the form of chemical potential with virtually no losses, and this energy can be used to produce cooling during the evening hours without the need to run a compressor (thus no electricity is required). Such storage processes implement thermochemical reactors in which a reversible chemical reaction between a solid compound and a gas takes place. The solid/gas pair used in this study is BaCl2 reacting with ammonia (NH3), which is also the coolant fluid in the refrigeration circuit. In the proposed system, the PV-driven electric compressor is used during the daytime either to run the refrigeration circuit when a cooling demand occurs, or to decompose the ammonia-charged salt and remove the gas from the thermochemical reactor when no cooling is needed. During the evening, when no solar electricity is available, the system changes its configuration: the reactor reabsorbs the ammonia gas from the evaporator and produces the cooling effect. In comparison to classical PV-driven air conditioning units equipped with electrochemical batteries (e.g. Pb, Li-ion), the proposed system has the advantage of a novel storage technology with a much longer charge/discharge life cycle and no self-discharge. It also allows continuous operation of the electric compressor during the daytime, thus avoiding the problems associated with on-off cycling. This work focuses on the system concept and on the efficiency study of its main components. It also compares thermochemical storage with electrochemical storage, as well as with other forms of thermal storage such as latent heat (ice) and sensible heat (chilled water). The preliminary results show that the system seems to be a promising alternative to simultaneously fulfill cooling and energy storage needs in tropical insular regions.
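The cold-storage capacity of such a solid/gas reactor can be estimated from stoichiometry: the useful cooling is the evaporation enthalpy of the NH3 that the salt reabsorbs from the evaporator. The 8:1 NH3:BaCl2 ratio and the enthalpy value below are textbook approximations, not figures from the paper:

```python
M_BACL2 = 208.2      # g/mol, molar mass of BaCl2
NH3_PER_BACL2 = 8    # BaCl2 + 8 NH3 <-> BaCl2.8NH3 (full ammoniate)
DH_EVAP_NH3 = 23.0   # kJ/mol, approximate NH3 latent heat near evaporator conditions

def cooling_capacity_kwh(salt_mass_kg):
    """Cold produced when the salt reabsorbs NH3 vapour drawn from the
    evaporator: moles of salt x moles of NH3 cycled x evaporation enthalpy."""
    mol_salt = salt_mass_kg * 1000 / M_BACL2
    q_kj = mol_salt * NH3_PER_BACL2 * DH_EVAP_NH3
    return q_kj / 3600   # 1 kWh = 3600 kJ
```

With these assumed values, 10 kg of salt stores on the order of 2-3 kWh of cooling, which gives a feel for the reactor mass an evening cooling load would require.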

Keywords: microgrid, solar air-conditioning, solid/gas sorption, thermochemical storage, tropical and insular regions

Procedia PDF Downloads 209
199 Luminescent Dye-Doped Polymer Nanofibers Produced by Electrospinning Technique

Authors: Monica Enculescu, A. Evanghelidis, I. Enculescu

Abstract:

Among the numerous methods for obtaining polymer nanofibers, the electrospinning technique stands out due to growing interest driven by its proven utility, which has led to the development and improvement of the method and the appearance of novel materials. In particular, the production of polymeric nanofibers into which different dopants are introduced has been intensively studied in recent years because of the increased interest in obtaining functional electrospun nanofibers. Electrospinning is a facile method of obtaining polymer nanofibers with diameters from tens of nanometers to micrometric sizes that are cheap, flexible, scalable, functional and biocompatible. Besides their multiple applications in medicine, polymeric nanofibers obtained by electrospinning permit the manipulation of light at nanometric dimensions when doped with organic dyes or different nanoparticles. It is a simple technique that uses an electric field to draw fine polymer nanofibers from solutions and does not require complicated devices or high temperatures. Different morphologies of the electrospun nanofibers can be obtained for the same polymeric host when different parameters of the electrospinning process are used. Consequently, we can tune the optical properties of the electrospun nanofibers (e.g. change the wavelength of the emission peak) by varying the parameters of the fabrication method. We focus on obtaining doped polymer nanofibers with enhanced optical properties using the electrospinning technique. The aim of the paper is to produce dye-doped polymer nanofiber mats incorporating uniformly dispersed dyes. Transmission and fluorescence of the fibers will be evaluated by spectroscopic methods. The morphological properties of the electrospun dye-doped polymer fibers will be evaluated using scanning electron microscopy (SEM).
We will tailor the luminescent properties of the material by doping the polymer (polyvinylpyrrolidone or polymethylmethacrylate) with different dyes (coumarins, rhodamines and sulforhodamines). The tailoring will take into account the possibility of changing the luminescent properties of dye-doped electrospun polymeric nanofibers by using different parameters for the electrospinning technique (electric voltage, distance between electrodes, flow rate of the solution, etc.). Furthermore, we can evaluate the influence of dye concentration on the emissive properties of dye-doped polymer nanofibers by using different concentrations. The advantages offered by the electrospinning technique when producing polymeric fibers are given by the simplicity of the method, the tunability of the morphology allowed by the possibility of controlling all the process parameters (temperature, viscosity of the polymeric solution, applied voltage, distance between electrodes, etc.), and by the absence of the harsh and supplementary chemicals used in traditional nanofabrication techniques. Acknowledgments: The authors acknowledge the financial support received through IFA CEA Project No. C5-08/2016.

Keywords: electrospinning, luminescence, polymer nanofibers, scanning electron microscopy

Procedia PDF Downloads 179
198 Photovoltaic Modules Fault Diagnosis Using Low-Cost Integrated Sensors

Authors: Marjila Burhanzoi, Kenta Onohara, Tomoaki Ikegami

Abstract:

Faults in photovoltaic (PV) modules should be detected as early and as completely as possible. Conventional fault detection methods such as electrical characterization, visual inspection, infrared (IR) imaging, ultraviolet fluorescence and electroluminescence (EL) imaging are used for this purpose, but they either fail to detect the location or category of the fault, or they require expensive equipment and are not convenient for on-site application. Hence, these methods are not convenient for monitoring small-scale PV systems, and low-cost, efficient inspection techniques suitable for on-site application are indispensable. In this study, in order to establish such an inspection technique, the correlation between faults and the magnetic flux density on the surface of crystalline PV modules is investigated. The magnetic flux on the surface of normal and faulted PV modules is measured under short-circuit and illuminated conditions using two different sensor devices. One device is made of small integrated sensors, namely a 9-axis motion tracking sensor with an embedded 3-axis electronic compass, an IR temperature sensor, an optical laser position sensor and a microcontroller. This device measures the X, Y and Z components of the magnetic flux density (Bx, By and Bz) a few mm above the surface of a PV module and outputs the data as line graphs in a LabVIEW program. The second device is made of a laser optical sensor and two magnetic line-sensor modules consisting of 16 magnetic sensors. This device scans the magnetic field on the surface of a PV module and outputs the data as a 3D surface plot of the magnetic flux intensity in a LabVIEW program. A PC equipped with LabVIEW software is used for data acquisition and analysis for both devices. To show the effectiveness of the method, measured results are compared with those of a normal reference module and with their EL images.
Through the experiments it was confirmed that the magnetic field in the faulted areas has a distinct profile that can be clearly identified in the measured plots. Measurement results showed a perfect correlation with the EL images, and using the position sensors the exact location of the faults was identified. The method was applied to different modules and various faults were detected with it. The proposed method allows on-site measurement and real-time diagnosis. Since simple sensors are used to make the device, it is low-cost and convenient for use by owners of small-scale or residential PV systems.
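The comparison against a normal reference module can be sketched as a simple map-differencing step: cells of the scanned flux map that deviate from the reference beyond a threshold are flagged as candidate fault locations. The threshold, units and array sizes below are illustrative assumptions, not the study's processing chain:

```python
import numpy as np

def flag_anomalies(bz_map, bz_reference, threshold_ut=5.0):
    """Compare a scanned Bz map (uT) of a module under test with the map of a
    known-good reference module; cells deviating by more than the threshold
    are flagged as candidate fault locations."""
    deviation = np.abs(bz_map - bz_reference)
    return np.argwhere(deviation > threshold_ut)   # (row, col) indices

# Illustrative 4x4 scan with one faulted cell
ref = np.full((4, 4), 20.0)
test = ref.copy()
test[2, 1] = 35.0   # local flux distortion over a faulted area
print(flag_anomalies(test, ref))   # flags cell (2, 1)
```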

Keywords: fault diagnosis, fault location, integrated sensors, PV modules

Procedia PDF Downloads 202
197 Baseline Study of Water Quality in Indonesia Using Dynamic Methods and Technologies

Authors: R. L. P. de Lima, F. C. B. Boogaard, D. Setyo Rini, P. Arisandi, R. E. de Graaf-Van Dinther

Abstract:

Water quality in many Asian countries is very poor due to inefficient solid waste management, high population growth and the lack of sewage and purification systems for households and industry. A consortium of Indonesian and Dutch organizations has begun a large-scale international research project to evaluate and propose solutions to the surface water pollution challenges in the Brantas Basin, Indonesia (East Java: Malang / Surabaya). The first phase of the project consisted of a baseline study to assess the current status of surface water bodies and to determine the ambitions and strategies among local stakeholders. This study was conducted with strongly participatory, collaborative and knowledge-sharing objectives. Several methods, such as mobile sensors (attached to boats or underwater drones), test strips and mobile apps, bio-monitoring (sediments), ecology scans using underwater cameras, and continuous / static measurements, were applied at different locations in the regions of the basin, at multiple points within the water systems (e.g. spring, upstream / downstream of industry and urban areas, mouth of the Surabaya River, groundwater). Results gave an indication of (reference) values of basic water quality parameters such as turbidity, electrical conductivity, dissolved oxygen and nutrients (ammonium / nitrate). An important outcome was that collecting random samples may not be representative of a body of water, given that water quality parameters can vary widely in space (x, y and depth) and time (day / night and seasonal). Innovative / dynamic monitoring methods (e.g. underwater drones, sensors on boats) can contribute to a better understanding of the quality of the living environment (water, ecology, sediment) and the factors that affect it. The field work activities, in particular the underwater drones, showed potential as awareness actions, as they attracted interest from locals and the local press.
This baseline study involved cooperation between local managing organizations and Dutch partners, and their willingness to work together is important to ensure participatory actions and social awareness regarding the process of adapting and strengthening regulations, or the construction of facilities such as sewage systems.

Keywords: water quality monitoring, pollution, underwater drones, social awareness

Procedia PDF Downloads 168
196 Improved Traveling Wave Method Based Fault Location Algorithm for Multi-Terminal Transmission System of Wind Farm with Grounding Transformer

Authors: Ke Zhang, Yongli Zhu

Abstract:

Due to rapid load growth in today's highly electrified societies and the requirement for green energy sources, large-scale wind farm power transmission systems are constantly developing. Such a system is a typical multi-terminal power supply system with a complex transmission-line network topology. Moreover, it is located in the complex terrain of mountains and grasslands, which increases the possibility of transmission line faults, makes fault location difficult after a fault occurs, and results in a serious degree of wind curtailment. In order to solve these problems, a fault location method for multi-terminal transmission lines based on wind farm characteristics and an improved single-ended traveling wave positioning method is proposed. By studying the zero-sequence current characteristics of the grounding transformer (GT) in existing large-scale wind farms, a criterion for judging the fault interval of the multi-terminal transmission line is obtained. When a ground short-circuit fault occurs, zero-sequence current flows only on the path between the GT and the fault point. Therefore, the interval in which the fault point lies is obtained by determining the path of the zero-sequence current. After determining the fault interval, the location of the short-circuit fault point is calculated by the traveling wave method. However, this article uses an improved traveling wave method: positioning accuracy is improved by combining the single-ended traveling wave method with double-ended electrical data. Furthermore, a method of calculating the traveling wave velocity is derived from these improvements, yielding, in theory, the actual wave velocity. This improved velocity calculation further increases the positioning accuracy.
Compared with the traditional positioning method, the average positioning error of this method is reduced by 30%. This method overcomes the traditional method's poor fault location accuracy on wind farm transmission lines. In addition, it is more accurate than the traditional fixed-wave-velocity method, since it can calculate the wave velocity in real time according to on-site conditions, solving the problem that a fixed wave velocity cannot be updated with the changing environment. The method is verified in PSCAD/EMTDC.
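One plausible reading of combining single-ended reflection timing with double-ended arrival data is that the two measurements together determine both the wave velocity and the fault distance: the single-ended reflection interval gives 2d/v, while the synchronized arrival-time difference between the two ends gives (2d - L)/v. Solving the pair yields v = L / (Δt_single - (tA - tB)) and d = v·Δt_single / 2. The sketch below is our assumption about the scheme, not the paper's exact derivation:

```python
def locate_fault(line_length_km, dt_single_s, t_a_s, t_b_s):
    """Solve jointly for wave velocity and fault distance from end A.
    dt_single_s: time between the first wave and its fault reflection at end A;
    t_a_s, t_b_s: synchronized first-arrival times at ends A and B
    (only their difference matters, so the fault inception time cancels)."""
    v = line_length_km / (dt_single_s - (t_a_s - t_b_s))  # actual velocity, km/s
    d = v * dt_single_s / 2                               # distance from A, km
    return v, d

# Self-consistency check on synthetic data:
# 100 km line, true velocity 295000 km/s, fault 30 km from end A
v_true, d_true, L = 295000.0, 30.0, 100.0
dt_single = 2 * d_true / v_true
t_a, t_b = d_true / v_true, (L - d_true) / v_true
v_est, d_est = locate_fault(L, dt_single, t_a, t_b)
```

Because the velocity is computed from the measured times rather than assumed, it tracks the line's actual propagation conditions, which is the point of the paper's improvement.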

Keywords: grounding transformer, multi-terminal transmission line, short circuit fault location, traveling wave velocity, wind farm

Procedia PDF Downloads 227
195 Unmasking Virtual Empathy: A Philosophical Examination of AI-Mediated Emotional Practices in Healthcare

Authors: Eliana Bergamin

Abstract:

This philosophical inquiry, influenced by the seminal works of Annemarie Mol and Jeannette Pols, critically examines the transformative impact of artificial intelligence (AI) on emotional caregiving practices within virtual healthcare. Rooted in the traditions of philosophy of care, philosophy of emotions, and applied philosophy, this study seeks to unravel nuanced shifts in the moral and emotional fabric of healthcare mediated by AI-powered technologies. Departing from traditional empirical studies, the approach embraces the foundational principles of care ethics and phenomenology, offering a focused exploration of the ethical and existential dimensions of AI-mediated emotional caregiving. At its core, this research addresses the introduction of AI-powered technologies mediating emotional and care practices in the healthcare sector. By drawing on Mol and Pols' insights, the study offers a focused exploration of the ethical and existential dimensions of AI-mediated emotional caregiving. Anchored in ethnographic research within a pioneering private healthcare company in the Netherlands, this critical philosophical inquiry provides a unique lens into the dynamics of AI-mediated emotional practices. The study employs in-depth, semi-structured interviews with virtual caregivers and care receivers alongside ongoing ethnographic observations spanning approximately two and a half months. Delving into the lived experiences of those at the forefront of this technological evolution, the research aims to unravel subtle shifts in the emotional and moral landscape of healthcare, critically examining the implications of AI in reshaping the philosophy of care and human connection in virtual healthcare. Inspired by Mol and Pols' relational approach, the study prioritizes the lived experiences of individuals within the virtual healthcare landscape, offering a deeper understanding of the intertwining of technology, emotions, and the philosophy of care. 
In the realm of philosophy of care, the research elucidates how virtual tools, particularly those driven by AI, mediate emotions such as empathy, sympathy, and compassion—the bedrock of caregiving. Focusing on emotional nuances, the study contributes to the broader discourse on the ethics of care in the context of technological mediation. In the philosophy of emotions, the investigation examines how the introduction of AI alters the phenomenology of emotional experiences in caregiving. Exploring the interplay between human emotions and machine-mediated interactions, the nuanced analysis discerns implications for both caregivers and caretakers, contributing to the evolving understanding of emotional practices in a technologically mediated healthcare environment. Within applied philosophy, the study transcends empirical observations, positioning itself as a reflective exploration of the moral implications of AI in healthcare. The findings are intended to inform ethical considerations and policy formulations, bridging the gap between technological advancements and the enduring values of caregiving. In conclusion, this focused philosophical inquiry aims to provide a foundational understanding of the evolving landscape of virtual healthcare, drawing on the works of Mol and Pols to illuminate the essence of human connection, care, and empathy amid technological advancements.

Keywords: applied philosophy, artificial intelligence, healthcare, philosophy of care, philosophy of emotions

Procedia PDF Downloads 28
194 Preserving the Cultural Values of the Mararoa River and Waipuna–Freshwater Springs, Southland New Zealand: An Integration of Traditional and Scientific Knowledge

Authors: Erine van Niekerk, Jason Holland

Abstract:

In Māori culture, water is considered to be the foundation of all life and has its own mana (spiritual power) and mauri (life force). Water classification for cultural values therefore includes categories like waitapu (sacred water), waimanawa-whenua (water from under the land), waipuna (freshwater springs), the relationship between water quantity and quality, and the relationship between surface water and groundwater. Particular rivers and lakes have special significance to iwi and hapu for their rohe (tribal areas). The Mararoa River, including its freshwater springs and wetlands, is an example of such an area. There is currently little information available about the sources, characteristics and behavior of these important water resources, and this study of the water quality of the Mararoa River and the adjacent freshwater springs will provide valuable information for informed decisions about water management. The regional council of Southland, Environment Southland, is required to make changes under its water quality policy in order to comply with the requirements of the new National Standards for Freshwater, including consulting with Māori to determine strategies for decision making. This requires an approach that combines traditional knowledge with scientific knowledge in the decision-making process. This study provided scientific data that can be used in future decision making on freshwater springs, combined with the traditional values for this particular area. Several parameters were tested in situ as well as in a laboratory. Parameters such as temperature, salinity, electrical conductivity, Total Dissolved Solids, Total Kjeldahl Nitrogen, Total Phosphorus, Total Suspended Solids, and Escherichia coli, among others, show that the recorded values of all test parameters fall within the recommended ANZECC guidelines and Environment Southland standards and do not at present raise any concerns about the water quality of the springs and the river.
However, the destruction of natural areas, particularly due to changes in farming practices, and the changes to water quality caused by the introduction of Didymosphenia geminata (didymo) mean that Māori have already lost many of their traditional mahinga kai (food sources). A major shift from land uses such as sheep farming to dairying in Southland is putting freshwater resources under pressure. It is therefore important to draw on traditional knowledge and spirituality alongside scientific knowledge to protect the waters of the Mararoa River and waipuna. This study hopes to contribute scientific knowledge to preserve the cultural values of these significant waters.
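The check of recorded values against guideline limits described above can be sketched as a simple threshold comparison. The limit values below are placeholders for illustration only, not official ANZECC or Environment Southland numbers:

```python
GUIDELINE_LIMITS = {               # placeholder trigger values, NOT official figures
    "total_phosphorus_mg_l": 0.033,
    "total_nitrogen_mg_l": 0.614,
    "e_coli_cfu_100ml": 260,
}

def exceedances(sample):
    """Return the parameters of a water sample that exceed their guideline limit;
    an empty dict means the sample is within all checked guidelines."""
    return {p: v for p, v in sample.items()
            if p in GUIDELINE_LIMITS and v > GUIDELINE_LIMITS[p]}

print(exceedances({"total_phosphorus_mg_l": 0.01, "e_coli_cfu_100ml": 300}))
```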

Keywords: cultural values, freshwater springs, Maori, water quality

Procedia PDF Downloads 250
193 Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor

Authors: Zhihui Liu, Dongmei Hao, Qian Qiu, Yang An, Lin Yang, Song Zhang, Yimin Yang, Xuwen Li, Dingchang Zheng

Abstract:

Premature birth, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. Existing measurement techniques for diagnosing preterm delivery include the tocodynamometer, ultrasound and fetal fibronectin. However, they are subjective, or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG), based on recording the uterine electrical activity with electrodes attached to the maternal abdomen, is a promising method to assess uterine activity and diagnose preterm labor. The purpose of this study is to analyze the differences in EHG signal features between term and preterm labor. A free-access database was used, with 300 signals acquired in two groups of pregnant women who delivered at term (262 cases) and preterm (38 cases). Among them, EHG signals from 38 term-labor and 38 preterm-labor cases were preprocessed with band-pass Butterworth filters of 0.08-4 Hz. Then, EHG signal features were extracted, comprising classical time-domain descriptors including root mean square and zero-crossing number; spectral parameters including peak frequency, mean frequency and median frequency; wavelet packet coefficients; autoregressive (AR) model coefficients; and nonlinear measures including the maximal Lyapunov exponent, sample entropy and correlation dimension. Their statistical significance for distinguishing the two groups of recordings was assessed. The results showed that the mean frequency of preterm labor was significantly lower than that of term labor (p < 0.05). Five coefficients of the AR model showed a significant difference between term and preterm labor. The maximal Lyapunov exponent of early preterm (time of recording < the 26th week of gestation) was significantly smaller than that of early term.
The sample entropy of late preterm (time of recording > the 26th week of gestation) was significantly smaller than that of late term. There was no significant difference in the other features between the term-labor and preterm-labor groups. Any future work regarding classification should therefore focus on using multiple techniques, with the mean frequency, AR coefficients, maximal Lyapunov exponent and sample entropy among the prime candidates. Even if these methods are not yet ready for clinical practice, they provide the most promising indicators of preterm labor.
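Several of the features listed above (root mean square, zero-crossing number, mean and median frequency) are straightforward to compute from a filtered EHG segment. A minimal sketch, where the spectral estimator and parameter choices are our assumptions rather than the study's code:

```python
import numpy as np

def ehg_features(x, fs):
    """Basic time- and frequency-domain EHG features (a sketch; the study's
    exact preprocessing and estimators are not reproduced here)."""
    rms = float(np.sqrt(np.mean(x ** 2)))
    # count of sign changes along the signal
    zc = int(np.count_nonzero(np.diff(np.signbit(x).astype(np.int8))))
    f = np.fft.rfftfreq(len(x), 1 / fs)
    pxx = np.abs(np.fft.rfft(x)) ** 2              # power spectrum
    mean_freq = float(np.sum(f * pxx) / np.sum(pxx))
    cum = np.cumsum(pxx)                           # frequency splitting power in half
    median_freq = float(f[np.searchsorted(cum, 0.5 * cum[-1])])
    return {"rms": rms, "zero_crossings": zc,
            "mean_freq": mean_freq, "median_freq": median_freq}

# Sanity check on a pure 1 Hz tone sampled at 20 Hz for 60 s:
# both spectral features should sit at 1 Hz
t = np.arange(0, 60, 1 / 20.0)
feats = ehg_features(np.sin(2 * np.pi * t), fs=20.0)
```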

Keywords: electrohysterogram, feature, preterm labor, term labor

Procedia PDF Downloads 531
192 Development of an Artificial Neural Network to Measure Science Literacy Leveraging Neuroscience

Authors: Amanda Kavner, Richard Lamb

Abstract:

Faster growth in science and technology in other nations may make staying globally competitive more difficult without a shift in focus on how science is taught in US classes. An integral part of learning science involves visual and spatial thinking, since complex, real-world phenomena are often expressed in visual, symbolic, and concrete modes. The primary barrier to spatial thinking and visual literacy in Science, Technology, Engineering, and Math (STEM) fields is representational competence, which includes the ability to generate, transform, analyze and explain representations, as opposed to generic spatial ability. Although the relationship between foundational visual literacy and domain-specific science literacy is known, science literacy as a function of science learning is still not well understood. Moreover, a more reliable measure is needed to design resources that enhance the fundamental visuospatial cognitive processes behind scientific literacy. To support the improvement of students' representational competence, the visualization skills necessary to process these science representations first needed to be identified, which necessitates the development of an instrument to quantitatively measure visual literacy. With such a measure, schools, teachers, and curriculum designers can target the individual skills necessary to improve students' visual literacy, thereby increasing science achievement. This project details the development of an artificial neural network capable of measuring science literacy using functional near-infrared spectroscopy (fNIR) data. This data was previously collected by Project LENS (Leveraging Expertise in Neurotechnologies), a Science of Learning Collaborative Network (SL-CN) of STEM education scholars from three US universities (NSF award 1540888), utilizing mental rotation tasks to assess student visual literacy.
Hemodynamic response data from fNIRSoft was exported as an Excel file, with 80 each of the 2D Wedge and Dash models (dash) and the 3D Stick and Ball models (BL). Complexity data were in an Excel workbook separated by participant (ID), containing information for both types of tasks. After converting strings to numbers for analysis, spreadsheets with measurement data and complexity data were uploaded to RapidMiner's TurboPrep and merged. Using RapidMiner Studio, a Gradient Boosted Trees artificial neural network (ANN) consisting of 140 trees with a maximum depth of 7 branches was developed; 99.7% of the ANN's predictions were accurate. The ANN determined that the biggest predictors of a successful mental rotation are the individual problem number, the response time and fNIR optode #16, located along the right prefrontal cortex, which is important in processing visuospatial working memory and episodic memory retrieval, both vital for science literacy. With an unbiased measurement of science literacy provided by psychophysiological measurements analyzed with an ANN, educators and curriculum designers will be able to create targeted classroom resources to help improve student visuospatial literacy, and therefore science literacy.
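A gradient boosted trees model with the stated hyperparameters (140 trees, maximum depth 7) can be approximated in scikit-learn rather than RapidMiner; the synthetic data below merely stands in for the non-public fNIR feature table (problem number, response time, optode signals):

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the fNIR feature table; shapes are assumptions
X, y = make_classification(n_samples=400, n_features=18, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 140 trees with maximum depth 7, mirroring the hyperparameters in the abstract
model = GradientBoostingClassifier(n_estimators=140, max_depth=7, random_state=0)
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)

# Analogue of the paper's "biggest predictors": per-feature importances
importances = model.feature_importances_
```

On real data, ranking `importances` is how one would recover findings like "optode #16 and response time are the strongest predictors."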

Keywords: artificial intelligence, artificial neural network, machine learning, science literacy, neuroscience

Procedia PDF Downloads 92
191 Imaging Spectrum of Central Nervous System Tuberculosis on Magnetic Resonance Imaging: Correlation with Clinical and Microbiological Results

Authors: Vasundhara Arora, Anupam Jhobta, Suresh Thakur, Sanjiv Sharma

Abstract:

Aims and Objectives: Intracranial tuberculosis (TB) is one of the most devastating manifestations of TB and a challenging public health issue of considerable importance and magnitude the world over. This study elaborates the imaging spectrum of neurotuberculosis on magnetic resonance imaging (MRI) in 29 clinically suspected cases from a tertiary care hospital. Materials and Methods: A prospective hospital-based evaluation of the MR imaging features of neurotuberculosis in 29 clinically suspected cases was carried out in the Department of Radio-diagnosis, Indira Gandhi Medical Hospital, from July 2017 to August 2018. MR images were obtained on a 1.5 T Magnetom Avanto machine and were analyzed to identify any abnormal meningeal enhancement or parenchymal lesions. Microbiological and biochemical CSF analysis was performed in radiologically suspected cases and the results were compared with the imaging data. Clinical follow-up of the patients started on anti-tuberculous treatment was done to evaluate the response to treatment and clinical outcome. Results: The age range of patients in the study was 1 to 73 years, with a mean age at presentation of 11.5 years. No significant difference in the distribution of cerebral tuberculosis was noted between the two genders. The imaging findings of neurotuberculosis were varied and nonspecific, ranging from leptomeningeal enhancement and cerebritis to space-occupying lesions such as tuberculomas and tubercular abscesses. Complications presenting as hydrocephalus (n=7) and infarcts (n=9) were noted in a few of these patients. All 29 patients showed radiological suspicion of CNS tuberculosis, with meningitis alone observed in 11 cases, tuberculomas alone in 4 cases, and meningitis with parenchymal tuberculomas in 11 cases. A tubercular abscess and cerebritis were observed in one case each, and tuberculous arachnoiditis was noted in one patient.
GeneXpert positivity was obtained in 11 out of 29 radiologically suspected patients; none of the patients showed culture positivity. The meningeal form of the disease alone showed the highest GeneXpert positivity rate (n=5), followed by the combination of meningeal and parenchymal forms (n=4); the parenchymal manifestation alone showed the lowest positivity rate (n=3). All 29 patients were started on anti-tubercular treatment based on radiological suspicion of the disease, with clinical improvement observed in 27 treated patients. Conclusions: In our study, a higher incidence of neurotuberculosis was noted in the paediatric population, with predominance of the meningeal form of the disease. GeneXpert positivity was low due to the paucibacillary nature of cerebrospinal fluid (CSF), with even lower positivity of CSF samples in the parenchymal form of the manifestation. MRI showed high accuracy in detecting CNS lesions in neurotuberculosis. Hence, it can be concluded that MRI plays a crucial role in the diagnosis because of its inherent sensitivity and specificity and is an indispensable imaging modality. It caters to the need for early diagnosis, owing to the poor sensitivity of microbiological tests, more so in the parenchymal manifestation of the disease.

Keywords: neurotuberculosis, tubercular abscess, tuberculoma, tuberculous meningitis

Procedia PDF Downloads 141
190 Artificial Intelligence for Traffic Signal Control and Data Collection

Authors: Reggie Chandra

Abstract:

Traffic accidents and traffic signal optimization are correlated. However, 70-90% of the traffic signals across the USA are not synchronized, because agencies lack the resources to create and implement timing plans. In this work, we will discuss the use of a breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and collect 24/7/365 accurate traffic data using a vehicle detection system. We will discuss recent advances in Artificial Intelligence technology, how AI works in vehicle, pedestrian, and bike data collection, how timing plans are created, and the best workflow for doing so. This paper will also showcase how Artificial Intelligence makes signal timing affordable. We will introduce a technology that uses Convolutional Neural Networks (CNN) and deep learning algorithms to detect vehicles, collect data, develop timing plans, and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes in the visual cortex. A neural net is modeled after the human brain: it consists of millions of densely connected processing nodes, and it is a form of machine learning in which the network learns to recognize vehicles through training, a process called deep learning. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns, generate an unlimited number of timing plans, and thus improve vehicle flow. Convolutional Neural Networks not only outperform other detection algorithms but also, in tasks such as classifying objects into fine-grained categories, outperform humans. Safety is of primary importance to traffic professionals, but they often lack the studies or data to support their decisions. Currently, one-third of transportation agencies do not collect pedestrian and bike data.
We will discuss how the use of Artificial Intelligence for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it provides traffic engineers with tools that allow them to unleash their potential, instead of dealing with constant complaints, a snapshot of limited handpicked data, and multiple systems that each require additional adaptation work. The methodologies used and proposed in the research include a camera model identification method based on deep Convolutional Neural Networks. The proposed application was evaluated on our data sets, acquired under a variety of daily real-world road conditions, and its performance was compared with that of the commonly used methods, which require collecting data by counting, evaluating and adapting it, running it through well-established algorithms, and then deploying the result to the field. This work explores themes such as how technologies powered by Artificial Intelligence can benefit your community and how to translate the complex and often overwhelming benefits into a language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that Artificial Intelligence brings to traffic signal control and data collection are unsurpassed.
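
To make the detection building block concrete, the sketch below implements the core convolution-plus-ReLU operation of a CNN in plain Python. It is purely illustrative: the filter is hand-set rather than learned, the "image" is a toy intensity grid, and none of it is code from the system described above.

```python
# Minimal sketch of the convolution step at the heart of a CNN detector.
# A real vehicle-detection model stacks many learned filters and trains
# them end to end; all values here are illustrative.

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(max(s, 0.0))  # ReLU activation
        out.append(row)
    return out

# A vertical-edge filter responds strongly where intensity changes
# left to right, e.g. at the boundary between road and vehicle.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge_filter = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]
feature_map = conv2d(image, edge_filter)  # high response along the edge
```

In a trained network, many such feature maps are pooled and fed to further layers until the final layer outputs class scores (vehicle, pedestrian, bike).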

Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal

Procedia PDF Downloads 123
189 An Integrated Lightweight Naïve Bayes Based Webpage Classification Service for Smartphone Browsers

Authors: Mayank Gupta, Siba Prasad Samal, Vasu Kakkirala

Abstract:

The internet world and its priorities have changed considerably in the last decade. Browsing on smartphones has increased manifold and is set to grow much more. Users spend considerable time browsing different websites, which gives a great deal of insight into their preferences. Instead of presenting plain information, classifying different aspects of browsing, such as Bookmarks, History, and the Download Manager, into useful categories would improve and enhance the user experience. Most classification solutions are server-side, which involves maintaining servers and other heavy resources; such solutions have security constraints and may miss contextual data during classification. On-device classification solves many of these problems, but the challenge is to achieve classification accuracy under resource constraints. On-device classification can be much more useful for personalization, reduced dependency on cloud connectivity, and better privacy and security. This approach provides more relevant results than current standalone solutions because it uses the content rendered by the browser, which is customized by the content provider based on the user's profile. This paper proposes a Naive Bayes based lightweight classification engine targeted at resource-constrained devices. Our solution integrates with the web browser, which in turn triggers the classification algorithm. Whenever a user browses a webpage, the solution extracts DOM tree data from the browser's rendering engine. This DOM data is dynamic, contextual, and secure data that cannot be replicated. The proposal extracts different features of the webpage and runs them through an algorithm to classify the page into multiple categories. A Naive Bayes based engine is chosen in this solution for its inherent advantages in using limited resources compared to other classification algorithms such as Support Vector Machines and Neural Networks: Naive Bayes classification requires a small memory footprint and little computation, making it suitable for the smartphone environment.
The solution can partition the model into multiple chunks, which reduces memory usage compared to loading a complete model. Classification of webpages through the integrated engine is faster, more relevant, and more energy efficient than other standalone on-device solutions. The classification engine has been tested on Samsung Z3 Tizen hardware, integrated into the Tizen Browser, which uses the Chromium rendering engine. For this solution, an extensive dataset was sourced from dmoztools.net and cleaned. The cleaned dataset has 227.5K webpages, divided into 8 generic categories ('education', 'games', 'health', 'entertainment', 'news', 'shopping', 'sports', 'travel'). Our browser-integrated solution resulted in 15% less memory usage (due to the partition method) and 24% less power consumption in comparison with a standalone solution. This solution used 70% of the dataset for training the data model and the remaining 30% for testing. An average accuracy of ~96.3% was achieved across the above-mentioned 8 categories. The engine can be further extended to suggest dynamic tags and to apply the classification to different use cases to enhance the browsing experience.
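
The classification core described above can be sketched as a multinomial Naive Bayes over extracted page text: per-category word counts, add-one (Laplace) smoothing, and log-probability scoring. The categories and training snippets below are invented for illustration; a production engine would train on the full cleaned dataset and richer DOM-derived features.

```python
# Minimal multinomial Naive Bayes sketch of a lightweight on-device
# classification engine. All training data below is illustrative.
import math
from collections import Counter, defaultdict

class TinyNaiveBayes:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # category -> word counts
        self.doc_counts = Counter()              # category -> number of docs
        self.vocab = set()

    def train(self, category, text):
        words = text.lower().split()
        self.word_counts[category].update(words)
        self.doc_counts[category] += 1
        self.vocab.update(words)

    def classify(self, text):
        words = text.lower().split()
        total_docs = sum(self.doc_counts.values())
        best, best_score = None, float("-inf")
        for cat in self.doc_counts:
            # log prior + sum of log likelihoods with add-one smoothing
            score = math.log(self.doc_counts[cat] / total_docs)
            total_words = sum(self.word_counts[cat].values())
            for w in words:
                p = (self.word_counts[cat][w] + 1) / (total_words + len(self.vocab))
                score += math.log(p)
            if score > best_score:
                best, best_score = cat, score
        return best

nb = TinyNaiveBayes()
nb.train("sports", "match score team goal league")
nb.train("shopping", "price cart discount checkout deal")
category = nb.classify("final score of the league match")  # -> "sports"
```

Because the model is just count tables, it can be partitioned by category and loaded lazily, which is the memory-saving idea the solution exploits.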

Keywords: chromium, lightweight engine, mobile computing, Naive Bayes, Tizen, web browser, webpage classification

Procedia PDF Downloads 137
188 Web-Based Decision Support Systems and Intelligent Decision-Making: A Systematic Analysis

Authors: Serhat Tüzün, Tufan Demirel

Abstract:

Decision Support Systems (DSS) have been investigated by researchers and technologists for more than 35 years. This paper analyses the developments in the architecture and software of these systems and provides a systematic analysis of different Web-based DSS approaches and Intelligent Decision-making Technologies (IDT), with suggestions for future studies. The Decision Support Systems literature begins with the building of model-oriented DSS in the late 1960s, theory developments in the 1970s, and the implementation of financial planning systems and Group DSS in the early and mid-80s. It then documents the origins of Executive Information Systems, online analytical processing (OLAP), and Business Intelligence. The implementation of Web-based DSS occurred in the mid-1990s, and since the beginning of the new millennium, intelligence has been the main focus of DSS studies. Web-based technologies are having a major impact on the design, development, and implementation processes for all types of DSS, and are being utilized for the development of DSS tools by leading developers of decision support technologies. Major companies are encouraging their customers to port their DSS applications, such as data mining, customer relationship management (CRM), and OLAP systems, to a web-based environment. Similarly, real-time data fed from manufacturing plants are now helping floor managers make decisions regarding production adjustments to ensure that high-quality products are produced and delivered. Web-based DSS are being employed by organizations as decision aids for employees as well as customers. A common usage of Web-based DSS has been to assist customers in configuring products and services according to their needs. These systems allow individual customers to design their own products by choosing from a menu of attributes, components, prices, and delivery options.
The Intelligent Decision-making Technologies (IDT) domain is a fast-growing area of research that integrates various aspects of computer science and information systems, including intelligent systems, intelligent technology, intelligent agents, artificial intelligence, fuzzy logic, neural networks, machine learning, knowledge discovery, computational intelligence, data science, big data analytics, inference engines, recommender systems or engines, and a variety of related disciplines. Innovative applications that emerge using IDT often have a significant impact on decision-making processes in government, industry, business, and academia in general. This is particularly pronounced in finance, accounting, healthcare, computer networks, real-time safety monitoring, and crisis response systems. Similarly, IDT is commonly used in military decision-making systems, security, marketing, stock market prediction, and robotics. Even though many research studies have been conducted on Decision Support Systems, a systematic analysis of the subject is still missing. To address this gap, this paper surveys recent articles on DSS. The literature has been reviewed in depth, and by classifying previous studies according to their approaches, a taxonomy for DSS has been prepared. With the aid of this taxonomic review and the recent developments in the field, this study aims to analyze the future trends in decision support systems.

Keywords: decision support systems, intelligent decision-making, systematic analysis, taxonomic review

Procedia PDF Downloads 247
187 A 1T1R Nonvolatile Memory with Al/TiO₂/Au and Sol-Gel Processed Barium Zirconate Nickelate Gate in Pentacene Thin Film Transistor

Authors: Ke-Jing Lee, Cheng-Jung Lee, Yu-Chi Chang, Li-Wen Wang, Yeong-Her Wang

Abstract:

To avoid the cross-talk issue of a resistive random access memory (RRAM)-only cell, a one-transistor, one-resistor (1T1R) architecture, with a TiO₂-based RRAM cell connected to a solution-processed barium zirconate nickelate (BZN) organic thin film transistor (OTFT), is successfully demonstrated. The OTFT was fabricated on a glass substrate. Aluminum (Al) as the gate electrode was deposited via a radio-frequency (RF) magnetron sputtering system. Barium acetate, zirconium n-propoxide, and nickel(II) acetylacetonate were synthesized using the sol-gel method. After the BZN solution was completely prepared using the sol-gel process, it was spin-coated onto the Al/glass substrate as the gate dielectric. The BZN layer was baked at 100 °C for 10 minutes under ambient air conditions. The pentacene thin film was thermally evaporated onto the BZN layer at a deposition rate of 0.08 to 0.15 nm/s. Finally, a gold (Au) electrode was deposited using an RF magnetron sputtering system and defined through shadow masks as both the source and drain. The channel length and width of the transistors were 150 and 1500 μm, respectively. For the 1T1R configuration, the RRAM device was fabricated directly on the drain electrode of the TFT device. A simple metal/insulator/metal structure consisting of Al/TiO₂/Au was fabricated: first, Au was deposited as the bottom electrode of the RRAM device by the RF magnetron sputtering system; then, the TiO₂ layer was deposited on the Au electrode by sputtering; finally, Al was deposited as the top electrode. The electrical performance of the BZN OTFT was studied, showing superior transfer characteristics with a low threshold voltage of −1.1 V, a good saturation mobility of 5 cm²/V·s, and a low subthreshold swing of 400 mV/decade. The integration of the BZN OTFT and TiO₂ RRAM devices was finally completed to form a 1T1R configuration with a low power consumption of 1.3 μW, a low operation current of 0.5 μA, and reliable data retention.
Based on the I-V characteristics, the different polarities of bipolar switching are found to be determined by the compliance current, through the different distributions of internal oxygen vacancies in the RRAM and 1T1R devices. This phenomenon is well explained by the proposed mechanism model. These results suggest the 1T1R architecture is promising for practical applications in low-power active-matrix flat-panel displays.

Keywords: one transistor and one resistor (1T1R), organic thin-film transistor (OTFT), resistive random access memory (RRAM), sol-gel

Procedia PDF Downloads 327
186 The High Potential and the Little Use of Brazilian Class Actions for Prevention and Penalization Due to Workplace Accidents in Brazil

Authors: Sandra Regina Cavalcante, Rodolfo A. G. Vilela

Abstract:

Introduction: Work accidents and occupational diseases are a major public health problem around the world and the main health problem of workers, with high social and economic costs. Brazil has shown progress over the last years, with the development of a regulatory system to improve safety and quality of life in the workplace. However, the situation is far from acceptable, because occurrences remain high and there is a great gap between legislation and reality, generated by the low level of voluntary compliance with the law. Brazilian law provides procedural legal instruments both to compensate the damage caused to the worker's health and to prevent future injuries. In the Judiciary, the idea of prevention is embodied in collective actions, effected through Brazilian class actions. Inhibitory guardianships may both impose improvements to the working environment and determine the interruption of an activity or a ban on a machine that puts workers at risk. Both the Labor Prosecution Office and trade unions have standing to promote this type of action, which may also provide for the payment of compensation for collective moral damage. Objectives: To verify how class actions (known as 'public civil actions'), regulated in the Brazilian legal system to protect diffuse, collective, and homogeneous rights, are being used to protect workers' health and safety. Methods: The author identified and evaluated decisions of the Brazilian Superior Labor Court involving collective actions and work accidents. The timeframe chosen was December 2015. The online jurisprudence database was consulted on the page available for public consultation on the court's website. The categorization of the data considered the result (whether the application was rejected or accepted), the type of request, the amount of compensation, and the author of the case, besides examining the reasoning used by the judges.
Results: The High Court issued 21,948 decisions in December 2015, with 1,448 judgments (6.6%) about work accidents and only 20 (0.09%) on collective actions. After analyzing these 20 decisions, it was found that the judgments granted compensation for collective moral damage (85%) and/or obligations to act, that is, changes to improve prevention and safety (71%). The cases were filed mainly by the Labor Prosecutor (83%), with the remainder filed by unions (17%). The compensation for collective moral damage averaged 250,000 reais (about US$65,000), although a wide range of values was found, and several different situations were repaired by this compensation. This court is the last-instance resource for this kind of lawsuit; all decisions were well founded and partially granted the requests made for working-environment protection. Conclusions: When triggered, the labor court system provides the requested collective protection in class actions. The values of the convictions arbitrated in collective actions are significant and indicate social and economic repercussions, stimulating employers to improve the working environment conditions of their companies. It is necessary to intensify the use of collective actions because they are more efficient for prevention than reparatory individual lawsuits, but they have been underutilized, mainly by unions.

Keywords: Brazilian Class Action, collective action, work accident penalization, workplace accident prevention, workplace protection law

Procedia PDF Downloads 244
185 Computational Analysis of Thermal Degradation in Wind Turbine Spars' Equipotential Bonding Subjected to Lightning Strikes

Authors: Antonio A. M. Laudani, Igor O. Golosnoy, Ole T. Thomsen

Abstract:

Rotor blades of large, modern wind turbines are highly susceptible to downward lightning strikes, as well as to triggering upward lightning; consequently, it is necessary to equip them with an effective lightning protection system (LPS) in order to avoid any damage. The performance of existing LPSs is affected by carbon fibre reinforced polymer (CFRP) structures, which lead to lightning-induced damage in the blades, e.g. via electrical sparks. A solution to prevent internal arcing is to electrically bond the LPS and the composite structures so that they are at the same electric potential. Nevertheless, elevated temperatures arise at the joint interfaces because of high contact resistance, which melts and vaporises some of the epoxy resin matrix around the bonding. The resulting high-pressure gases open up the bonding and can ignite thermal sparks. The objective of this paper is to predict the current density distribution and the temperature field in the adhesive joint cross-section, in order to check whether the resin pyrolysis temperature is reached and any damage is to be expected. The finite element method has been employed to solve both the current and heat transfer problems, which are considered weakly coupled. The mathematical model for the electric current includes the Maxwell-Ampere equation for the induced electric field, solved together with current conservation, while the thermal field is found from the heat diffusion equation. In this way, the current sub-model calculates the Joule heat release for a chosen bonding configuration, whereas the thermal analysis allows determining threshold values of voltage and current density that must not be exceeded in order to keep the temperature across the joint below the pyrolysis temperature, thereby preventing outgassing. In addition, it provides an indication of the minimal number of bonding points.
It is worth mentioning that the numerical procedures presented in this study can be tailored and applied to joints other than adhesive ones for wind turbine blades; for instance, they can be applied to the lightning protection of aerospace bolted joints. They can even be customized to predict the electromagnetic response under lightning strikes of other wind turbine systems, such as nacelle and hub components.
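
The order of magnitude of the problem the FEM model resolves can be seen from a lumped estimate of the contact Joule heating. The sketch below is such a back-of-the-envelope check; every numerical value in it (current, duration, contact resistance, heated mass) is an illustrative assumption, not a figure from the study.

```python
# Lumped-parameter sketch of contact Joule heating at a bonding joint.
# Assumed scenario (illustrative only): a 10 kA impulse-current component
# lasting 50 us through a 1 mOhm contact, heating about 1 g of epoxy
# (specific heat ~ 1100 J/(kg K)).

def joule_heat(current_a, contact_resistance_ohm, duration_s):
    """Energy released at the contact: Q = I^2 * R * t (joules)."""
    return current_a ** 2 * contact_resistance_ohm * duration_s

def adiabatic_temp_rise(q_joules, mass_kg, specific_heat_j_per_kg_k):
    """Worst-case temperature rise if no heat conducts away: dT = Q / (m c)."""
    return q_joules / (mass_kg * specific_heat_j_per_kg_k)

q = joule_heat(10e3, 1e-3, 50e-6)            # 5 J released at the contact
rise = adiabatic_temp_rise(q, 1e-3, 1100.0)  # ~4.5 K for this scenario
```

The FEM analysis refines this picture in two ways the lumped estimate cannot: it resolves where the heat is released (the current density distribution over the joint cross-section) and how it diffuses, so the local peak temperature can be compared against the resin pyrolysis threshold.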

Keywords: carbon fibre reinforced polymer, equipotential bonding, finite element method, FEM, lightning protection system, LPS, wind turbine blades

Procedia PDF Downloads 135
184 Heat Transfer Modeling of 'Carabao' Mango (Mangifera indica L.) during Postharvest Hot Water Treatments

Authors: Hazel James P. Agngarayngay, Arnold R. Elepaño

Abstract:

Mango is the third most important export fruit in the Philippines. Despite the expanding mango trade in the world market, postharvest losses caused by pests and diseases are still prevalent. Many disease control and pest disinfestation methods have been studied and adopted. Heat treatment is necessary to eliminate pests and diseases in order to pass the quarantine requirements of importing countries. During heat treatments, temperature and time are critical because fruits can easily be damaged by over-exposure to heat. Modeling the process enables researchers and engineers to study the behaviour of the temperature distribution within the fruit over time. Understanding physical processes through modeling and simulation also saves time and resources because of reduced experimentation. This research aimed to simulate the heat transfer mechanism and predict the temperature distribution in 'Carabao' mangoes during hot water treatment (HWT) and extended hot water treatment (EHWT). The simulation was performed in ANSYS CFD software, using the ANSYS CFX solver. The simulation process involved model creation, mesh generation, defining the physics of the model, solving the problem, and visualizing the results. Boundary conditions consisted of the convective heat transfer coefficient and a constant free-stream temperature. The three-dimensional energy equation for transient conditions was numerically solved to obtain heat flux and transient temperature values; the solver utilized the finite volume method of discretization. To validate the simulation, actual data were obtained through experiment. The goodness of fit was evaluated using the mean temperature difference (MTD), and a t-test was used to detect significant differences between the data sets. Results showed that the simulations were able to estimate temperatures accurately, with MTDs of 0.50 and 0.69 °C for the HWT and EHWT, respectively, indicating good agreement between the simulated and actual temperature values.
The data included in the analysis were taken at different probe-puncture locations within the fruit. Moreover, t-tests showed no significant differences between the two data sets. Maximum heat fluxes obtained at the beginning of the treatments were 394.15 and 262.77 J·s⁻¹ for HWT and EHWT, respectively. These values decreased abruptly within the first 10 seconds and gradually thereafter. Data on heat flux are necessary in the design of heaters: if the flux is underestimated, the heating component of a machine will not be able to provide the heat required by certain operations, while over-estimation will result in wasted energy and resources. This study demonstrated that the simulation was able to estimate temperatures accurately. Thus, it can be used to evaluate the influence of various treatment conditions on the temperature-time history in mangoes. When combined with information on insect mortality and quality degradation kinetics, it could predict the efficacy of a particular treatment and guide the appropriate selection of treatment conditions. The effect of various parameters on heat transfer rates, such as the boundary and initial conditions as well as the thermal properties of the material, can be systematically studied without performing experiments. Furthermore, the use of ANSYS software in modeling and simulation can be explored for modeling various other systems and processes.
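
The physics being solved, transient conduction with a convective surface boundary, can be illustrated in one dimension with a few lines of explicit finite differences. This is a sketch only: the paper's model is three-dimensional, solved with finite volumes in ANSYS CFX, and all the material and treatment parameters below are rough assumptions for fruit tissue rather than values from the study.

```python
# 1-D explicit finite-difference sketch: a slab initially at ambient
# temperature, suddenly exposed to hot water through a convective (Robin)
# boundary at x = 0. All parameter values are illustrative assumptions.

def simulate(nx=20, length=0.05, k=0.5, rho=1050.0, cp=3800.0,
             h=500.0, t_water=48.0, t_init=25.0, dt=0.05, steps=2000):
    alpha = k / (rho * cp)                 # thermal diffusivity, m^2/s
    dx = length / (nx - 1)
    assert alpha * dt / dx ** 2 < 0.5      # explicit stability criterion
    T = [t_init] * nx
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, nx - 1):         # interior conduction nodes
            Tn[i] = T[i] + alpha * dt / dx ** 2 * (T[i + 1] - 2 * T[i] + T[i - 1])
        # convective boundary at x = 0: -k dT/dx = h (T_water - T_surface)
        Tn[0] = T[0] + 2 * alpha * dt / dx ** 2 * (
            T[1] - T[0] + h * dx / k * (t_water - T[0]))
        Tn[-1] = Tn[-2]                    # symmetry (insulated centre)
        T = Tn
    return T

T = simulate()
# The surface warms toward the water temperature; the interior lags behind,
# which is exactly the temperature-time history the treatment design needs.
```

The same structure, marching the energy equation forward in time with a convective boundary flux, is what the finite volume solver does on the full 3-D mango geometry.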

Keywords: heat transfer, heat treatment, mango, modeling and simulation

Procedia PDF Downloads 226
183 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples

Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges

Abstract:

Soils are at the crossroads of many issues, such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, and human health and well-being. They deliver many ecosystem services that are essential to life on Earth. Therefore, there is a growing demand for soil information at national and global scales. Unfortunately, many countries do not have detailed soil maps, and, where they exist, these maps are generally based on more or less complex and often non-harmonized soil classifications; an estimate of their uncertainty is also often missing. Thus, they are not easy to understand and often not properly used by end-users. Therefore, there is an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed, but depend on the main soil-forming factors: climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using several exhaustive spatial products such as climatic grids, remote sensing products, vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called 'ancillary covariates' that come from other available spatial products. The model is then generalized over grids where soil parameters are unknown in order to predict them, and the prediction performance is validated using various methods. With the growing demand for soil information at national and global scales and the increase in available spatial covariates, national and continental DSM initiatives are continuously increasing.
This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and the databases that are used, the validation techniques, and the main scientific and other issues. Examples from several countries illustrate the variety of products delivered during the last ten years. The scientific production on this topic is continuously increasing, and new models and approaches are developed at an incredible speed. Most digital soil mapping (DSM) products rely mainly on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTFs), in which calibration data come from soil analyses performed in labs or from existing conventional maps. However, some scientific issues remain to be solved, as well as political and legal ones related, for instance, to data sharing and to different laws in different countries. Other issues relate to communication with end-users and education, especially on the use of uncertainty. Overall, the progress is very important, and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues still remain, mainly due to differences in classifications or in laboratory standards between countries; however, numerous initiatives are ongoing at the EU level and also at the global level. All this progress is scientifically stimulating and promising for providing tools to improve and monitor soil quality in countries, in the EU, and at the global level.
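
The calibrate-then-predict workflow at the core of DSM can be sketched in a few lines. National products typically use random forests or similar ML models over dozens of covariates; here a tiny k-nearest-neighbours regressor in pure Python stands in, and all covariate and soil-property values are invented for illustration.

```python
# DSM workflow sketch: calibrate on point observations with environmental
# covariates, then predict a soil property at unsampled grid cells.
# All data below is invented for illustration.

def knn_predict(train_X, train_y, x, k=3):
    """Predict a soil property as the mean of the k closest calibration
    points in covariate space."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, x)), y)
        for row, y in zip(train_X, train_y))
    nearest = [y for _, y in dists[:k]]
    return sum(nearest) / len(nearest)

# Covariates per site: (normalised elevation, normalised mean temperature).
# Target: a hypothetical soil organic carbon content (%).
train_X = [(0.1, 0.9), (0.2, 0.8), (0.9, 0.2), (0.8, 0.1), (0.5, 0.5)]
train_y = [4.0, 3.8, 1.2, 1.4, 2.5]

# Predict at an unsampled grid cell near the first cluster of sites.
soc = knn_predict(train_X, train_y, (0.15, 0.85), k=3)
```

In a real national product, the same loop runs over every cell of the covariate grids, and validation (e.g. cross-validation against held-out observations) supplies the uncertainty estimate the review calls for.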

Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review

Procedia PDF Downloads 157