Search results for: transverse flux PM linear machine
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6821

581 Prioritizing Ecosystem Services for South-Central Regions of Chile: An Expert-Based Spatial Multi-Criteria Approach

Authors: Yenisleidy Martinez Martinez, Yannay Casas-Ledon, Jo Dewulf

Abstract:

The ecosystem services (ES) concept has contributed to drawing attention to the benefits ecosystems generate for people and to how necessary natural resources are for human well-being. The identification and prioritization of ES constitute the first steps in undertaking conservation and valuation initiatives on behalf of people. Additionally, mapping the supply of ES is a powerful tool to support decision making regarding the sustainable management of landscapes and natural resources. In this context, the present study aimed to identify, prioritize and map the primary ES in the Biobio and Nuble regions using a methodology that combines expert judgment, multi-attribute evaluation methods, and Geographic Information Systems (GIS). Firstly, scores for the capacity of different land use/cover types to supply ES and the importance attributed to each service were obtained from experts and stakeholders via an online survey. Afterward, the ES assessment matrix was constructed, and the weighted linear combination (WLC) method was applied to map the overall supply capacity of provisioning, regulating and maintenance, and cultural services. Finally, prioritized ES for the study area were selected and mapped. The results suggest that native forests, wetlands, and water bodies have the highest ES supply capacities, while urban and industrial areas and bare areas have a very low supply of services. On the other hand, fourteen out of twenty-nine services were selected by experts and stakeholders as the most relevant for the regions. The spatial distribution of ES showed that the Andean Range and part of the Coastal Range have the highest ES supply capacity, mostly for regulation and maintenance and cultural ES. This performance is related to the presence of native forests, water bodies, and wetlands in those zones. 
This study provides specific information about the most relevant ES in Biobio and Nuble according to the opinion of local stakeholders, together with the spatial identification of areas with a high capacity to provide services. These findings could serve as a reference for planners and policymakers developing landscape management strategies oriented to preserving the supply of services in both regions.
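As an illustration of the weighted linear combination step described above, the sketch below scores three hypothetical land-cover types against three services; the score matrix and importance weights are invented stand-ins, not values from the study's expert survey.

```python
import numpy as np

# Hypothetical 0-5 supply scores: rows = land-cover types, columns = services
# (rows: native forest, wetland, urban/industrial; values invented for illustration)
scores = np.array([
    [5.0, 4.5, 4.8],
    [4.2, 4.8, 3.9],
    [0.5, 0.3, 1.0],
])
# Stakeholder importance weights for the three services (normalized to sum to 1)
weights = np.array([0.5, 0.3, 0.2])

# Weighted linear combination: overall supply capacity per land-cover type
overall = scores @ weights
print(overall)
```

In a GIS workflow the same product is evaluated per raster cell, with each cell carrying the scores of its mapped land use/cover class.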

Keywords: ecosystem services, expert judgment, mapping, multi-criteria decision making, prioritization

Procedia PDF Downloads 126
580 Short Association Bundle Atlas for Lateralization Studies from dMRI Data

Authors: C. Román, M. Guevara, P. Salas, D. Duclap, J. Houenou, C. Poupon, J. F. Mangin, P. Guevara

Abstract:

Diffusion Magnetic Resonance Imaging (dMRI) allows the non-invasive study of human brain white matter. From diffusion data, it is possible to reconstruct fiber trajectories using tractography algorithms. Our previous work consists of an automatic method for the identification of short association bundles of the superficial white matter (SWM), based on a whole-brain inter-subject hierarchical clustering applied to a HARDI database. The method finds representative clusters of similar fibers, belonging to a group of subjects, according to a distance measure between fibers, using a non-linear registration (DTI-TK). The algorithm performs an automatic labeling based on the anatomy, defined by a cortex mesh parcellated with the FreeSurfer software. The clustering was applied to two independent groups of 37 subjects. The clusters resulting from both groups were compared using a restrictive threshold on the mean distance between each pair of bundles from different groups, in order to keep reproducible connections. In the left hemisphere, 48 reproducible bundles were found, while 43 bundles were found in the right hemisphere. An inter-hemispheric bundle correspondence was then established. The symmetric horizontal reflection of the right bundles was calculated in order to obtain their position in the left hemisphere. Next, the intersection between similar bundles was calculated. Pairs of bundles with a fiber intersection percentage higher than 50% were considered similar. The similar bundles between both hemispheres were fused and symmetrized. We obtained 30 common bundles between hemispheres. An atlas was created with the resulting bundles and used to segment 78 new subjects from another HARDI database, using a distance threshold between 6 and 8 mm according to the bundle length. Finally, a laterality index was calculated based on the bundle volume. 
Seven bundles of the atlas presented right laterality (IP_SP_1i, LO_LO_1i, Op_Tr_0i, PoC_PoC_0i, PoC_PreC_2i, PreC_SM_0i, and RoMF_RoMF_0i) and one presented left laterality (IP_SP_2i); no tendency of lateralization according to brain region was observed. Many factors can affect the results, like tractography artifacts, subject registration, and bundle segmentation. Further studies are necessary in order to establish the influence of these factors and evaluate SWM laterality.
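The abstract computes a laterality index from bundle volume without stating the formula; a common convention, assumed here purely for illustration, is the normalized left-right difference:

```python
def laterality_index(volume_left, volume_right):
    """Normalized laterality index in [-1, 1].

    Positive values indicate left lateralization and negative values right
    lateralization under this (assumed) sign convention; the exact formula
    used in the study is not given in the abstract.
    """
    return (volume_left - volume_right) / (volume_left + volume_right)

# A bundle whose right-hemisphere homologue is larger yields a negative index
print(laterality_index(1200.0, 1800.0))
```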

Keywords: dMRI, hierarchical clustering, lateralization index, tractography

Procedia PDF Downloads 331
579 Design, Numerical Simulation, Fabrication and Physical Experimentation of the Tesla’s Cohesion Type Bladeless Turbine

Authors: M. Sivaramakrishnaiah, D. S. Nasan, P. V. Subhanjeneyulu, J. A. Sandeep Kumar, N. Sreenivasulu, B. V. Amarnath Reddy, B. Veeralingam

Abstract:

Design, numerical simulation, fabrication, and physical experimentation of the Tesla bladeless centripetal turbine for generating electrical power are presented in this research paper. Pressurized air combined with water via a nozzle system is made to pass tangentially through a set of parallel smooth disc surfaces, which impart rotational motion to the discs fastened to a common shaft for power generation. The power generated depends upon the speed of the fluid leaving the nozzle inlet. Physically, due to the laminar boundary layer phenomenon at the smooth disc surface, the high-speed fluid layers away from the plate, moving against the low-speed fluid layers nearer to the plate, develop a tangential drag from the viscous shear forces. This compels the nearer layers to be dragged along with the faster layers, causing the disc to spin. SolidWorks design software and fluid mechanics and machine element design theories were used to compute the mechanical design specifications of turbine parts such as the 48 mm diameter discs, common shaft, central exhaust, plenum chamber, swappable nozzle inlets, etc. Also, ANSYS CFX 2018 was used for the numerical simulation of the physical phenomena encountered in the turbine's operation. When the numerical simulation and physical experimental results were compared, good agreement was found between them, both quantitatively and qualitatively. The sources of input and the size of the discs may affect the power generated and the turbine efficiency, respectively. The results may change if there is a change in the fluid flowing between the discs. Studies of inlet fluid pressure versus turbine efficiency and of the number of discs versus turbine power, based on both sets of results, were carried out to develop the relationships between the inlet and outlet parameters of the turbine. The present research work obtained turbine efficiencies in the range of 7-10%, and for this range, the electrical power output generated was 50-60 W.
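The tangential viscous drag invoked above follows Newton's law of viscosity; the snippet below evaluates the shear stress for invented, order-of-magnitude values (the fluid properties, gap and speed are assumptions for illustration, not measurements from the experiment).

```python
# Newton's law of viscosity: tau = mu * (du/dy)
mu = 1.0e-3    # dynamic viscosity of water [Pa*s] (assumed working fluid)
du = 20.0      # fluid speed relative to the disc surface [m/s] (illustrative)
dy = 0.5e-3    # disc gap / boundary-layer length scale [m] (illustrative)

tau = mu * du / dy   # shear stress dragging the disc surface [Pa]
print(tau)
```

It is this stress, integrated over the disc faces and weighted by the local radius, that supplies the torque spinning the rotor.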

Keywords: tesla turbine, cohesion type bladeless turbine, boundary layer theory, tangential fluid flow, viscous and adhesive forces, plenum chamber, pico hydro systems

Procedia PDF Downloads 87
578 Computationally Efficient Electrochemical-Thermal Li-Ion Cell Model for Battery Management System

Authors: Sangwoo Han, Saeed Khaleghi Rahimian, Ying Liu

Abstract:

Vehicle electrification is gaining momentum, and many car manufacturers promise to deliver more electric vehicle (EV) models to consumers in the coming years. In controlling the battery pack, the battery management system (BMS) must maintain optimal battery performance while ensuring the safety of the battery pack. Tasks related to battery performance include determining state-of-charge (SOC), state-of-power (SOP), state-of-health (SOH), cell balancing, and battery charging. Safety-related functions include making sure cells operate within the specified static and dynamic voltage windows and temperature range, derating power, detecting faulty cells, and warning the user if necessary. The BMS often utilizes an RC circuit model to model a Li-ion cell because of its robustness and low computation cost, among other benefits. Because an equivalent circuit model such as the RC model is not a physics-based model, it can never serve as a prognostic model to predict battery state-of-health or to flag a safety risk before it occurs. A physics-based Li-ion cell model, on the other hand, is more capable, at the expense of computation cost. To avoid the high computation cost associated with a full-order model, many researchers have demonstrated the use of a single particle model (SPM) for BMS applications. One drawback of the single-particle modeling approach is that it forces the use of the average current density in the calculation. The SPM would be appropriate for simulating drive cycles where there is insufficient time to develop a significant current distribution within an electrode. However, under a continuous or high-pulse electrical load, the model may fail to predict the cell voltage or the Li⁺ plating potential. To overcome this issue, a multi-particle reduced-order model is proposed here. 
The use of multiple particles, combined with either linear or nonlinear charge-transfer reaction kinetics, makes it possible to capture the current density distribution within an electrode under any type of electrical load. To keep the computational complexity comparable to that of an SPM, the governing equations are solved sequentially to minimize iterative solving processes. Furthermore, the model is validated against a full-order model implemented in COMSOL Multiphysics.
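For context on the RC equivalent-circuit model the BMS discussion starts from, here is a minimal first-order sketch; the parameter values (OCV, R0, R1, C1) are illustrative assumptions, and a real BMS would also track an SOC-dependent OCV.

```python
import math

def rc_cell_voltage(currents, dt, ocv=3.7, r0=0.01, r1=0.015, c1=2000.0):
    """First-order RC equivalent-circuit cell model (illustrative parameters).

    currents: applied current samples [A], positive = discharge.
    Returns the terminal-voltage trace; OCV is held constant for simplicity.
    """
    a = math.exp(-dt / (r1 * c1))  # exact discretization of the RC branch ODE
    v1 = 0.0                       # polarization voltage across the RC pair
    terminal = []
    for i in currents:
        v1 = a * v1 + r1 * (1.0 - a) * i
        terminal.append(ocv - r0 * i - v1)
    return terminal

# A constant 1 A discharge: the terminal voltage sags as polarization builds up
trace = rc_cell_voltage([1.0] * 60, dt=1.0)
```

The physics-based multi-particle model replaces this lumped circuit with electrochemical governing equations, which is what makes it usable for prognostics at a higher computation cost.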

Keywords: battery management system, physics-based li-ion cell model, reduced-order model, single-particle and multi-particle model

Procedia PDF Downloads 111
577 A Conceptual Model of the 'Driver – Highly Automated Vehicle' System

Authors: V. A. Dubovsky, V. V. Savchenko, A. A. Baryskevich

Abstract:

The current trend in the automotive industry towards automated vehicles is creating new challenges related to human factors. This occurs because the driver is increasingly relieved of the need to be constantly involved in driving the vehicle, which can negatively impact his/her situation awareness when manual control is required, and decrease driving skills and abilities. These new problems need to be studied in order to ensure road safety during the transition towards self-driving vehicles. For this purpose, it is important to develop an appropriate conceptual model of the interaction between the driver and the automated vehicle, which could serve as a theoretical basis for the development of mathematical and simulation models to explore different aspects of driver behaviour in different road situations. Well-known driver behaviour models describe the impact of different stages of the driver's cognitive process on driving performance but do not describe how the driver controls and adjusts his actions. A more complete description of the driver's cognitive process, including the evaluation of the results of his/her actions, will make it possible to model various aspects of the human factor in different road situations more accurately. This paper presents a conceptual model of the 'driver – highly automated vehicle' system based on P. K. Anokhin's theory of functional systems, a theoretical framework for describing internal processes in purposeful living systems based on such notions as the goal and the desired and actual results of purposeful activity. A central feature of the proposed model is a dynamic coupling mechanism between the driver's decision to perform a particular action and the changes in road conditions due to the driver's actions. This mechanism is based on the stage-by-stage evaluation of the deviations of the actual values of the parameters of the driver's action results from their expected values. 
The overall functional structure of the highly automated vehicle in the proposed model includes a driver/vehicle/environment state analyzer to coordinate the interaction between driver and vehicle. The proposed conceptual model can be used as a framework to investigate different aspects of human factors in transitions between automated and manual driving, for future improvements in driving safety, and for understanding how the driver-vehicle interface must be designed for comfort and safety. A major finding of this study is the demonstration that the theory of functional systems is promising and has the potential to describe the interaction of the driver with the vehicle and the environment.

Keywords: automated vehicle, driver behavior, human factors, human-machine system

Procedia PDF Downloads 145
576 The Quantum Theory of Music and Human Languages

Authors: Mballa Abanda Luc Aurelien Serge, Henda Gnakate Biba, Kuate Guemo Romaric, Akono Rufine Nicole, Zabotom Yaya Fadel Biba, Petfiang Sidonie, Bella Suzane Jenifer

Abstract:

The main hypotheses proposed around the definition of the syllable and of music, and of the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies the interest: the debate raises questions that are at the heart of theories on language. It is an inventive, original, and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, translation automation, and artificial intelligence. When you apply this theory to any text of a folk song in a world tone language, you not only piece together the exact melody, rhythm, and harmonies of that song as if you knew it in advance, but also the exact speech of that language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. With the experimentation confirming the theorization, I designed a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software that allows me to collect the data extracted from my mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). 
The translation is done (from writing to writing, from writing to speech, and from writing to music). Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you command the machine a melody of blues, jazz, and world music or variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.

Keywords: language, music, sciences, quantum entanglement

Procedia PDF Downloads 77
575 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Mixed Integration Method: Stability Aspects and Computational Efficiency

Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino

Abstract:

In order to reduce the numerical computations in the nonlinear dynamic analysis of seismically base-isolated structures, a Mixed Explicit-Implicit time integration Method (MEIM) has been proposed. Adopting the explicit, conditionally stable central difference method to compute the nonlinear response of the base isolation system, and the implicit, unconditionally stable Newmark constant average acceleration method to determine the linear response of the superstructure, the proposed MEIM, which is conditionally stable due to the use of the central difference method, avoids the iterative procedure generally required by conventional monolithic solution approaches within each time step of the analysis. The main aim of this paper is to investigate the stability and computational efficiency of the MEIM when employed to perform the nonlinear time history analysis of base-isolated structures with sliding bearings. Indeed, in this case, the critical time step could become smaller than the one needed to define the earthquake excitation accurately, due to the very high initial stiffness values of such devices. The numerical results obtained from nonlinear dynamic analyses of a base-isolated structure with a friction pendulum bearing system, performed using the proposed MEIM, are compared to those obtained with a conventional monolithic solution approach, i.e. the implicit, unconditionally stable Newmark constant average acceleration method employed in conjunction with the iterative pseudo-force procedure. According to the numerical results, in the presented numerical application the MEIM does not have stability problems, since the critical time step is larger than the ground acceleration time step despite the high initial stiffness of the friction pendulum bearings. In addition, compared to the conventional monolithic solution approach, the proposed algorithm preserves its computational efficiency even when it is adopted to perform the nonlinear dynamic analysis with a smaller time step.
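To make the conditional-stability point concrete, here is a minimal central-difference integrator for a single linear degree of freedom; it is a sketch, not the authors' MEIM implementation, which couples this explicit scheme (with a nonlinear isolator restoring force) to Newmark's method for the superstructure.

```python
import math

def central_difference_sdof(m, k, force, dt, n_steps, u0=0.0, v0=0.0):
    """Explicit central-difference integration of m*u'' + k*u = force(t).

    Undamped single-DOF sketch; the scheme is conditionally stable and
    requires dt < 2/omega, i.e. dt < T/pi.
    """
    omega = math.sqrt(k / m)
    assert dt < 2.0 / omega, "time step exceeds the critical value 2/omega"
    a0 = (force(0.0) - k * u0) / m
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * a0   # fictitious displacement at t = -dt
    u = u0
    history = [u0]
    for n in range(n_steps):
        t = n * dt
        u_next = 2.0 * u - u_prev + dt**2 * (force(t) - k * u) / m
        u_prev, u = u, u_next
        history.append(u)
    return history

# Free vibration (omega = 2 rad/s): the response stays bounded since dt << 2/omega
disp = central_difference_sdof(m=1.0, k=4.0, force=lambda t: 0.0,
                               dt=0.001, n_steps=3142, u0=1.0)
```

For a friction pendulum isolator the term -k*u would be replaced by the device's nonlinear restoring force, whose very high initial stiffness is what shrinks the critical time step.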

Keywords: base isolation, computational efficiency, mixed explicit-implicit method, partitioned solution approach, stability

Procedia PDF Downloads 278
574 N-Glycosylation in the Green Microalgae Chlamydomonas reinhardtii

Authors: Pierre-Louis Lucas, Corinne Loutelier-Bourhis, Narimane Mati-Baouche, Philippe Chan Tchi-Song, Patrice Lerouge, Elodie Mathieu-Rivet, Muriel Bardor

Abstract:

N-glycosylation is a post-translational modification taking place in the Endoplasmic Reticulum and the Golgi apparatus, where defined glycan features are added onto proteins at a very specific sequence, Asn-X-Thr/Ser/Cys, where X can be any amino acid except proline. Because it is well-established that these N-glycans play a critical role in protein biological activity and protein half-life, and that a different N-glycan structure may induce an immune response, they are very important for biopharmaceuticals, which are mainly glycoproteins bearing N-glycans. At present, most biopharmaceuticals are produced in mammalian cells like Chinese Hamster Ovary (CHO) cells for their N-glycosylation, which is similar to that of humans, but due to the high production costs, several other species are being investigated as possible alternative systems. For this purpose, the green microalga Chlamydomonas reinhardtii was investigated as a potential production system for biopharmaceuticals. This choice was influenced by the fact that C. reinhardtii is a well-studied microalga which grows fast, with many molecular biology tools available. This organism also produces N-glycans on its endogenous proteins. However, the analysis of the N-glycan structures of this microalga has revealed some differences as compared to humans. Whereas in humans the glycans are processed by key enzymes called N-acetylglucosaminyltransferases I and II (GnTI and GnTII), which add GlcNAc residues to form a GlcNAc₂Man₃GlcNAc₂ core N-glycan, C. reinhardtii lacks those two enzymes and possesses a GnTI-independent glycosylation pathway. Moreover, some enzymes not present in humans, like xylosyltransferases and methyltransferases, are supposed to act on the glycans of C. reinhardtii. Furthermore, a recent structural study by mass spectrometry showed that the N-glycosylation precursor, supposed to be conserved in almost all eukaryotic cells, results in a linear Man₅GlcNAc₂ rather than a branched one in C. reinhardtii. 
In this work, we will discuss the newly released MS information on the C. reinhardtii N-glycan structure and its impact on our attempts to modify the glycans in a human manner. Two strategies will be discussed. The first consists of the study of xylosyltransferase insertional mutants from the CLIP library, in order to remove xyloses from the N-glycans. The second goes further in the humanization by transforming the microalga with exogenous genes from Toxoplasma gondii having activities similar to GnTI and GnTII, with the aim of synthesizing GlcNAc₂Man₃GlcNAc₂ in C. reinhardtii.
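The sequon given above (Asn-X-Thr/Ser/Cys, X ≠ Pro) can be located in any protein sequence with a short scan; the sample sequence below is made up for illustration.

```python
import re

def find_sequons(protein):
    """Return 0-based positions of N-glycosylation sequons N-X-[S/T/C], X != P."""
    # Lookahead keeps overlapping sequons and anchors each match on the Asn (N)
    return [m.start() for m in re.finditer(r"N(?=[^P][STC])", protein)]

print(find_sequons("MKNATLLNPSVNGS"))  # [2, 11]: the N at position 7 is followed by Pro
```

Note that a matching sequon is necessary but not sufficient: whether a site is actually glycosylated depends on the cellular machinery discussed above.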

Keywords: Chlamydomonas reinhardtii, N-glycosylation, glycosyltransferase, mass spectrometry, humanization

Procedia PDF Downloads 177
573 Satellite Multispectral Remote Sensing of Ozone Pollution

Authors: Juan Cuesta

Abstract:

Satellite observation is a fundamental component of air pollution monitoring systems, such as the large-scale Copernicus Programme. Next-generation satellite sensors, in orbit or planned for the future, offer great potential to observe major air pollutants, such as tropospheric ozone, with unprecedented spatial and temporal coverage. However, the satellite approaches developed for remote sensing of tropospheric ozone have been based solely on measurements from a single instrument in a specific spectral range, either thermal infrared or ultraviolet. These methods offer sensitivity to tropospheric ozone no lower than 3 or 4 km above the surface, thus limiting their applications for ozone pollution analysis. Indeed, no current observation of a single spectral domain provides enough information to accurately measure ozone in the atmospheric boundary layer. To overcome this limitation, we have developed a multispectral synergism approach, called 'IASI+GOME2', at the Laboratoire Interuniversitaire des Systèmes Atmosphériques (LISA). This method is based on the synergy of thermal infrared and ultraviolet observations from, respectively, the Infrared Atmospheric Sounding Interferometer (IASI) and the Global Ozone Monitoring Experiment-2 (GOME-2) sensors onboard the MetOp satellites, which have been in orbit since 2007. IASI+GOME2 allowed the first satellite observation of ozone plumes located between the surface and 3 km altitude (what we call the lowermost troposphere), as it offers significant sensitivity in this layer. This represents a major advance for the observation of ozone in the lowermost troposphere and its application to air quality analysis. The ozone abundance derived by IASI+GOME2 shows good agreement with independent ozonesonde observations (a low mean bias, a linear correlation larger than 0.8 and a mean precision of about 16%) around the world during all seasons. 
Using IASI+GOME2, lowermost tropospheric ozone pollution plumes are quantified both in terms of concentrations and in terms of the amounts of ozone photochemically produced during transport, enabling the characterization of ozone pollution events such as those that occurred during the lockdowns linked to the COVID-19 pandemic. The current paper will present the IASI+GOME2 multispectral approach for observing lowermost tropospheric ozone from space, along with an overview of several applications on different continents and at the global scale.

Keywords: ozone pollution, multispectral synergism, satellite, air quality

Procedia PDF Downloads 81
572 Instructors' Willingness, Self-Efficacy Beliefs, Attitudes and Knowledge about Provisions of Instructional Accommodations for Students with Disabilities: The Case of Selected Universities in Ethiopia

Authors: Abdreheman Seid Abdella

Abstract:

This study examined instructors' willingness, self-efficacy beliefs, attitudes and knowledge about provisions of instructional accommodations for students with disabilities in universities. Major concepts used in this study were operationally defined, and some models of disability were reviewed. Questionnaires were distributed to a total of 181 instructors from four universities, and quantitative data were generated. Then, to analyze the data, appropriate methods of data analysis were employed. The result indicated that, on average, instructors had positive willingness, strong self-efficacy beliefs and positive attitudes towards providing instructional accommodations. In addition, the result showed that the majority of participants had a moderate level of knowledge about the provision of instructional accommodations. Concerning the relationship between instructors' background variables and the dependent variables, the result revealed that the location of the university and awareness-raising training about Inclusive Education showed statistically significant relationships with all dependent variables (willingness, self-efficacy beliefs, attitudes and knowledge). On the other hand, gender and college/faculty did not show a statistically significant relationship. In addition, it was found that among the inter-correlations of the dependent variables, the correlation between attitudes and willingness to provide accommodations was the strongest. Furthermore, using multiple linear regression analysis, this study also indicated that predictor variables such as self-efficacy beliefs, attitudes, knowledge and teaching methodology training made statistically significant contributions to predicting the criterion willingness. Predictor variables such as willingness and attitudes made statistically significant contributions to predicting self-efficacy beliefs. Predictor variables such as willingness, the Special Needs Education course and self-efficacy beliefs made statistically significant contributions to predicting attitudes. 
Predictor variables such as Special Needs Education courses, the location of the university and willingness made statistically significant contributions to predicting knowledge. Finally, using exploratory factor analysis, this study showed that four components each represent the underlying constructs of the willingness items and of the self-efficacy items for providing instructional accommodations, five components represent the attitudes-towards-providing-accommodations items, and three components represent the underlying constructs of the knowledge items. Based on the findings, recommendations were made for improving the situation of instructional accommodations in Ethiopian universities.

Keywords: willingness, self-efficacy belief, attitude, knowledge

Procedia PDF Downloads 270
571 Modeling and Optimizing of Sinker Electric Discharge Machine Process Parameters on AISI 4140 Alloy Steel by Central Composite Rotatable Design Method

Authors: J. Satya Eswari, J. Sekhar Babub, Meena Murmu, Govardhan Bhat

Abstract:

Electrical discharge machining (EDM) is an unconventional manufacturing process based on the removal of material from a part by means of a series of repeated electrical sparks, created by electric pulse generators at short intervals between an electrode tool and the part to be machined, immersed in dielectric fluid. In this paper, a study is performed on the influence of the factors of peak current, pulse-on time, interval time and power supply voltage. The output responses measured were material removal rate (MRR) and surface roughness. Finally, the parameters were optimized for maximum MRR with the desired surface roughness. Response surface methodology (RSM) involves establishing mathematical relations between the design variables and the resulting responses and optimizing the process conditions. RSM is not free from problems when it is applied to multi-factor and multi-response situations. A design of experiments (DOE) technique was used to select the optimum machining conditions for machining AISI 4140 using EDM. The purpose of this paper is to determine the optimal factors of the EDM process and to investigate the feasibility of design of experiments techniques. The workpieces used were rectangular plates of AISI 4140 grade steel alloy. The study of optimized settings of key machining factors, like pulse-on time, gap voltage, flushing pressure, input current and duty cycle, on the material removal and surface roughness has been carried out using central composite design. The objective is to maximize the material removal rate (MRR). Central composite design data are used to develop second-order polynomial models with interaction terms. The insignificant coefficients are eliminated from these models by using Student's t-test and the F-test for goodness of fit. CCD is first used to determine the optimal factors of the EDM process for maximizing the MRR. 
The responses are further treated through an objective function to establish the same set of key machining factors that satisfy the optimization problem of the EDM process. The results demonstrate the better performance of CCD-based RSM for optimizing the EDM process.
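As a sketch of the CCD/RSM fitting step described above, the snippet below fits a second-order polynomial with an interaction term by least squares; the coded factor settings and MRR values are fabricated placeholders, not the paper's measurements.

```python
import numpy as np

# Coded settings of two factors from a central composite design (axial points
# at +/-1.414) and invented MRR responses, for illustration only
x1 = np.array([-1.0, -1.0, 1.0, 1.0, -1.414, 1.414, 0.0, 0.0, 0.0, 0.0])
x2 = np.array([-1.0, 1.0, -1.0, 1.0, 0.0, 0.0, -1.414, 1.414, 0.0, 0.0])
mrr = np.array([2.1, 2.4, 3.0, 3.9, 1.8, 3.6, 2.2, 3.1, 2.9, 2.8])

# Second-order model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
design = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coeffs, *_ = np.linalg.lstsq(design, mrr, rcond=None)
predicted = design @ coeffs
```

Insignificant terms would then be dropped via t-tests, and the fitted surface maximized over the coded region to locate the MRR-optimal settings.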

Keywords: electric discharge machining (EDM), modeling, optimization, CCRD

Procedia PDF Downloads 341
570 Electroforming of 3D Digital Light Processing Printed Sculptures Used as a Low Cost Option for Microcasting

Authors: Cecile Meier, Drago Diaz Aleman, Itahisa Perez Conesa, Jose Luis Saorin Perez, Jorge De La Torre Cantero

Abstract:

In this work, two ways of creating small-sized metal sculptures are proposed: the first by means of microcasting and the second by electroforming from models printed in 3D using an FDM (Fused Deposition Modeling) printer or a DLP (Digital Light Processing) printer. It is viable to replace the wax in artistic foundry processes with 3D printed objects. In this technique, the digital models are manufactured with a low-cost FDM 3D printer in polylactic acid (PLA). This material is used because its properties make it a viable substitute for wax within the processes of artistic casting with the lost-wax technique of ceramic shell casting. This technique consists of covering a sculpture of wax, or in this case PLA, with several layers of thermo-resistant material. This material is heated to melt the PLA, obtaining an empty mold that is later filled with the molten metal. It is verified that the PLA models reduce cost and time compared with hand modeling in wax. In addition, one can manufacture parts with 3D printing that are not possible to create with manual techniques. However, the sculptures created with this technique have a size limit. The problem is that when pieces printed in PLA are very small, they lose detail, and the laminar texture hides the shape of the piece. A DLP-type printer allows obtaining more detailed and smaller pieces than the FDM one. Such small models are quite difficult and complex to melt using the lost-wax technique of ceramic shell casting. As alternatives, there are microcasting and electroforming, which specialize in creating small metal pieces such as jewelry. Microcasting is a variant of lost-wax casting that consists of introducing the model into a cylinder into which the refractory material is also poured. The molds are heated in an oven to melt the model and fire the molds. 
Finally, the metal is poured into the still-hot cylinders, which rotate in a machine at high speed to distribute all the metal properly. Because microcasting requires expensive materials and machinery to melt a piece of metal, electroforming is an alternative to this process. Electroforming uses models in different materials; for this study, micro-sculptures printed in 3D are used. These are subjected to an electroforming bath that covers the pieces with a very thin layer of metal. This work investigates the recommended sizes for using 3D printers, both with PLA and resin, and first tests are being done to validate the use of the electroforming process on micro-sculptures printed in resin using a DLP printer.

Keywords: sculptures, DLP 3D printer, microcasting, electroforming, fused deposition modeling

Procedia PDF Downloads 135
569 Artificial Intelligence: Obstacles Patterns and Implications

Authors: Placide Poba-Nzaou, Anicet Tchibozo, Malatsi Galani, Ali Etkkali, Erwin Halim

Abstract:

Artificial intelligence (AI) is a general-purpose technology that is transforming many industries, working life and society by stimulating economic growth and innovation. Despite the huge potential benefits to be generated, the adoption of AI varies from one organization to another, from one region to another, and from one industry to another, due in part to obstacles that can inhibit an organization, or organizations located in a specific geographic region or operating in a specific industry, from adopting AI technology. In this context, understanding these obstacles and their implications for AI adoption from the perspective of configurational theory is important for at least two reasons: (1) understanding these obstacles is the first step in enabling policymakers and providers to make informed decisions about stimulating AI adoption; (2) most studies have investigated obstacles or challenges of AI adoption in isolation, under linear assumptions, while configurational theory offers a holistic and multifaceted way of investigating the intricate interactions between perceived obstacles and barriers, assessing their synergistic combinations under assumptions of non-linearity and yielding insights that would otherwise be out of the scope of studies investigating these obstacles in isolation. This study pursues two objectives: (1) characterize organizations by uncovering the typical profiles of combinations of 15 internal and external obstacles that may prevent organizations from adopting AI technology; (2) assess the variation in intensity of AI adoption associated with each configuration. We used data from a survey of AI adoption by organizations conducted throughout the EU27, Norway, Iceland and the UK (N=7549). Cluster analysis and discriminant analysis helped uncover configurations of organizations based on the 15 obstacles, including eight external and seven internal.
Second, we compared the clusters according to AI adoption intensity using an analysis of variance (ANOVA) and a Tamhane T2 post hoc test. The study uncovers three strongly separated clusters of organizations based on perceived obstacles to AI adoption, labeled according to their magnitude of perceived obstacles: (1) Cluster I, high level of perceived obstacles (N=2449, 32.4%); (2) Cluster II, low level of perceived obstacles (N=1879, 24.9%); (3) Cluster III, moderate level of perceived obstacles (N=3221, 42.7%). The proposed taxonomy goes beyond the normative understanding of perceived obstacles to AI adoption and their implications: it provides a well-structured and parsimonious lens that is useful for policymakers, AI technology providers, and researchers. Surprisingly, the ANOVAs revealed that the “high level of perceived obstacles” cluster was associated with a significantly high intensity of AI adoption.
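The two-step analysis described above (clustering organizations on their obstacle ratings, then testing whether adoption intensity differs across clusters) can be sketched as follows. This is a minimal illustration on synthetic data: the rating scale, cluster count seeding, and the `adoption` variable are assumptions, not the survey's actual variables, and SciPy does not provide the Tamhane T2 post hoc test, so only the omnibus ANOVA is shown.

```python
# Sketch: k-means clustering on 15 obstacle ratings, then one-way ANOVA
# of AI-adoption intensity across the resulting clusters. Synthetic data.
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
obstacles = rng.integers(1, 6, size=(300, 15)).astype(float)  # 15 obstacle ratings per org
adoption = rng.normal(5, 2, size=300)                         # AI-adoption intensity

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(obstacles)
groups = [adoption[labels == k] for k in range(3)]
f_stat, p_val = stats.f_oneway(*groups)  # omnibus test only; Tamhane T2 is not in SciPy
print(f"F = {f_stat:.2f}, p = {p_val:.3f}")
```

With real survey data one would also standardize the ratings and validate the cluster count (e.g., silhouette scores) before the post hoc comparisons.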

Keywords: artificial intelligence (AI), obstacles, adoption, taxonomy

Procedia PDF Downloads 106
568 The Influence of Cognitive Load in the Acquisition of Words through Sentence or Essay Writing

Authors: Breno Barreto Silva, Agnieszka Otwinowska, Katarzyna Kutylowska

Abstract:

Research comparing lexical learning following the writing of sentences and of longer texts with keywords is limited and contradictory. One possibility is that the recursivity of writing may enhance processing and increase lexical learning; another is that the higher cognitive load of complex-text writing (e.g., essays), at least when timed, may hinder the learning of words. In our study, we selected 2 sets of 10 academic keywords matched for part of speech, length (number of characters), frequency (SUBTLEXus), and concreteness, and we asked 90 L1-Polish advanced-level English majors to use the keywords when writing sentences, timed essays (60 minutes), or untimed essays. First, all participants wrote a timed Control essay (60 minutes) without keywords. Then different groups produced Timed essays (60 minutes; n=33), Untimed essays (n=24), or Sentences (n=33) using the two sets of glossed keywords (counterbalanced). The comparability of the participants in the three groups was ensured by matching them for proficiency in English (LexTALE) and for several measures derived from the Control essay: VocD (assessing productive lexical diversity), normed errors (assessing productive accuracy), words per minute (assessing productive written fluency), and holistic scores (assessing overall quality of production). We measured lexical learning (depth and breadth) via an adapted Vocabulary Knowledge Scale (VKS) and a free-association test. Cognitive load was measured in the three essays (Control, Timed, Untimed) using the normed number of errors and holistic scores (TOEFL criteria). The number of errors and essay scores were obtained from two raters (interrater reliability Pearson’s r=.78–.91). Generalized linear mixed models showed no difference in the breadth and depth of keyword knowledge after writing Sentences, Timed essays, and Untimed essays.
The task-based measurements showed that Control and Timed essays had similar holistic scores, but that Untimed essays were of better quality than Timed essays. Untimed essays were also the most accurate, and Timed essays the most error-prone. In conclusion, using keywords in Timed, but not Untimed, essays increased cognitive load, leading to more errors and lower quality. Still, writing sentences and essays yielded similar lexical learning, and the differences in cognitive load between Timed and Untimed essays did not affect lexical acquisition.
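The mixed-effects analysis mentioned above can be sketched with a minimal model: a random intercept per participant and writing condition as a fixed effect. The data, the `score` outcome, and the formula are placeholders, not the authors' actual VKS-based models.

```python
# Minimal linear mixed model sketch: random intercept per participant,
# writing condition (sentence / timed / untimed) as fixed effect. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_items = 30, 10
condition = rng.choice(["sentence", "timed", "untimed"], size=n_subj)  # one condition per subject
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_items),
    "condition": np.repeat(condition, n_items),
    "score": rng.normal(3.0, 1.0, size=n_subj * n_items),  # placeholder vocabulary score
})

fit = smf.mixedlm("score ~ condition", df, groups=df["subject"]).fit()
print(fit.params)
```

The authors' models were generalized (not strictly linear) mixed models, and would typically add crossed random effects for keywords; `statsmodels`' `MixedLM` is used here only as the nearest readily available sketch.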

Keywords: learning academic words, writing essays, cognitive load, English as an L2

Procedia PDF Downloads 73
567 Assessment of Biochemical Marker Profiles and Their Impact on Morbidity and Mortality of COVID-19 Patients in Tigray, Ethiopia

Authors: Teklay Gebrecherkos, Mahmud Abdulkadir

Abstract:

The emergence and subsequent rapid worldwide spread of the COVID-19 pandemic have posed a global crisis, with a tremendously increasing burden of infection, morbidity, and mortality risks. Recent studies have suggested that severe cases of COVID-19 are characterized by massive biochemical, hematological, and inflammatory alterations whose synergistic effect is estimated to progress to multiple organ damage and failure. In this regard, biochemical monitoring of COVID-19 patients, based on comprehensive laboratory assessments and findings, is expected to play a crucial role in effective clinical management and in improving the survival rates of patients. However, biochemical markers that can inform COVID-19 patient risk stratification and predict clinical outcomes are currently scarce. This study aims to investigate the profiles of common biochemical markers and their influence on the severity of COVID-19 infection in Tigray, Ethiopia. Methods: A laboratory-based cross-sectional study was conducted from July to August 2020 at the Quiha College of Engineering, Mekelle University COVID-19 isolation and treatment center. Sociodemographic and clinical data were collected using a structured questionnaire. Whole blood was collected from each study participant, and serum samples were separated after delivery to the laboratory. Hematological biomarkers were analyzed using a FACS count, while organ function tests and serum electrolytes were analyzed with ion-selective electrode methods on a Cobas 6000 series machine. Data were analyzed using SPSS v20. Results: A total of 120 SARS-CoV-2 patients were enrolled during the study. The participants ranged between 18 and 91 years, with a mean age of 52 (±108.8). The largest age group (40%) of participants were aged 60 and above. Patients with multiple comorbidities developed severe COVID-19, though the association was not statistically significant (p=0.34).
Mann-Whitney U test analysis showed that biochemical markers such as neutrophil count (p=0.003), AST level (p=0.050), serum creatinine (p<0.001), and serum sodium (p=0.015) were significantly associated with severe COVID-19 disease as compared to non-severe disease. Conclusion: The severity of COVID-19 was associated with higher age, the organ function markers AST and creatinine, serum Na+, and an elevated total neutrophil count. Thus, further studies need to be conducted to evaluate the alterations of biochemical biomarkers and their impact on COVID-19.
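The group comparison reported above can be sketched with a Mann-Whitney U test of one biomarker between severe and non-severe groups; the creatinine-like values below are invented for illustration, not the study's measurements.

```python
# Sketch: nonparametric comparison of a biomarker (e.g., serum creatinine,
# mg/dL) between severe and non-severe COVID-19 groups. Made-up values.
from scipy.stats import mannwhitneyu

severe     = [1.4, 1.9, 2.3, 1.7, 2.8, 1.6, 2.1]
non_severe = [0.8, 1.0, 0.9, 1.1, 0.7, 1.2, 0.9]

u_stat, p_val = mannwhitneyu(severe, non_severe, alternative="two-sided")
print(f"U = {u_stat}, p = {p_val:.4f}")
```

The Mann-Whitney test is appropriate here because laboratory markers are often skewed, so comparing distributions by ranks avoids the normality assumption of a t-test.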

Keywords: COVID-19, biomarkers, mortality, Tigray, Ethiopia

Procedia PDF Downloads 39
566 Social Participation and Associated Life Satisfaction among Older Adults in India: Moderating Role of Marital Status and Living Arrangements

Authors: Varsha Pandurang Nagargoje, K. S. James

Abstract:

Background: Social participation is considered one of the central components of successful and healthy aging. This study aimed to examine the moderating role of marital status and living arrangements in the relationship between social participation and life satisfaction, along with other potential factors associated with the life satisfaction of Indian older adults. Method: For the analyses, a nationally representative sample of 31,464 adults aged ≥60 years was extracted from the Longitudinal Ageing Study in India (LASI) wave 1, 2017-18. Descriptive statistics and bivariate analyses were performed to determine the proportion of life satisfaction. A first set of multivariable linear regression analyses examined Diener’s Satisfaction with Life Scale and its association with various predictor variables, including social participation, marital status, living arrangements, and socio-demographic, economic, and health-related variables. The second and third sets of regressions investigated the moderating roles of marital status and living arrangements, respectively, in the association between social participation and level of life satisfaction among Indian older adults. Results: Overall, the proportion of life satisfaction among older men was relatively higher than among older women across most background characteristics. The regression results stressed the importance for life satisfaction of older adults’ involvement in social participation [β = 0.39, p < 0.05], being in a marital union [β = 0.68, p < 0.001], and co-residential living arrangements, either with the spouse only [β = 1.73, p < 0.001] or with other family members [β = 2.18, p < 0.001]. The results also showed that some other factors were significant for life satisfaction: in particular, increased age, a higher level of educational status, MPCE quintile, and caste category.
A higher risk of life dissatisfaction among Indian older adults exposed to vulnerabilities such as tobacco consumption, poor self-rated health, and difficulty in performing ADL and IADL was of major concern. The interaction effects of social participation with marital status and with living arrangements showed that, irrespective of involvement in social participation, being currently married and co-residing either with the spouse only or with other family members remained important modifiable factors for life satisfaction. Conclusion: It would be crucial for policymakers and practitioners to advocate social policy programs and service delivery oriented towards meaningful social connections, especially for those Indian older adults who stay alone or are not currently in a marital union, to enhance their overall life satisfaction.
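The moderation analysis described above amounts to a regression with an interaction term. The sketch below fits a deliberately simplified OLS version on simulated data; the variable names, coefficients, and two-level codings are assumptions standing in for the paper's richer multivariable models on LASI data.

```python
# Sketch: life satisfaction regressed on social participation, marital
# status, and their interaction (the moderation term). Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "participation": rng.integers(0, 2, n),  # 1 = socially participates
    "married": rng.integers(0, 2, n),        # 1 = currently married
})
df["satisfaction"] = (20 + 0.4 * df.participation + 0.7 * df.married
                      + 0.5 * df.participation * df.married
                      + rng.normal(0, 2, n))

fit = smf.ols("satisfaction ~ participation * married", data=df).fit()
print(fit.params)  # the participation:married row is the moderation effect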

Keywords: Indian, older adults, social participation, life satisfaction, marital status, living arrangement

Procedia PDF Downloads 129
565 Prevalence of Breast Cancer Molecular Subtypes at a Tertiary Cancer Institute

Authors: Nahush Modak, Meena Pangarkar, Anand Pathak, Ankita Tamhane

Abstract:

Background: Breast cancer is a leading cause of cancer morbidity and mortality among women. This study presents a statistical analysis of a cohort of over 250 patients with breast cancer diagnosed by oncologists using immunohistochemistry (IHC). IHC was performed using ER, PR, HER2, and Ki-67 antibodies. Materials and methods: Formalin-fixed, paraffin-embedded tissue samples were obtained surgically, and standard protocols were followed for fixation, grossing, tissue processing, embedding, cutting, and IHC. A Ventana BenchMark XT machine was used for automated IHC of the samples. The antibodies used were supplied by F. Hoffmann-La Roche Ltd. Statistical analysis was performed using SPSS for Windows; the tests performed were the chi-squared test and correlation tests with p<.01. The raw data were collected and provided by the National Cancer Institute, Jamtha, India. Results: Luminal B was the most prevalent molecular subtype of breast cancer at our institute, as confirmed by a chi-squared test of homogeneity of distribution. A worse prognosis in breast cancer depends upon the expression of Ki-67 and HER2 protein in cancerous cells; our study, conducted at p<.01, observed significant dependence. There exists no dependence of age on the molecular subtype of breast cancer; similarly, age is an independent variable when considering Ki-67 expression. A chi-squared test performed on patients’ human epidermal growth factor receptor 2 (HER2) statuses showed strong dependence between the percentage of Ki-67 expression and HER2 (+/-) status, indicating that the Ki-67 value depends upon HER2 expression in cancerous cells (p<.01). Surprisingly, dependence was also observed between Ki-67 and PR at p<.01, showing that progesterone receptor (PR) proteins are over-expressed when there is an elevation in the expression of the Ki-67 protein.
Conclusion: We conclude that Luminal B is the most prevalent molecular subtype at the National Cancer Institute, Jamtha, India. No significant correlation was found between age and Ki-67 expression in any molecular subtype, and no dependence or correlation exists between patients’ age and molecular subtype. We also found that, when the diagnosis is Luminal A, no patient out of the cohort of 257 shows a Ki-67 value >14%. Statistically, extremely significant values were observed for the dependence of PR+HER2- and PR-HER2+ scores on Ki-67 expression (p<.01). HER2 is an important prognostic factor in breast cancer: the chi-squared test for HER2 and Ki-67 shows that the expression of Ki-67 depends upon HER2 status. Moreover, Ki-67 cannot be used as a standalone prognostic factor for determining breast cancer.
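The dependence test described above is a chi-squared test on a contingency table of HER2 status against Ki-67 category. The counts below are invented to make the sketch runnable; they are not the institute's data, and the 14% Ki-67 cut-off is borrowed from the Luminal A/B distinction mentioned in the abstract.

```python
# Sketch: chi-squared test of independence between HER2 status and
# dichotomized Ki-67 expression. Invented counts.
import numpy as np
from scipy.stats import chi2_contingency

#                 Ki-67 <= 14%   Ki-67 > 14%
table = np.array([[60,           25],    # HER2-
                  [15,           57]])   # HER2+
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```

A small p-value rejects independence, i.e., it supports the paper's claim that Ki-67 expression depends on HER2 status; `chi2_contingency` applies Yates' continuity correction by default for 2×2 tables.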

Keywords: breast cancer molecular subtypes, correlation, immunohistochemistry, Ki-67 and HR, statistical analysis

Procedia PDF Downloads 123
564 Integrated Human Resources and Work Environment Management System

Authors: Loreta Kaklauskiene, Arturas Kaklauskas

Abstract:

The Integrated Human Resources and Work Environment Management (HOWE) System optimises employee productivity, improves the work environment, and, at the same time, meets the employer’s strategic goals. The HOWE system has been designed to ensure an organisation can successfully compete in the global market, thanks to the high performance of its employees. The HOWE system focuses on raising workforce productivity and improving work conditions to boost employee performance and motivation. The methods used in our research are linear correlation, INVAR multiple criteria analysis, digital twin, and affective computing. The HOWE system is based on two patents issued in Lithuania (LT 6866, LT 6841) and one European Patent application (No: EP 4 020 134 A1). Our research analyses ways to make human resource management more efficient and boost labour productivity by improving and adapting a personalised work environment. The efficiency of human capital and labour productivity can be increased by applying personalised workplace improvement systems that can optimise lighting colours and intensity, scents, data, information, knowledge, activities, media, games, videos, music, air pollution, humidity, temperature, vibrations, and other workplace aspects. HOWE generates and maintains a personalised workspace for an employee, taking into account the person’s affective, physiological and emotional (APSE) states. The purpose of this project was to create a HOWE for the customisation of quality control in smart workspaces taking into account the user’s APSE states in an integrated manner as a single unit. This customised management of quality control covers the levels of lighting and colour intensities, scents, media, information, activities, learning materials, games, music, videos, temperature, energy efficiency, the carbon footprint of a workspace, humidity, air pollution, vibrations and other aspects of smart spaces. 
The system is based on digital twin technology, seen as a logical extension of building information modelling (BIM).

Keywords: human resource management, health economics, work environment, organizational behaviour and employee productivity, prosperity in work, smart system

Procedia PDF Downloads 75
563 The Use of Technology in Theatrical Performances as a Tool of Audience’s Engagement

Authors: Chrysoula Bousiouta

Abstract:

Throughout the history of theatre, technology has played an important role, both influencing the relationship between performance and audience and offering different kinds of experiences. The use of technology dates back to ancient times, with the introduction of artifacts such as the “deus ex machina” in ancient Greek theatre. Taking into account the key techniques and experiences used throughout history, this paper investigates how technology, through new media, influences contemporary theatre. In the context of this research, technology is defined as projections, audio environments, video-projections, sensors, and tele-connections, all alongside the performance, challenging the audience’s participation. The theoretical framework of the research covers, besides the history of theatre, the theory of the “experience economy” that took over from the service and goods economy. The research is based on a qualitative and comparative analysis of two case studies, Contact Theatre in Manchester (United Kingdom) and Bios in Athens (Greece). The data selection includes desk research and is complemented with semi-structured interviews. Building on the results of the research, one could claim that the intended experience of modern/contemporary theatre is that of engagement. In this context, technology, as defined above, plays a leading role in creating it. This experience passes through and exists in the middle of the realms of entertainment, education, estheticism and escapism. Furthermore, it is observed that nowadays theatre is not only about acting but also about performing; performances are unfinished without the participation of the audience. Both case studies try to achieve the experience of engagement through practices that promote the attraction of attention, the increase of imagination, interaction, intimacy and true activity.
These practices are achieved through the script, the scenery, the language and the environment of a performance. Contact and Bios consider technology an intimate tool for accomplishing the above, and they make extended use of it. The research compiles a notable record of the technological techniques that modern theatres use. The use of technology, inside or outside the limits of film techniques, helps to rivet the attention of the audience, to make performances enjoyable, to give the sense of the “unfinished”, or to stage things that take place around the spectators and force them to take action, becoming spect-actors. The advantage of technology is that it can be used as a hook for interaction at all stages of a performance. Further research in the field could involve exploring alternative ways of binding technology and theatre or analyzing how a performance is perceived through the use of technological artifacts.

Keywords: experience of engagement, interactive theatre, modern theatre, performance, technology

Procedia PDF Downloads 250
562 Shoreline Variation with Construction of a Pair of Training Walls, Ponnani Inlet, Kerala, India

Authors: Jhoga Parth, T. Nasar, K. V. Anand

Abstract:

An idealized definition of the shoreline is that it is the zone of coincidence of three spheres: atmosphere, lithosphere, and hydrosphere. Despite its apparent simplicity, this definition is in practice a challenge to apply. In reality, the shoreline location deviates continually through time because of various dynamic factors such as wave characteristics, currents, coastal orientation and bathymetry, which make the shoreline volatile. This necessitates monitoring the shoreline on a temporal basis. Even if a shoreline’s nature is understood at a particular coastal stretch, the same trend need not hold at another location, though it belongs to the same sea front. Shoreline change is hence a local phenomenon and has to be studied intensively, considering as many of the factors involved as possible. Erosion and accretion of sediment are aspects of a shoreline’s nature that need to be quantified, compared with preceding variations, and understood before implementing any coastal projects. In recent years, the advent of the Global Positioning System (GPS) and Geographic Information Systems (GIS) has provided an emerging tool to quantify intra- and inter-annual rates of sediment accretion and erosion, with savings in time and manpower compared with conventional methods. Remote sensing data, on the other hand, pave the way to acquiring historical data sets at higher resolution where field data are unavailable. Short-term and long-term shoreline change can be accurately tracked and monitored using software residing in GIS, the Digital Shoreline Analysis System (DSAS), developed by the United States Geological Survey (USGS). In the present study, using DSAS, the End Point Rate (EPR) is calculated to analyze intra-annual changes, and the Linear Regression Rate (LRR) is adopted to study inter-annual changes of the shoreline. The shoreline changes are quantified for the scenario during the construction of a breakwater in the Ponnani river inlet along the Kerala coast, India.
Ponnani is a major fishing and landing center located at 10°47’12.81”N and 75°54’38.62”E in the Malappuram district of Kerala, India. The rates of erosion and accretion are explored using satellite and field data. The full paper contains the rates of shoreline change, whose analysis provides an understanding of the behavior of the inlet at the study area during the construction of the training walls.
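The two DSAS rate statistics named above are simple to state for a single transect: EPR divides the net shoreline movement between the oldest and newest positions by the elapsed time, while LRR is the slope of a least-squares line through all positions. The dates and baseline distances below are illustrative, not the Ponnani measurements.

```python
# Sketch of DSAS's End Point Rate (EPR) and Linear Regression Rate (LRR)
# for one transect. Positions are metres from a baseline; invented data.
import numpy as np

years     = np.array([2000.0, 2005.0, 2010.0, 2015.0, 2020.0])
positions = np.array([120.0, 116.5, 114.0, 110.2, 107.5])  # retreating shoreline

epr = (positions[-1] - positions[0]) / (years[-1] - years[0])  # m/yr, two endpoints only
lrr = np.polyfit(years, positions, 1)[0]                       # m/yr, all surveys
print(f"EPR = {epr:.2f} m/yr, LRR = {lrr:.2f} m/yr")
```

Negative rates indicate erosion. EPR is sensitive to errors in just the two endpoint surveys, which is why LRR is preferred for multi-epoch (inter-annual) records.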

Keywords: DSAS, end point rate, field measurements, geo-informatics, shoreline variation

Procedia PDF Downloads 257
561 CSoS-STRE: A Combat System-of-System Space-Time Resilience Enhancement Framework

Authors: Jiuyao Jiang, Jiahao Liu, Jichao Li, Kewei Yang, Minghao Li, Bingfeng Ge

Abstract:

Modern warfare has transitioned from the paradigm of isolated combat forces to system-to-system confrontations due to advancements in combat technologies and application concepts. A combat system-of-systems (CSoS) is a combat network composed of independently operating entities that interact with one another to provide overall operational capabilities. Enhancing the resilience of CSoS is garnering increasing attention due to its significant practical value in optimizing network architectures, improving network security and refining operational planning. Accordingly, a unified framework called CSoS space-time resilience enhancement (CSoS-STRE) is proposed, which enhances the resilience of CSoS by incorporating spatial features. Firstly, a multilayer spatial combat network model is constructed, incorporating an information layer that depicts the interrelations among combat entities based on the OODA loop, along with a spatial layer that considers the spatial characteristics of equipment entities, thereby accurately reflecting the actual combat process. Secondly, building upon the combat network model, a spatiotemporal resilience optimization model is proposed, which reformulates the resilience optimization problem as a classical linear optimization model with spatial features. Furthermore, the model is extended from scenarios without obstacles to those with obstacles, further emphasizing the importance of spatial characteristics. Thirdly, a resilience-oriented recovery optimization method based on an improved non-dominated sorting genetic algorithm II (R-INSGA) is proposed to determine the optimal recovery sequence for the damaged entities. This method not only considers spatial features but also provides the optimal travel path for multiple recovery teams. Finally, the feasibility, effectiveness, and superiority of CSoS-STRE are demonstrated through a case study.
Simultaneously, under deliberate attack conditions based on degree centrality and maximum operational loop performance, the proposed CSoS-STRE method is compared with six baseline recovery strategies, which are based on performance, time, degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. The comparison demonstrates that CSoS-STRE achieves faster convergence and superior performance.

Keywords: space-time resilience enhancement, resilience optimization model, combat system-of-systems, recovery optimization method, no-obstacles and obstacles

Procedia PDF Downloads 15
560 Peptide-Based Platform for Differentiation of Antigenic Variations within Influenza Virus Subtypes (Flutype)

Authors: Henry Memczak, Marc Hovestaedt, Bernhard Ay, Sandra Saenger, Thorsten Wolff, Frank F. Bier

Abstract:

Influenza viruses cause flu epidemics every year and serious pandemics at larger time intervals. The only cost-effective protection against influenza is vaccination. Due to rapid mutation, new subtypes continuously appear, which requires annual reimmunization. For a correct vaccination recommendation, the circulating influenza strains have to be detected promptly and exactly, and characterized according to their antigenic properties. During the 2016/17 flu season, a wrong vaccination recommendation was given because of the long time interval between the identification of the relevant influenza vaccine strains and the outbreak of the flu epidemic during the following winter. Due to such recurring incidents of vaccine mismatch, there is a great need to speed up the process chain from identifying the right vaccine strains to their administration. The monitoring of subtypes as part of this process chain is carried out by national reference laboratories within the WHO Global Influenza Surveillance and Response System (GISRS). To this end, thousands of viruses from patient samples (e.g., throat smears) are isolated and analyzed each year. Currently, this analysis involves complex and time-intensive (several weeks) animal experiments to produce specific hyperimmune sera in ferrets, which are necessary for the determination of the antigen profiles of circulating virus strains. These tests also present difficulties in standardization and reproducibility, which restricts the significance of the results. To replace this test, a peptide-based assay for influenza virus subtyping from corresponding virus samples was developed. The differentiation of the viruses is achieved by a set of specifically designed peptidic recognition molecules which interact differently with the different influenza virus subtypes. The differentiation of influenza subtypes is performed by pattern recognition guided by machine learning algorithms, without any animal experiments.
Synthetic peptides are immobilized in multiplex format on various platforms (e.g., 96-well microtiter plates, microarrays). Afterwards, the viruses are incubated and analyzed, comparing different signaling mechanisms and a variety of assay conditions. Differentiation of a range of influenza subtypes, including H1N1, H3N2 and H5N1, as well as fine differentiation of single strains within these subtypes, is possible using the peptide-based subtyping platform. Thereby, the platform could be capable of replacing the current antigenic characterization of influenza strains using ferret hyperimmune sera.

Keywords: antigenic characterization, influenza-binding peptides, influenza subtyping, influenza surveillance

Procedia PDF Downloads 156
559 A Broadband Tri-Cantilever Vibration Energy Harvester with Magnetic Oscillator

Authors: Xiaobo Rui, Zhoumo Zeng, Yibo Li

Abstract:

A novel tri-cantilever energy harvester with a magnetic oscillator is presented, which can convert ambient vibration into electrical energy to power low-power devices such as wireless sensor networks. The most common way to harvest vibration energy is based on the use of linear resonant devices such as the cantilever beam, since this structure creates the highest strain for a given force. The highest efficiency is achieved when the resonance frequency of the harvester matches the vibration frequency; the limitation of the structure is its narrow effective bandwidth. To overcome this limitation, this article introduces a broadband tri-cantilever harvester with nonlinear stiffness. This energy harvester consists of three thin cantilever beams, vertically arranged, with neodymium (NdFeB) magnets at their free ends and a fixed base at the other end. The three cantilevers have different resonant frequencies by being designed with different thicknesses, so an advantage of multiple resonant frequencies, similar to that of a piezoelectric cantilever array structure, is obtained. To achieve broadband energy harvesting, magnetic interaction is used to introduce nonlinear system stiffness and tune the resonant frequency to match the excitation. Since the three cantilever tips are all free and the magnetic force is distance-dependent, the resonant frequencies change in a complex way with the vertical vibration of the free ends. Both a model and an experiment were built. The electromechanically coupled lumped-parameter model is presented, with an electromechanical formulation and analytical expressions for the coupled nonlinear vibration response and voltage response. The entire structure was fabricated and mechanically attached to an electromagnetic shaker as a vibrating body via the fixed base, in order to couple the vibrations to the cantilevers. The cantilevers are bonded with piezoelectric macro-fiber composite (MFC) material (model M8514P2).
The cantilevers measure 120 × 20 mm², with thicknesses of 1 mm, 0.8 mm and 0.6 mm, respectively. The prototype generator has a measured performance of 160.98 mW effective electrical power and 7.93 V DC output voltage at an excitation level of 10 m/s². A 130% increase in the operating bandwidth is achieved. This device is promising for supporting low-power devices, peer-to-peer wireless nodes, and small-scale wireless sensor networks in ambient vibration environments.
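The claim that the three thicknesses give three distinct resonant frequencies follows from Euler-Bernoulli beam theory: the first-mode frequency of a clamped-free rectangular beam scales linearly with thickness. The sketch below evaluates that formula for the stated 120 mm length and three thicknesses; the substrate material properties (E ≈ 200 GPa, ρ ≈ 7850 kg/m³, i.e., a spring-steel-like substrate, unloaded tip) are assumptions, since the paper does not state them here, and the tip magnets would lower the real frequencies.

```python
# First-mode resonance of a rectangular clamped-free (cantilever) beam,
# Euler-Bernoulli theory: f1 = (lambda1^2 / 2*pi) * sqrt(E*I / (rho*A*L^4)),
# which for a rectangular section reduces to the expression below.
import math

def cantilever_f1(length, thickness, E=200e9, rho=7850):
    """First-mode frequency (Hz); E in Pa, rho in kg/m^3, dimensions in m."""
    lam = 1.875104  # first eigenvalue of the clamped-free beam
    return (lam**2 / (2 * math.pi)) * math.sqrt(E * thickness**2 / (12 * rho)) / length**2

for h in (1e-3, 0.8e-3, 0.6e-3):
    print(f"h = {h * 1e3:.1f} mm -> f1 = {cantilever_f1(0.120, h):.1f} Hz")
```

Because beam width cancels out of the rectangular-section formula, thickness alone separates the three resonances, which is exactly the design choice the abstract describes.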

Keywords: tri-cantilever, ambient vibration, energy harvesting, magnetic oscillator

Procedia PDF Downloads 154
558 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Forensic DNA analysis has received much attention over the last three decades due to its incredible usefulness in human identification. The statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that cannot be ascribed to an allele of a potential contributor and is considered an artefact presumed to arise from miscopying or slippage during the PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, calculated relative to the corresponding parent allele height. The analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts like stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination, significantly enhancing the use of continuous peak height information and resulting in more efficient, reliable interpretations. Therefore, a sound methodology to distinguish between stutters and real alleles is essential for the accuracy of the interpretation, and any such method has to be able to model stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools in the clustering and classification of data and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is reflected by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in finite mixture models, however, is practically difficult, even though the calculations are relatively simple.
Infinite mixture models, in contrast, do not require the user to specify the number of components. Instead, a Dirichlet process, an infinite-dimensional generalization of the Dirichlet distribution, is used as a prior, sidestepping the choice of the number of components. The Chinese restaurant process (CRP), the stick-breaking process, and the Pólya urn scheme are frequently used representations of the Dirichlet process prior in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling the stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
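To make the CRP's role as a prior over partitions concrete, here is a minimal toy sampler: item i joins an existing cluster ("table") with probability proportional to its occupancy, or opens a new one with probability proportional to the concentration parameter alpha. This is a generic illustration, not the authors' model or modification.

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a partition of n items under the Chinese restaurant process."""
    rng = random.Random(seed)
    counts = []       # counts[k] = number of items currently at "table" k
    assignment = []
    for _ in range(n):
        # Unnormalised probabilities: existing tables, then a new table.
        weights = counts + [alpha]
        r = rng.uniform(0.0, sum(weights))
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(counts):  # the draw landed on the "new table" slot
            counts.append(0)
        counts[k] += 1
        assignment.append(k)
    return assignment

print(crp_partition(10, alpha=1.0))
```

A larger alpha produces more clusters on average; in the full model, each cluster would carry its own linear regression of stutter ratio on covariates.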

Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter

Procedia PDF Downloads 330
557 A Double Ended AC Series Arc Fault Location Algorithm Based on Currents Estimation and a Fault Map Trace Generation

Authors: Edwin Calderon-Mendoza, Patrick Schweitzer, Serge Weber

Abstract:

Series arc faults appear frequently and unpredictably in low-voltage distribution systems. Many methods have been developed to detect this type of fault, and commercial protection devices such as AFCIs (arc fault circuit interrupters) have been used successfully in electrical networks to prevent damage and catastrophic incidents such as fires. However, these devices cannot locate series arc faults on a line in operating mode. This paper presents a location algorithm for series arc faults in a low-voltage indoor power line in an AC 230 V, 50 Hz home network. The method is validated through simulations in MATLAB. The fault location method uses the electrical parameters (resistance, inductance, capacitance, and conductance) of a 49 m indoor power line. The mathematical model of a series arc fault is based on the analysis of the V-I characteristics of the arc and consists basically of two antiparallel diodes and DC voltage sources. In a first step, the arc fault model is inserted at several different positions along the line, which is modeled using lumped parameters. At both ends of the line, currents and voltages are recorded for each arc fault generated at a different distance. In the second step, a fault map trace is created using signature coefficients obtained from Kirchhoff equations, which allow a virtual decoupling of the line's mutual capacitance. Each signature coefficient, obtained from the subtraction of estimated currents, is calculated from the Discrete Fourier Transform of the currents and voltages together with the fault distance value. These parameters are then substituted into the Kirchhoff equations. In a third step, the same procedure for calculating signature coefficients is employed, but this time over hypothetical fault distances at which the fault could appear. In this step the fault distance is unknown. 
Iterating the Kirchhoff equations over stepped variations of the fault distance yields a curve with a linear trend. Finally, the fault distance is estimated at the intersection of the two curves obtained in steps 2 and 3. The series arc fault model is validated by comparing currents registered in simulation with real recorded currents. The model of the complete circuit is obtained for a 49 m line with a resistive load, and 11 different arc fault positions are considered for the fault map trace generation. The complete simulation will be carried out to present the performance of the method and the perspectives of the work.

Keywords: indoor power line, fault location, fault map trace, series arc fault

Procedia PDF Downloads 137
556 New Suspension Mechanism for a Formula Car using Camber Thrust

Authors: Shinji Kajiwara

Abstract:

The basic abilities of a vehicle are to “run”, “turn”, and “stop”. Safety and comfort during a drive, on various road surfaces and at various speeds, depend on the performance of these basic abilities. The stability and maneuverability of a vehicle are vital in automotive engineering. Stability is the ability of the vehicle to revert to a stable state when faced with crosswinds and irregular road conditions; maneuverability is the ability to change direction swiftly in response to the driver's steering. Together, they can be described as the driving stability of the vehicle. Since fossil-fuelled vehicles remain the main form of transportation today, the environmental factor in automotive engineering is also vital: improving the fuel efficiency of the vehicle reduces overall carbon emissions, lessening the contribution of greenhouse gases to global warming. Another main focus of automotive engineering is the safety performance of the vehicle, especially given the worrying increase in vehicle collisions; with better safety performance, every driver can drive with more confidence. Next, let us focus on the “turn” ability of a vehicle. By improving this particular ability, the cornering limit of the vehicle can be raised, increasing both stability and maneuverability. In order to improve the cornering limit, a study must be conducted to find the balance between the steering system, vehicle stability, higher lateral acceleration, and cornering-limit detection. The aim of this research is to study and develop a new suspension system that will boost the lateral acceleration of the vehicle and ultimately improve its cornering limit. 
This research will also examine the environmental and stability factors of the new suspension system. The double-wishbone suspension system is widely used in four-wheel vehicles, especially in high-cornering-performance sports and racing cars. The double-wishbone design allows the engineer to carefully control the motion of the wheel through parameters such as camber angle, caster angle, toe pattern, roll-center height, scrub radius, and scuff. The development of the new suspension system will focus on its ability to optimize camber control and to improve the camber limit during cornering. The research will be carried out using a CAE analysis tool. Using this tool, we will model a JSAE Formula machine equipped with either the double-wishbone system or the new suspension system, and conduct simulations to compare the performance of the two suspension systems.
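As a rough illustration of why camber control matters for lateral acceleration, a linear tire model adds a camber-thrust term to the slip-angle cornering force. The stiffness values below are illustrative placeholders, not figures from this research.

```python
import math

def lateral_force(slip_angle_rad, camber_angle_rad,
                  cornering_stiffness=80000.0,  # N/rad, illustrative value
                  camber_stiffness=4000.0):     # N/rad, illustrative value
    """Linear tire model: lateral force = cornering force from slip angle
    plus camber thrust from camber angle."""
    return (cornering_stiffness * slip_angle_rad
            + camber_stiffness * camber_angle_rad)

# Camber thrust adds lateral force on top of the slip-angle contribution.
base = lateral_force(0.02, 0.0)
with_camber = lateral_force(0.02, math.radians(3.0))
print(base, with_camber)
```

In this linear picture, a suspension that maintains a favourable camber angle in a corner gains extra lateral force at the same slip angle, which is the effect the proposed mechanism aims to exploit.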

Keywords: automobile, camber thrust, cornering force, suspension

Procedia PDF Downloads 323
555 Model Reference Adaptive Approach for Power System Stabilizer for Damping of Power Oscillations

Authors: Jožef Ritonja, Bojan Grčar, Boštjan Polajžer

Abstract:

In recent years, electricity trade between neighboring countries has become increasingly intense. Increasing power transmission over long distances has resulted in an increase in the oscillations of the transmitted power. The oscillations could be damped by reconfiguring the network or replacing generators, but such solutions are not economically reasonable. The only cost-effective way to improve the damping of power oscillations is to use power system stabilizers. A power system stabilizer is part of the synchronous generator's control system: it acts through the semiconductor excitation system connected to the rotor field winding to increase the damping of the power system. The majority of synchronous generators are equipped with conventional power system stabilizers with fixed parameters. The control structure of conventional power system stabilizers and their tuning procedure are based on linear control theory. Conventional power system stabilizers are simple to realize, but they show insufficient damping improvement across the entire range of operating conditions. This is the reason that advanced control theories are used for the development of better power system stabilizers. In this paper, adaptive control theory for power system stabilizer design and synthesis is studied. The presented work focuses on the model reference adaptive control approach. The control signal, which assures that the controlled plant output will follow the reference model output, is generated by the adaptive algorithm. Adaptive gains are obtained as a combination of a "proportional" term and an "integral" term extended with a σ-term; the σ-term is introduced to avoid divergence of the integral gains. The necessary condition for asymptotic tracking is derived by means of hyperstability theory. 
The benefits of the proposed model reference adaptive power system stabilizer were evaluated as objectively as possible by means of theoretical analysis, numerical simulations, and laboratory realizations. Damping of the synchronous generator's oscillations was investigated over the entire operating range. The obtained results show improved damping across the entire operating area and an increase in power system stability. The results of the presented work will support the development of a model reference power system stabilizer able to replace the conventional stabilizers in power systems.
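The σ-modified adaptation law described above can be sketched on a toy first-order plant: each adaptive gain is driven by the tracking error ("proportional"/error-driven part) and leaks back toward zero through the σ-term, which keeps the "integral" gains from drifting. This is a generic textbook-style sketch with made-up plant and model parameters, not the stabilizer or generator model from the paper.

```python
def mrac_sigma(steps=5000, dt=0.001,
               a_p=1.0, b_p=2.0,    # "unknown" plant: dx/dt = -a_p*x + b_p*u
               a_m=4.0, b_m=4.0,    # reference model: dxm/dt = -a_m*xm + b_m*r
               gamma=10.0, sigma=0.01, r=1.0):
    """Euler simulation of a first-order model reference adaptive controller.

    The adaptive gains combine an error-driven term with a sigma-modified
    leakage term (-sigma*gamma*gain) that prevents gain divergence."""
    x = xm = 0.0
    kx = kr = 0.0                    # adaptive feedback/feedforward gains
    for _ in range(steps):
        e = x - xm                   # tracking error w.r.t. reference model
        u = kx * x + kr * r          # control signal
        kx += dt * (-gamma * e * x - sigma * gamma * kx)
        kr += dt * (-gamma * e * r - sigma * gamma * kr)
        x += dt * (-a_p * x + b_p * u)
        xm += dt * (-a_m * xm + b_m * r)
    return e, kx, kr

e, kx, kr = mrac_sigma()
print(e, kx, kr)
```

The σ-term trades a small steady-state tracking bias for bounded gains, which is exactly the robustness property the abstract attributes to the σ-extended integral term.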

Keywords: power system, stability, oscillations, power system stabilizer, model reference adaptive control

Procedia PDF Downloads 138
554 Market Solvency Capital Requirement Minimization: How Non-linear Solvers Provide Portfolios Complying with Solvency II Regulation

Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim

Abstract:

In this article, a portfolio optimization problem is solved in a Solvency II context: it illustrates how advanced optimization techniques can help tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules. These sub-modules are the results of stress tests on interest rate, equity, property, credit, and FX factors, as well as concentration on counterparties. The market SCR is non-convex and non-differentiable, which does not make it a natural candidate as an optimization criterion. In the SCR formulation, correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both a regulatory and an economic standpoint is therefore not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR, to reduce non-invested capital, but also to ensure the stability of the SCR. Some optimizations have already been performed in the literature by simplifying the standard formula into a quadratic function, but to our knowledge, this is the first time that the standard formula of the market SCR is used directly in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems, and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (Sequential Quadratic Programming) algorithm to cope with non-convex cases. A market SCR minimization is then performed with historical data. This approach results in a significant reduction of the capital requirement compared to a classical Markowitz approach based on historical volatility. 
A comparative analysis of different optimization models (equi-risk-contribution portfolio, minimum-volatility portfolio, and minimum-value-at-risk portfolio) is performed, and the impact of these strategies on risk measures, including the market SCR and its sub-modules, is evaluated. A lack of diversification of the market SCR is observed, especially for equities. This was expected, since the market SCR strongly penalizes this type of financial instrument. It is shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process, or by minimizing the market SCR together with the historical volatility, demonstrating the value of a portfolio construction approach that can incorporate such features. The present results are further explained by the market SCR modelling.
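The fixed-correlation aggregation the abstract refers to combines sub-module charges as SCR = sqrt(Σᵢⱼ Corrᵢⱼ · SCRᵢ · SCRⱼ). A minimal sketch follows; the sub-module figures and correlation entries are illustrative placeholders, not the regulatory matrix or the paper's data.

```python
import math

def aggregate_scr(scrs, corr):
    """Standard-formula-style aggregation of sub-module SCRs with a fixed
    correlation matrix: sqrt(sum_ij corr[i][j] * scr[i] * scr[j])."""
    total = 0.0
    for i, si in enumerate(scrs):
        for j, sj in enumerate(scrs):
            total += corr[i][j] * si * sj
    return math.sqrt(total)

# Illustrative charges for three sub-modules (e.g. interest rate, equity,
# property) and an illustrative correlation matrix.
scrs = [30.0, 50.0, 20.0]
corr = [[1.0, 0.5, 0.5],
        [0.5, 1.0, 0.75],
        [0.5, 0.75, 1.0]]
print(aggregate_scr(scrs, corr))
```

Because the off-diagonal correlations are below one, the aggregate is smaller than the plain sum of the charges, which is the diversification benefit the optimization tries to exploit; the square root is also what makes the criterion non-smooth at zero sub-module charges.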

Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement

Procedia PDF Downloads 117
553 Biofilm Text Classifiers Developed Using Natural Language Processing and Unsupervised Learning Approach

Authors: Kanika Gupta, Ashok Kumar

Abstract:

Biofilms are dense, highly hydrated cell clusters that are irreversibly attached to a substratum, to an interface, or to each other, and are embedded in a self-produced gelatinous matrix composed of extracellular polymeric substances. Research in the biofilm field has become very significant, as biofilms show high mechanical resilience and resistance to antibiotic treatment, and constitute a significant problem both in healthcare and in other industries involving microorganisms. The massive amount of information, both stated and hidden, in the biofilm literature is growing exponentially; it is therefore not feasible for researchers and practitioners to extract and relate information from the different written resources by hand. The current work therefore proposes and discusses the use of text mining techniques for the extraction of information from a biofilm literature corpus containing 34,306 documents. It is very difficult and expensive to obtain annotated material for biomedical literature, as the literature is unstructured, i.e. free text. We therefore considered an unsupervised approach, in which no annotated training material is necessary, and used it to develop a system that classifies text on the basis of growth and development, drug effects, radiation effects, classification, and physiology of biofilms. For this, a two-step structure was used: the first step extracts keywords from the biofilm literature using a metathesaurus and standard natural language processing tools such as Rapid Miner_v5.3, and the second step discovers relations between the genes extracted from the whole set of biofilm literature using pubmed.mineR_v1.0.11. We applied this unsupervised approach, the machine learning task of inferring a function to describe hidden structure from 'unlabeled' data, to the above-extracted datasets to develop classifiers, using the WinPython-64 bit_v3.5.4.0Qt5 and R studio_v0.99.467 packages, which automatically classify the text using the mentioned sets. 
The developed classifiers were tested on a large dataset of biofilm literature, which showed that the proposed unsupervised approach is promising and well suited for a semi-automatic labeling of the extracted relations. The entire information was stored in a relational database hosted locally on the server. The generated biofilm vocabulary and gene relations will be significant for researchers dealing with biofilm research, making their searches easy and efficient, as the keywords and genes can be mapped directly to the documents used for database development.
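The keyword-driven, label-free classification idea can be illustrated with a tiny pure-Python sketch: weight document terms by TF-IDF and assign each document to whichever of the abstract's five classes accumulates the most weight over its seed terms. The seed-term lists here are invented for illustration; the actual system used Rapid Miner, pubmed.mineR, and a metathesaurus rather than this toy.

```python
import math
from collections import Counter

CATEGORIES = {  # hypothetical seed terms mirroring the five classes
    "growth_development": {"growth", "development", "formation"},
    "drug_effects": {"antibiotic", "drug", "treatment"},
    "radiation_effects": {"radiation", "uv", "irradiation"},
    "classification": {"taxonomy", "species", "classification"},
    "physiology": {"metabolism", "physiology", "matrix"},
}

def tfidf(docs):
    """TF-IDF vectors (as dicts) for a list of tokenised documents."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    return [{t: (tf / len(d)) * math.log(n / df[t])
             for t, tf in Counter(d).items()} for d in docs]

def classify(doc_vec):
    """Assign the class whose seed terms carry the most TF-IDF weight."""
    scores = {c: sum(doc_vec.get(t, 0.0) for t in terms)
              for c, terms in CATEGORIES.items()}
    return max(scores, key=scores.get)

docs = [
    "biofilm growth and development on steel surfaces".split(),
    "antibiotic treatment resistance of the biofilm".split(),
]
vecs = tfidf(docs)
print([classify(v) for v in vecs])
```

Note that no labelled training documents are involved: the only supervision is the choice of seed vocabulary, which is the sense in which the paper's pipeline stays unsupervised.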

Keywords: biofilms literature, classifiers development, text mining, unsupervised learning approach, unstructured data, relational database

Procedia PDF Downloads 170
552 Application of Carbon Nanotubes as Cathodic Corrosion Protection of Steel Reinforcement

Authors: M. F. Perez, Ysmael Verde, B. Escobar, R. Barbosa, J. C. Cruz

Abstract:

Reinforced concrete is one of the most important materials in the construction industry. However, in recent years the durability of concrete structures has become a worrying problem, mainly due to corrosion of the reinforcing steel; in all cases, the consequences of corrosion are a shortened structure life and a decreased quality of service. Since the emergence of this problem, different methods and techniques have been implemented to reduce corrosion damage to reinforcing steel in concrete structures, such as polymeric coatings for the steel rod and inhibitors added to the concrete during mixing, among others, each with its own limitations. For this reason, a method that has proved effective, cathodic protection, has been used. Owing to the properties attributed to carbon nanotubes (CNTs), these could act as cathodic corrosion protection. A three-electrode electrochemical cell was assembled, with carbon steel as the working electrode, a saturated calomel electrode (SCE) as the reference electrode, and a graphite rod as the counter electrode closing the circuit. The samples were subjected to a cycling process in order to compare the corrosion performance of a CNT-based coating against coatings based on a commercial anticorrosive paint. The samples were tested at room temperature in an electrolyte consisting of NaCl and NaOH, simulating the typical pH of concrete, ranging from 12.6 to 13.9. Three steel-rod test samples were prepared: blank, coated with commercial anticorrosive paint, and coated with the CNT-based coating, with the working area delimited to a section of 0.71 cm2. Cyclic voltammetry, linear voltammetry, and electrochemical impedance spectroscopy tests were performed on each of the three samples, with a potential window of 0.7 to -1.7 V vs. SCE and scan rates of 50 mV/s and 100 mV/s. The impedance values were obtained by applying a sine wave of 50 mV amplitude over a frequency range of 100 kHz to 100 mHz. 
The results obtained in this study show that the CNT-based coating applied to the steel rod considerably decreased the corrosion rate compared to the commercial anticorrosive paint, as Ecorr shifted to more positive values as the cycling progressed. In all three cases, the samples were observed by light microscopy throughout the cycling process, and micrographic analysis was performed using scanning electron microscopy (SEM). The electrochemical measurements show that applying the CNT-containing coating to the surface of the steel rod greatly increases the corrosion resistance compared to the commercial anticorrosive coating.
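Impedance spectra like those measured here are commonly interpreted with an equivalent-circuit model. As a minimal sketch, the simplified Randles cell below (solution resistance in series with a charge-transfer resistance in parallel with the double-layer capacitance, no Warburg element) reproduces the expected limits over the swept frequency range; the parameter values are illustrative, not fitted to this study's data.

```python
import math

def randles_impedance(freq_hz, r_s=50.0, r_ct=5000.0, c_dl=20e-6):
    """Complex impedance of a simplified Randles cell: R_s in series with
    (R_ct parallel to the double-layer capacitance C_dl)."""
    omega = 2.0 * math.pi * freq_hz
    z_c = 1.0 / (1j * omega * c_dl)           # capacitor impedance
    return r_s + (r_ct * z_c) / (r_ct + z_c)  # series + parallel combination

# |Z| approaches R_s at high frequency and R_s + R_ct at low frequency.
print(abs(randles_impedance(1e5)), abs(randles_impedance(0.1)))
```

A larger charge-transfer resistance extracted from such a fit corresponds to a lower corrosion rate, which is how the increased corrosion resistance of the CNT coating would show up in the impedance data.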

Keywords: anticorrosive, carbon nanotubes, corrosion, steel

Procedia PDF Downloads 477