Search results for: operating rule curve
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3999

339 Enhancing Seismic Resilience in Colombia's Informal Housing: A Low-cost Retrofit Strategy with Buckling-restrained Braces to Protect Vulnerable Communities in Earthquake-prone Regions

Authors: Luis F. Caballero-Castro, Dirsa Feliciano, Daniela Novoa, Orlando Arroyo, Jesús D. Villalba-Morales

Abstract:

Colombia faces a critical challenge in seismic resilience due to the prevalence of informal housing, which constitutes approximately 70% of residential structures. More than 10 million Colombians (20% of the population) live in homes susceptible to collapse in the event of an earthquake. This, combined with the fact that 83% of the population is in intermediate and high seismic hazard areas, has brought serious consequences to the country. These consequences became evident during the 1999 Armenia earthquake, which affected nearly 100,000 properties and represented economic losses equivalent to 1.88% of that year's Gross Domestic Product (GDP). Despite previous efforts to reinforce informal housing through methods like externally reinforced masonry walls, alternatives related to seismic protection devices (SPDs), such as Buckling-Restrained Braces (BRB), have not yet been explored in the country. BRBs are reinforcement elements capable of withstanding both compression and tension, making them effective in enhancing the lateral stiffness of structures. In this study, the use of low-cost and easily installable BRBs for the retrofit of informal housing in Colombia was evaluated, considering the economic limitations of the communities. For this purpose, a case study was selected involving an informally constructed dwelling in the country, from which field information on its structural characteristics and construction materials was collected. Based on the gathered information, nonlinear models with and without BRBs were created, and their seismic performance was analyzed and compared through incremental static (pushover) and nonlinear dynamic analyses. In the first analysis, the capacity curve was identified, showcasing the sequence of failure events occurring from initial yielding to structural collapse. In the second case, the model underwent nonlinear dynamic analyses using a set of seismic records consistent with the country's seismic hazard. Based on the results, fragility curves were calculated to evaluate the probability of failure of the informal houses before and after the intervention with BRBs, providing essential information about their effectiveness in reducing seismic vulnerability. The results indicate that low-cost BRBs can significantly increase the capacity of informal housing to withstand earthquakes. The dynamic analysis revealed that retrofitted structures experienced lower displacements and deformations, enhancing the safety of residents and the seismic performance of informally constructed houses. In other words, the use of low-cost BRBs in the retrofit of informal housing in Colombia is a promising strategy for improving structural safety in seismic-prone areas. This study emphasizes the importance of seeking affordable and practical solutions to address seismic risk in vulnerable communities in earthquake-prone regions in Colombia and serves as a model for addressing similar challenges of informal housing worldwide.
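To make the fragility-curve step concrete, the sketch below fits a lognormal fragility function to a set of hypothetical collapse intensities of the kind an incremental dynamic analysis would produce; the intensity values, the collapse criterion and the lognormal form are illustrative assumptions, not data from the study.

```python
# Minimal sketch: fitting a lognormal fragility curve from incremental dynamic
# analysis (IDA) results. The intensity-measure values below are illustrative
# placeholders, not data from the study.
import numpy as np
from scipy import stats

# Hypothetical IDA output: peak ground acceleration (g) at which each record
# first pushed the model past its collapse drift limit.
im_at_collapse = np.array([0.31, 0.42, 0.38, 0.55, 0.47, 0.60, 0.35, 0.52])

# Standard lognormal fragility parameters: median and logarithmic dispersion.
theta = np.exp(np.mean(np.log(im_at_collapse)))   # median collapse capacity
beta = np.std(np.log(im_at_collapse), ddof=1)     # record-to-record dispersion

def p_failure(im):
    """Probability of exceeding the damage state at intensity measure `im`."""
    return stats.norm.cdf(np.log(im / theta) / beta)

if __name__ == "__main__":
    for im in (0.2, 0.4, 0.6):
        print(f"IM = {im:.1f} g -> P(failure) = {p_failure(im):.2f}")
```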

Keywords: buckling-restrained braces, fragility curves, informal housing, incremental dynamic analysis, seismic retrofit

Procedia PDF Downloads 87
338 High Cycle Fatigue Analysis of a Lower Hopper Knuckle Connection of a Large Bulk Carrier under Dynamic Loading

Authors: Vaso K. Kapnopoulou, Piero Caridis

Abstract:

The fatigue of ship structural details is of major concern in the maritime industry as it can generate fracture issues that may compromise structural integrity. In the present study, a fatigue analysis of the lower hopper knuckle connection of a bulk carrier was conducted using the Finite Element Method by means of ABAQUS/CAE software. The fatigue life was calculated using Miner’s Rule, with the long-term distribution of stress range represented by the two-parameter Weibull distribution. The cumulative damage ratio was estimated using the fatigue damage resulting from the stress range occurring at each load condition. For this purpose, a cargo hold model was first generated, which extends over the length of two holds (the mid-hold and half of each of the adjacent holds) and transversely over the full breadth of the hull girder. Following that, a submodel of the area of interest was extracted in order to calculate the hot spot stress of the connection and to estimate the fatigue life of the structural detail. Two hot spot locations were identified; one at the top layer of the inner bottom plate and one at the top layer of the hopper plate. The IACS Common Structural Rules (CSR) require that specific dynamic load cases for each loading condition are assessed. Following this, the dynamic load case that causes the highest stress range at each loading condition should be used in the fatigue analysis for the calculation of the cumulative fatigue damage ratio. Each load case has a different effect on ship hull response. Of main concern, when assessing the fatigue strength of the lower hopper knuckle connection, was the determination of the maximum, i.e. the critical value of the stress range, which acts in a direction normal to the weld toe line. This acts in the transverse direction, that is, perpendicularly to the ship's centerline axis. The load cases were explored both theoretically and numerically in order to establish the one that causes the highest damage to the location examined. The most severe one was identified to be the load case induced by the beam sea condition where the encountered wave comes from the starboard side. At the level of the cargo hold model, the model was assumed to be simply supported at its ends. A coarse mesh was generated in order to represent the overall stiffness of the structure. The elements employed were quadrilateral shell elements, each having four integration points. A linear elastic analysis was performed because linear elastic material behavior can be presumed, since only localized yielding is allowed by most design codes. At the submodel level, the displacements obtained from the analysis of the cargo hold model were applied to the outer region nodes of the submodel as boundary conditions and loading. In order to calculate the hot spot stress at the hot spot locations, a very fine mesh zone was generated and used. The fatigue life of the detail was found to be 16.4 years, which is lower than the design fatigue life of the structure (25 years), making this location vulnerable to fatigue fracture issues. Moreover, the loading conditions that induce the most damage to the location were found to be the various ballasting conditions.
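As a pointer to how the Weibull long-term stress-range distribution and Miner's rule combine, the sketch below evaluates the standard closed-form cumulative damage expression for a one-slope S-N curve; the S-N constants, Weibull parameters and cycle count are illustrative assumptions, not values from the analysis above.

```python
# Minimal sketch of the cumulative fatigue damage implied by Miner's rule with
# a two-parameter Weibull long-term stress-range distribution and a one-slope
# S-N curve N = K * S**(-m). All numerical values are illustrative assumptions.
import math

K, m = 1.0e12, 3.0          # assumed S-N curve parameters (S in MPa)
q, h = 15.0, 1.0            # assumed Weibull scale (MPa) and shape of stress ranges
n_total = 0.7e8             # assumed number of wave-induced cycles over 25 years

# Closed-form Miner sum for a Weibull-distributed stress range:
# D = (n_total / K) * q**m * Gamma(1 + m/h)
damage = n_total / K * q**m * math.gamma(1.0 + m / h)
life_years = 25.0 / damage   # scale the assumed 25-year exposure by the damage ratio

print(f"cumulative damage ratio D = {damage:.2f}")
print(f"implied fatigue life      = {life_years:.1f} years")
```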

Keywords: dynamic load cases, finite element method, high cycle fatigue, lower hopper knuckle

Procedia PDF Downloads 413
337 Review of the Safety of Discharge on the First Postoperative Day Following Carotid Surgery: A Retrospective Analysis

Authors: John Yahng, Hansraj Riteesh Bookun

Abstract:

Objective: This was a retrospective cross-sectional study evaluating the safety of discharge on the first postoperative day following carotid surgery - principally carotid endarterectomy. Methods: Between January 2010 and October 2017, 252 patients, with a mean age of 72 years, underwent carotid surgery by seven surgeons. Their medical records were consulted and their operative as well as complication timelines were databased. Descriptive statistics were used to analyse pooled responses and our indicator variables. The statistical package used was STATA 13. Results: There were 183 males (73%) and the comorbid burden was as follows: ischaemic heart disease (54%), diabetes (38%), hypertension (92%), stage 4 kidney impairment (5%) and current or ex-smoking (77%). The main indications were transient ischaemic attacks (42%), stroke (31%), asymptomatic carotid disease (16%) and amaurosis fugax (8%). 247 carotid endarterectomies (109 with patch arterioplasty, 88 with eversion and transection technique, 50 with endarterectomy only) were performed. 2 carotid bypasses, 1 embolectomy, 1 thrombectomy with patch arterioplasty and 1 excision of a carotid body tumour were also performed. 92% of the cases were performed under general anaesthesia. A shunt was used in 29% of cases. The mean length of stay was 5.1 ± 3.7 days with a range of 2 to 22 days. No patient was discharged on day 1. The mean time from admission to surgery was 1.4 ± 2.8 days, ranging from 0 to 19 days. The mean time from surgery to discharge was 2.7 ± 2.0 days with a range of 0 to 14 days. 36 complications were encountered over this period, with 12 failed repairs (5 major strokes, 2 minor strokes, 3 transient ischaemic attacks, 1 cerebral bleed, 1 occluded graft), 11 bleeding episodes requiring a return to the operating theatre, 5 adverse cardiac events, 3 cranial nerve injuries, 2 respiratory complications, 2 wound complications and 1 acute kidney injury. There were no deaths. 17 complications occurred on postoperative day 0, 11 on postoperative day 1, 6 on postoperative day 2 and 2 on postoperative day 3. 78% of all complications happened before the second postoperative day. Out of the complications which occurred on the second or third postoperative day, 4 (1.6%) were bleeding episodes, 1 (0.4%) was a failed repair, 1 (0.4%) was a respiratory complication and 1 (0.4%) was a wound complication. Conclusion: Although it has been common practice to discharge patients on the second postoperative day following carotid endarterectomy, we find here that discharge on the first postoperative day is safe. The overall complication rate is low and most complications are captured before the second postoperative day. We suggest that patients having an uneventful first 24 hours post surgery be discharged on the first day. This should reduce hospital length of stay and the health economic burden.

Keywords: carotid, complication, discharge, surgery

Procedia PDF Downloads 161
336 Switching of Series-Parallel Connected Modules in an Array for Partially Shaded Conditions in a Pollution Intensive Area Using High Powered MOSFETs

Authors: Osamede Asowata, Christo Pienaar, Johan Bekker

Abstract:

Photovoltaic (PV) modules may become a trend for future PV systems because of their greater flexibility in distributed system expansion, easier installation due to their nature, and higher system-level energy harnessing capabilities under shaded or PV manufacturing mismatch conditions, as compared to single or multi-string inverters. Novel residential scale PV arrays are commonly connected to the grid by a single DC–AC inverter connected to a series, parallel or series-parallel string of PV panels, or many small DC–AC inverters which connect one or two panels directly to the AC grid. With an increasing worldwide interest in sustainable energy production and use, there is renewed focus on the power electronic converter interface for DC energy sources. Three specific examples of such DC energy sources that will have a role in distributed generation and sustainable energy systems are the photovoltaic (PV) panel, the fuel cell stack, and batteries of various chemistries. A high-efficiency inverter using Metal Oxide Semiconductor Field-Effect Transistors (MOSFETs) for all active switches is presented for non-isolated photovoltaic and AC-module applications. The proposed configuration features a high efficiency over a wide load range, low ground leakage current and low output AC-current distortion with no need for split capacitors. The detailed power stage operating principles, pulse width modulation scheme, multilevel bootstrap power supply, and integrated gate drivers for the proposed inverter are described. Experimental results from a hardware prototype show not only that the MOSFETs are efficient in the system, but also that the ground leakage current issues are alleviated in the proposed inverter and that a maximum efficiency of 98% is achieved for the associated driver circuit. This, in turn, supports the need for a possible photovoltaic panel switching technique. This would help to reduce the effect of cloud movements as well as improve the overall efficiency of the system.

Keywords: grid connected photovoltaic (PV), Matlab efficiency simulation, maximum power point tracking (MPPT), module integrated converters (MICs), multilevel converter, series connected converter

Procedia PDF Downloads 117
335 Tagging a Corpus of Media Interviews with Diplomats: Challenges and Solutions

Authors: Roberta Facchinetti, Sara Corrizzato, Silvia Cavalieri

Abstract:

Increasing interconnection between data digitalization and linguistic investigation has given rise to unprecedented potentialities and challenges for corpus linguists, who need to master IT tools for data analysis and text processing, as well as to develop techniques for efficient and reliable annotation in specific mark-up languages that encode documents in a format that is both human- and machine-readable. In the present paper, the challenges emerging from the compilation of a linguistic corpus will be taken into consideration, focusing on the English language in particular. To do so, the case study of the InterDiplo corpus will be illustrated. The corpus, currently under development at the University of Verona (Italy), represents a novelty in terms both of the data included and of the tag set used for its annotation. The corpus covers media interviews and debates with diplomats and international operators conversing in English with journalists who do not share the same lingua-cultural background as their interviewees. To date, this appears to be the first tagged corpus of international institutional spoken discourse and will be an important database not only for linguists interested in corpus analysis but also for experts operating in international relations. In the present paper, special attention will be dedicated to the structural mark-up, parts of speech annotation, and tagging of discursive traits, which are the innovative parts of the project, being the result of a thorough study to find the best solution to suit the analytical needs of the data. Several aspects will be addressed, with special attention to the tagging of the speakers’ identity, the communicative events, and anthropophagic. Prominence will be given to the annotation of question/answer exchanges to investigate the interlocutors’ choices and how such choices impact communication. Indeed, the automated identification of questions, in relation to the expected answers, is functional to understanding how interviewers elicit information as well as how interviewees provide their answers to fulfill their respective communicative aims. A detailed description of the aforementioned elements will be given using the InterDiplo-Covid19 pilot corpus. The results yielded by our preliminary analysis of the data will highlight the viable solutions found in the construction of the corpus in terms of XML conversion, metadata definition, tagging system, and discursive-pragmatic annotation to be included via Oxygen.
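By way of illustration, the short Python sketch below builds a fragment of speaker- and turn-level XML mark-up of the kind described above; the element and attribute names (interview, turn, role, question type) are hypothetical examples and do not reproduce the actual InterDiplo tag set.

```python
# Illustrative sketch of structural and speaker mark-up built with the Python
# standard library. Tag and attribute names are hypothetical examples.
import xml.etree.ElementTree as ET

interview = ET.Element("interview", id="interdiplo_covid19_001", lang="en")
meta = ET.SubElement(interview, "metadata")
ET.SubElement(meta, "speaker", id="S1", role="journalist", L1="Italian")
ET.SubElement(meta, "speaker", id="S2", role="diplomat", L1="English")

q = ET.SubElement(interview, "turn", who="S1", type="question", qtype="polar")
q.text = "Will the embassy resume consular services next month?"
a = ET.SubElement(interview, "turn", who="S2", type="answer", relation="direct")
a.text = "We expect to reopen most services by the end of the month."

ET.indent(interview)  # pretty-print (Python 3.9+)
print(ET.tostring(interview, encoding="unicode"))
```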

Keywords: spoken corpus, diplomats’ interviews, tagging system, discursive-pragmatic annotation, English linguistics

Procedia PDF Downloads 175
334 Design and Implementation of a Hardened Cryptographic Coprocessor with 128-bit RISC-V Core

Authors: Yashas Bedre Raghavendra, Pim Vullers

Abstract:

This study presents the design and implementation of an abstract cryptographic coprocessor, leveraging AMBA (Advanced Microcontroller Bus Architecture) protocols - APB (Advanced Peripheral Bus) and AHB (Advanced High-performance Bus), to enable seamless integration with the main CPU (Central Processing Unit) and enhance the coprocessor’s algorithm flexibility. The primary objective is to create a versatile coprocessor that can execute various cryptographic algorithms, including ECC (Elliptic-Curve Cryptography), RSA (Rivest–Shamir–Adleman), and AES (Advanced Encryption Standard), while providing a robust and secure solution for modern secure embedded systems. To achieve this goal, the coprocessor is equipped with a tightly coupled memory (TCM) for rapid data access during cryptographic operations. The TCM is placed within the coprocessor, ensuring quick retrieval of critical data and optimizing overall performance. Additionally, the program memory is positioned outside the coprocessor, allowing for easy updates and reconfiguration, which enhances adaptability to future algorithm implementations. Direct links are employed instead of DMA (Direct Memory Access) for data transfer, ensuring faster communication and reducing complexity. The AMBA-based communication architecture facilitates seamless interaction between the coprocessor and the main CPU, streamlining data flow and ensuring efficient utilization of system resources. The abstract nature of the coprocessor allows for easy integration of new cryptographic algorithms in the future. As the security landscape continues to evolve, the coprocessor can adapt and incorporate emerging algorithms, making it a future-proof solution for cryptographic processing. Furthermore, this study explores the addition of custom instructions into the RISC-V ISE (Instruction Set Extension) to enhance cryptographic operations. By incorporating custom instructions specifically tailored for cryptographic algorithms, the coprocessor achieves higher efficiency and reduced cycles per instruction (CPI) compared to traditional instruction sets. The adoption of the RISC-V 128-bit architecture significantly reduces the total number of instructions required for complex cryptographic tasks, leading to faster execution times and improved overall performance. Comparisons are made with 32-bit and 64-bit architectures, highlighting the advantages of the 128-bit architecture in terms of reduced instruction count and CPI. In conclusion, the abstract cryptographic coprocessor presented in this study offers significant advantages in terms of algorithm flexibility, security, and integration with the main CPU. By leveraging AMBA protocols and employing direct links for data transfer, the coprocessor achieves high-performance cryptographic operations without compromising system efficiency. With its TCM and external program memory, the coprocessor is capable of securely executing a wide range of cryptographic algorithms. This versatility and adaptability, coupled with the benefits of custom instructions and the 128-bit architecture, make it an invaluable asset for secure embedded systems, meeting the demands of modern cryptographic applications.
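To give a feel for why the wider datapath cuts instruction count, the back-of-the-envelope sketch below compares the number of word-level multiplies a schoolbook 256-bit multiplication (typical of ECC field arithmetic) needs on 32-, 64- and 128-bit datapaths; the figures are rough illustrations, not measurements from the coprocessor described above.

```python
# Back-of-the-envelope sketch of how datapath width affects the instruction
# count of multi-precision arithmetic. Schoolbook estimates for illustration.
import math

OPERAND_BITS = 256  # e.g. the field size of a 256-bit elliptic curve

for word_bits in (32, 64, 128):
    limbs = math.ceil(OPERAND_BITS / word_bits)
    # Schoolbook multiplication needs roughly limbs**2 word multiplies,
    # plus a comparable number of additions for carry propagation.
    mults = limbs ** 2
    print(f"{word_bits:>3}-bit datapath: {limbs} limbs, ~{mults} word multiplies")
```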

Keywords: abstract cryptographic coprocessor, AMBA protocols, ECC, RSA, AES, tightly coupled memory, secure embedded systems, RISC-V ISE, custom instructions, instruction count, cycles per instruction

Procedia PDF Downloads 62
333 Effect of Downstream Pressure in Tuning the Flow Control Orifices of Pressure Fed Reaction Control System Thrusters

Authors: Prakash M.N, Mahesh G, Muhammed Rafi K.M, Shiju P. Nair

Abstract:

Introduction: In launch vehicle missions, reaction control thrusters are used for the three-axis stabilization of the vehicle during the coasting phases. A pressure-fed propulsion system is used for the operation of these thrusters due to its lower complexity. In liquid stages, these thrusters are designed to draw propellant from the same tank used for the main propulsion system. So, in order to regulate the propellant flow rates of these thrusters, flow control orifices are used in the feed lines. These orifices are calibrated separately as per the flow rate requirement of individual thrusters for the nominal operating conditions. In some missions, it was observed that the thrusters were operated at higher thrust than nominal. This point was addressed through a series of cold flow and hot tests carried out on the ground, and this paper elaborates the details of the same. Discussion: In order to find out the exact reason for this phenomenon, two flight configuration thrusters were identified and hot tested on the ground with calibrated orifices and feed lines. During these tests, the chamber pressure, which is directly proportional to the thrust, was measured. In both cases, chamber pressures higher than the nominal by 0.32 bar to 0.7 bar were recorded. The increase in chamber pressure is due to an increase in the oxidizer flow rate of both thrusters. Upon further investigation, it was observed that the calibration of the feed line is done with ambient pressure downstream. But in actual flight conditions, the orifices will be subjected to operate with 10 to 11 bar pressure downstream. Due to this higher downstream pressure, the flow through the orifices increases and thereby the thrusters operate with higher chamber pressure values. Conclusion: As part of further investigatory tests, two fresh thrusters were realized. Orifice tuning of these thrusters was carried out in three different ways. In the first trial, the orifice tuning was done by simulating 1 bar pressure downstream. The second trial was done with the injector assembled downstream. In the third trial, the downstream pressure equal to the flight injection pressure was simulated downstream. Using these calibrated orifices, hot tests were carried out in simulated vacuum conditions. Chamber pressure and flow rate values were exactly matching the prediction for the second and third trials. But for the first trial, the chamber pressure values obtained in the hot test were more than the prediction. This clearly shows that the flow is detached in the 1st trial and attached for the 2nd & 3rd trials. Hence, the error in tuning the flow control orifices is pinpointed as the reason for this higher chamber pressure observed in flight.
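The sketch below illustrates, under assumed numbers, the calibration pitfall the authors describe: an orifice sized while the jet is detached (cavitating, ambient back pressure) passes more flow once the higher flight back pressure keeps the flow attached. The discharge coefficients, pressures and target flow rate are illustrative assumptions, not values from the tests.

```python
# Hedged sketch of the orifice-calibration pitfall: the discharge coefficient
# of a liquid orifice differs between detached (cavitating) and attached flow,
# so an orifice tuned with ambient back pressure over-flows in flight.
# All coefficients and property values below are illustrative assumptions.
import math

rho = 1440.0          # kg/m^3, assumed propellant density
dp_design = 7.0e5     # Pa, assumed design pressure drop across the orifice
mdot_target = 0.060   # kg/s, assumed required propellant flow rate
cd_detached = 0.61    # assumed Cd when the jet cavitates (ambient back pressure)
cd_attached = 0.80    # assumed Cd when the flow stays attached (high back pressure)

def required_area(cd):
    """Orifice area that delivers mdot_target at dp_design for a given Cd."""
    return mdot_target / (cd * math.sqrt(2.0 * rho * dp_design))

# Orifice tuned during ground calibration with ambient downstream pressure:
area_calibrated = required_area(cd_detached)

# Flow the same orifice passes in flight, where the flow remains attached:
mdot_flight = cd_attached * area_calibrated * math.sqrt(2.0 * rho * dp_design)

print(f"calibrated area: {area_calibrated*1e6:.2f} mm^2")
print(f"flow in flight : {mdot_flight*1e3:.1f} g/s "
      f"({100*(mdot_flight/mdot_target - 1):.0f}% above target)")
```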

Keywords: reaction control thruster, propellant, orifice, chamber pressure

Procedia PDF Downloads 195
332 Grain Structure Evolution during Friction-Stir Welding of 6061-T6 Aluminum Alloy

Authors: Aleksandr Kalinenko, Igor Vysotskiy, Sergey Malopheyev, Sergey Mironov, Rustam Kaibyshev

Abstract:

From a thermo-mechanical standpoint, friction-stir welding (FSW) represents a unique combination of very large strains, high temperature and relatively high strain rate. The material behavior under such extreme deformation conditions is not well studied, and thus the microstructural examination of friction-stir welded materials represents an essential academic interest. Moreover, a clear understanding of the microstructural mechanisms operating during FSW should improve our understanding of the microstructure-properties relationship in FSWed materials and thus enable us to optimize their service characteristics. Despite extensive research in this field, the microstructural behavior of some important structural materials remains not completely clear. In order to contribute to this important work, the present study was undertaken to examine the grain structure evolution during the FSW of 6061-T6 aluminum alloy. To provide an in-depth insight into this process, the electron backscatter diffraction (EBSD) technique was employed. Microstructural observations were conducted by using an FEI Quanta 450 Nova field-emission-gun scanning electron microscope equipped with TSL OIMTM software. A suitable surface finish for EBSD was obtained by electro-polishing in a solution of 25% nitric acid in methanol. A 15° criterion was employed to differentiate low-angle boundaries (LABs) from high-angle boundaries (HABs). In the entire range of the studied FSW regimes, the grain structure evolved in the stir zone was found to be dominated by nearly-equiaxed grains with a relatively high fraction of low-angle boundaries and the moderate-strength B/-B {112}<110> simple-shear texture. In all cases, the grain-structure development was found to be dictated by the extensive formation of deformation-induced boundaries and their gradual transformation into high-angle grain boundaries. Accordingly, grain subdivision was concluded to be the key microstructural mechanism. Remarkably, a gradual suppression of this mechanism has been observed at relatively high welding temperatures. This surprising result has been attributed to the reduction of dislocation density due to annihilation phenomena.

Keywords: electron backscatter diffraction, friction-stir welding, heat-treatable aluminum alloys, microstructure

Procedia PDF Downloads 232
331 Learning from Dendrites: Improving the Point Neuron Model

Authors: Alexander Vandesompele, Joni Dambre

Abstract:

The diversity in dendritic arborization, as first illustrated by Santiago Ramon y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Regardless, in simulations of neural networks dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky-integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma, and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality as observed in another study. Simulations of the spiking neurons are performed using the Bindsnet framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is not only determined by the weight of the synapse, but also by the activity of other synapses. This is a form of short-term plasticity where synapses are potentiated or depressed by the preceding activity of neighbouring synapses. This is a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable representing the synaptic relation. This variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning. We use Spike-Time-Dependent Plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same patterns through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network. This causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron; the other output neuron is a LIF neuron with dendritic relationships. Then, the five input neurons are allowed to fire in a particular order. The membrane potentials are reset and subsequently the five input neurons are fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, the membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response is different for both sequences. Hence, the dendritic mechanism improves the neuron’s capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. It is possible to learn synaptic strength with STDP, to make a neuron more sensitive to its input. Similarly, it is possible to learn dendritic relationships with STDP, to make the neuron more sensitive to spatiotemporal input sequences. Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
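The minimal NumPy sketch below (not Bindsnet code) mimics the mechanism described above: a pairwise relation matrix scales the post-synaptic impact of each spike by the recent activity traces of the other synapses, so the same five input spikes produce different peak membrane potentials depending on their order. All constants, including the random relation matrix, are illustrative assumptions.

```python
# LIF neuron with a pairwise synaptic "relation" matrix R that modulates the
# effective weight of each incoming spike by the traces of the other synapses.
import numpy as np

n_in, dt, steps = 5, 1.0, 60
tau_mem, tau_trace = 20.0, 10.0      # membrane and synaptic-trace time constants
v_reset, v_thresh = 0.0, 1.5

rng = np.random.default_rng(0)
w = np.full(n_in, 0.4)                           # synaptic weights
R = rng.uniform(-0.5, 0.5, size=(n_in, n_in))    # pairwise synaptic relations
np.fill_diagonal(R, 0.0)

def run(spike_times):
    """spike_times[i] = time step at which input neuron i fires once."""
    v, trace, v_hist = 0.0, np.zeros(n_in), []
    for t in range(steps):
        spikes = np.array([1.0 if spike_times[i] == t else 0.0 for i in range(n_in)])
        # Effective weight is modulated by the traces of neighbouring synapses.
        w_eff = w * (1.0 + R @ trace)
        v += dt * (-v / tau_mem) + float(np.dot(w_eff, spikes))
        if v >= v_thresh:
            v = v_reset
        trace += -dt * trace / tau_trace + spikes
        v_hist.append(v)
    return max(v_hist)

forward = {i: 5 * i for i in range(n_in)}                 # inputs fire at 0, 5, 10, ...
reverse = {i: 5 * (n_in - 1 - i) for i in range(n_in)}    # same inputs, reversed order
print("peak V, forward :", round(run(forward), 3))
print("peak V, reversed:", round(run(reverse), 3))
```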

Keywords: dendritic computation, spiking neural networks, point neuron model

Procedia PDF Downloads 124
330 Economic and Environmental Assessment of Heat Recovery in Beer and Spirit Production

Authors: Isabel Schestak, Jan Spriet, David Styles, Prysor Williams

Abstract:

Breweries and distilleries are well-known for their high water usage. The water consumption in a UK brewery to produce one litre of beer reportedly ranges from 3-9 L and in a distillery from 7-45 L to produce a litre of spirit. This includes product water such as mashing water, but also water for wort and distillate cooling and for cleaning of tanks, casks, and kegs. When cooling towers are used, cooling water can be the dominating water consumption in a brewery or distillery. Interlinked to the high water use is a substantial heating requirement for mashing, wort boiling, or distillation, typically met by fossil fuel combustion such as gasoil. Many water and waste water streams are leaving the processes hot, such as the returning cooling water or the pot ales. Therefore, several options exist to optimise water and energy efficiency of spirit production through heat recovery. Although these options are known in the sector, they are often not applied in practice due to planning efforts or financial obstacles. In this study, different possibilities and design options for heat recovery systems are explored in four breweries/distilleries in the UK and assessed from an economic but also environmental point of view. The eco-efficiency methodology, according to ISO 14045, is applied to combine both assessment criteria to determine the optimum solution for heat recovery application in practice. The economic evaluation is based on the total value added (TVA) while the Life Cycle Assessment (LCA) methodology is applied to account for the environmental impacts through the installations required for heat recovery. The four case study businesses differ in a) production scale with mashing volumes ranging from 2500 to 40,000 L, in b) terms of heating and cooling technology used, and in c) the extent to which heat recovery is/is not applied. This enables the evaluation of different cases for heat recovery based on empirical data. The analysis provides guidelines for practitioners in the brewing and distilling sector in and outside the UK for the realisation of heat recovery measures. Financial and environmental payback times are showcased for heat recovery systems in the four distilleries which are operating at different production scales. The results are expected to encourage the application of heat recovery where environmentally and economically beneficial and ultimately contribute to a reduction of the water and energy footprint in brewing and distilling businesses.
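As a simple illustration of the financial and environmental payback comparison mentioned above, the sketch below divides an assumed installation cost and embodied carbon by the annual fuel cost and emissions displaced by recovered heat; every figure is an illustrative assumption, not a result from the four case-study sites.

```python
# Hedged sketch of financial and environmental payback for a heat recovery
# installation. All figures are illustrative assumptions.
capex = 18_000.0                      # GBP, assumed installed cost of heat exchanger
annual_heat_recovered = 150_000.0     # kWh of gasoil heat displaced per year (assumed)
gasoil_price = 0.07                   # GBP per kWh (assumed)
gasoil_carbon = 0.27                  # kg CO2e per kWh (assumed emission factor)
embodied_carbon = 4_000.0             # kg CO2e embodied in the installation (assumed)

financial_payback = capex / (annual_heat_recovered * gasoil_price)
carbon_payback = embodied_carbon / (annual_heat_recovered * gasoil_carbon)

print(f"financial payback    : {financial_payback:.1f} years")
print(f"environmental payback: {carbon_payback:.2f} years")
```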

Keywords: brewery, distillery, eco-efficiency, heat recovery from process and waste water, life cycle assessment

Procedia PDF Downloads 116
329 Artificial Intelligence: Obstacle Patterns and Implications

Authors: Placide Poba-Nzaou, Anicet Tchibozo, Malatsi Galani, Ali Etkkali, Erwin Halim

Abstract:

Artificial intelligence (AI) is a general-purpose technology that is transforming many industries, working life and society by stimulating economic growth and innovation. Despite the huge potential of benefits to be generated, the adoption of AI varies from one organization to another, from one region to another, and from one industry to another, due in part to obstacles that can inhibit an organization, or organizations located in a specific geographic region or operating in a specific industry, from adopting AI technology. In this context, understanding these obstacles and their implications for AI adoption from the perspective of configurational theory is important for at least three reasons: (1) understanding these obstacles is the first step in enabling policymakers and providers to make an informed decision in stimulating AI adoption; (2) most studies have investigated obstacles or challenges of AI adoption in isolation under linear assumptions, while configurational theory offers a holistic and multifaceted way of investigating the intricate interactions between perceived obstacles and barriers, helping to assess their synergetic combination under assumptions of non-linearity and leading to insights that would otherwise be out of the scope of studies investigating these obstacles in isolation. This study aims to pursue two objectives: (1) characterize organizations by uncovering the typical profiles of combinations of 15 internal and external obstacles that may prevent organizations from adopting AI technology, (2) assess the variation in terms of intensity of AI adoption associated with each configuration. We used data from a survey of AI adoption by organizations conducted throughout the EU27, Norway, Iceland and the UK (N=7549). Cluster analysis and discriminant analysis helped uncover configurations of organizations based on the 15 obstacles, including eight external and seven internal. Second, we compared the clusters according to AI adoption intensity using an analysis of variance (ANOVA) and a Tamhane T2 post hoc test. The study uncovers three strongly separated clusters of organizations based on perceived obstacles to AI adoption. The clusters are labeled according to their magnitude of perceived obstacles to AI adoption: (1) Cluster I – High Level of perceived obstacles (N = 2449, 32.4%); (2) Cluster II – Low Level of perceived obstacles (N = 1879, 24.9%); (3) Cluster III – Moderate Level of perceived obstacles (N = 3221, 42.7%). The proposed taxonomy goes beyond the normative understanding of perceived obstacles to AI adoption and associated implications: it provides a well-structured and parsimonious lens that is useful for policymakers, AI technology providers, and researchers. Surprisingly, the ANOVAs revealed that the “high level of perceived obstacles” cluster was associated with a significantly high intensity of AI adoption.
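As a structural illustration of the analysis pipeline described above, the sketch below clusters organizations on their perceived-obstacle scores and then tests whether adoption intensity differs across clusters; synthetic data stands in for the EU survey, and the Tamhane T2 post hoc comparison that would follow the ANOVA is not shown.

```python
# Hedged sketch: cluster organizations on 15 obstacle scores, then run a
# one-way ANOVA of AI-adoption intensity across clusters. Data is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import f_oneway

rng = np.random.default_rng(42)
n_obs, n_obstacles = 600, 15                     # 15 internal + external obstacles
obstacles = rng.integers(1, 6, size=(n_obs, n_obstacles)).astype(float)  # Likert 1-5
adoption = rng.poisson(2, size=n_obs).astype(float)   # proxy for AI-adoption intensity

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(obstacles)
groups = [adoption[km.labels_ == k] for k in range(3)]

f_stat, p_val = f_oneway(*groups)
for k, g in enumerate(groups):
    print(f"cluster {k}: n={g.size}, mean adoption={g.mean():.2f}")
print(f"one-way ANOVA: F={f_stat:.2f}, p={p_val:.3f}")
```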

Keywords: artificial intelligence (AI), obstacles, adoption, taxonomy

Procedia PDF Downloads 95
328 Characterizing the Rectification Process for Designing Scoliosis Braces: Towards Digital Brace Design

Authors: Inigo Sanz-Pena, Shanika Arachchi, Dilani Dhammika, Sanjaya Mallikarachchi, Jeewantha S. Bandula, Alison H. McGregor, Nicolas Newell

Abstract:

The use of orthotic braces for adolescent idiopathic scoliosis (AIS) patients is the most common non-surgical treatment to prevent deformity progression. The traditional method to create an orthotic brace involves casting the patient’s torso to obtain a representative geometry, which is then rectified by an orthotist to the desired geometry of the brace. Recent improvements in 3D scanning technologies, rectification software, CNC, and additive manufacturing processes have given the possibility to compliment, or in some cases, replace manual methods with digital approaches. However, the rectification process remains dependent on the orthotist’s skills. Therefore, the rectification process needs to be carefully characterized to ensure that braces designed through a digital workflow are as efficient as those created using a manual process. The aim of this study is to compare 3D scans of patients with AIS against 3D scans of both pre- and post-rectified casts that have been manually shaped by an orthotist. Six AIS patients were recruited from the Ragama Rehabilitation Clinic, Colombo, Sri Lanka. All patients were between 10 and 15 years old, were skeletally immature (Risser grade 0-3), and had Cobb angles between 20-45°. Seven spherical markers were placed at key anatomical locations on each patient’s torso and on the pre- and post-rectified molds so that distances could be reliably measured. 3D scans were obtained of 1) the patient’s torso and pelvis, 2) the patient’s pre-rectification plaster mold, and 3) the patient’s post-rectification plaster mold using a Structure Sensor Mark II 3D scanner (Occipital Inc., USA). 3D stick body models were created for each scan to represent the distances between anatomical landmarks. The 3D stick models were used to analyze the changes in position and orientation of the anatomical landmarks between scans using Blender open-source software. 3D Surface deviation maps represented volume differences between the scans using CloudCompare open-source software. The 3D stick body models showed changes in the position and orientation of thorax anatomical landmarks between the patient and the post-rectification scans for all patients. Anatomical landmark position and volume differences were seen between 3D scans of the patient’s torsos and the pre-rectified molds. Between the pre- and post-rectified molds, material removal was consistently seen on the anterior side of the thorax and the lateral areas below the ribcage. Volume differences were seen in areas where the orthotist planned to place pressure pads (usually at the trochanter on the side to which the lumbar curve was tilted (trochanter pad), at the lumbar apical vertebra (lumbar pad), on the rib connected to the apical vertebrae at the mid-axillary line (thoracic pad), and on the ribs corresponding to the upper thoracic vertebra (axillary extension pad)). The rectification process requires the skill and experience of an orthotist; however, this study demonstrates that the brace shape, location, and volume of material removed from the pre-rectification mold can be characterized and quantified. Results from this study can be fed into software that can accelerate the brace design process and make steps towards the automated digital rectification process.
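As an illustration of the scan-comparison step described above, the sketch below computes a simple surface-deviation measure between two point clouds by querying nearest-neighbour distances with a k-d tree, in the spirit of the CloudCompare maps; the random point clouds stand in for real torso or mold scans.

```python
# Hedged sketch of a surface-deviation map between two 3D scans: for every
# point of one cloud, report the distance to the nearest point of the other.
# Random points stand in for real scan data.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
pre_scan = rng.normal(size=(5000, 3))                                  # placeholder pre-rectification cloud
post_scan = pre_scan[:4000] + rng.normal(scale=0.02, size=(4000, 3))   # placeholder "rectified" cloud

tree = cKDTree(pre_scan)
deviation, _ = tree.query(post_scan)             # nearest-neighbour distances

print(f"mean deviation : {deviation.mean():.3f}")
print(f"95th percentile: {np.percentile(deviation, 95):.3f}")
```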

Keywords: additive manufacturing, orthotics, scoliosis brace design, sculpting software, spinal deformity

Procedia PDF Downloads 142
327 Mapping the Turbulence Intensity and Excess Energy Available to Small Wind Systems over 4 Major UK Cities

Authors: Francis C. Emejeamara, Alison S. Tomlin, James Gooding

Abstract:

Due to the highly turbulent nature of urban air flows, and by virtue of the fact that turbines are likely to be located within the roughness sublayer of the urban boundary layer, proposed urban wind installations are faced with major challenges compared to rural installations. The challenge of operating within turbulent winds can, however, be counteracted by the development of suitable gust tracking solutions. In order to assess the cost effectiveness of such controls, a detailed understanding of the urban wind resource, including its turbulent characteristics, is required. Estimating the ambient turbulence and total kinetic energy available at different control response times is essential in evaluating the potential performance of wind systems within the urban environment should effective control solutions be employed. However, high resolution wind measurements within the urban roughness sub-layer are uncommon, and detailed CFD modelling approaches are too computationally expensive to apply routinely on a city wide scale. This paper therefore presents an alternative semi-empirical methodology for estimating the excess energy content (EEC) present in the complex and gusty urban wind. An analytical methodology for predicting the total wind energy available at a potential turbine site is proposed by assessing the relationship between turbulence intensities and EEC, for different control response times. The semi-empirical model is then incorporated into an analytical methodology that was initially developed to predict mean wind speeds at various heights within the built environment based on detailed mapping of its aerodynamic characteristics. Based on the current methodology, additional estimates of turbulence intensities and EEC allow a more complete assessment of the available wind resource. The methodology is applied to 4 UK cities, with results showing the potential of mapping turbulence intensities and the total wind energy available at different heights within each city. Considering the effect of ambient turbulence and choice of wind system, the wind resource over neighbourhood regions (of 250 m uniform resolution) and building rooftops within the 4 cities was assessed, with results highlighting the promise of mapping potential turbine sites within each city.
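The sketch below shows one simplified way to compute the two mapped quantities from a wind-speed time series: turbulence intensity as the ratio of standard deviation to mean, and an excess energy content taken here as the extra cubed-wind-speed energy available when gusts are followed over a given control response time. The synthetic series and the exact EEC definition are illustrative assumptions rather than the paper's formulation.

```python
# Hedged sketch: turbulence intensity and a simplified excess energy content
# (EEC) from a wind-speed time series. Synthetic data stands in for
# roughness-sublayer measurements.
import numpy as np

rng = np.random.default_rng(3)
fs = 10                                          # Hz, assumed sampling rate
u = 5.0 + 1.2 * rng.standard_normal(fs * 600)    # 10 min of synthetic wind speed (m/s)
u = np.clip(u, 0.1, None)

U, sigma = u.mean(), u.std()
ti = sigma / U                                   # turbulence intensity

def excess_energy_content(u, tau_s):
    """Simplified EEC for a control response time tau_s (seconds)."""
    n = int(tau_s * fs)
    blocks = u[: len(u) // n * n].reshape(-1, n).mean(axis=1)   # gust-following averages
    return (np.mean(blocks ** 3) - U ** 3) / U ** 3

print(f"turbulence intensity: {ti:.2f}")
for tau in (1, 10, 60):
    print(f"EEC (tau = {tau:>2} s): {excess_energy_content(u, tau):.3f}")
```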

Keywords: excess energy content, small-scale wind, turbulence intensity, urban wind energy, wind resource assessment

Procedia PDF Downloads 469
326 Simple Finite-Element Procedure for Modeling Crack Propagation in Reinforced Concrete Bridge Deck under Repetitive Moving Truck Wheel Loads

Authors: Rajwanlop Kumpoopong, Sukit Yindeesuk, Pornchai Silarom

Abstract:

Modeling cracks in concrete is complicated by its strain-softening behavior, which requires the use of sophisticated energy criteria of fracture mechanics to assure stable and convergent solutions in the finite-element (FE) analysis, particularly for relatively large structures. However, for small-scale structures such as beams and slabs, a simpler approach that relies on retaining some shear stiffness in the cracking plane has been adopted in the literature to model the strain-softening behavior of concrete under monotonically increased loading. According to the shear retaining approach, each element is assumed to be an isotropic material prior to cracking of concrete. Once an element is cracked, the isotropic element is replaced with an orthotropic element in which the new orthotropic stiffness matrix is formulated with respect to the crack orientation. A shear transfer factor of 0.5 is used parallel to the crack plane. The shear retaining approach is adopted in this research to model cracks in an RC bridge deck, with some modifications to take into account the effect of repetitive moving truck wheel loads as they cause fatigue cracking of concrete. The first modification is the introduction of fatigue tests of concrete and reinforcing steel and the Palmgren-Miner linear criterion of cumulative damage into the conventional FE analysis. For a certain loading, the number of cycles to failure of each concrete or RC element can be calculated from the fatigue or S-N curves of concrete and reinforcing steel. The elements with the minimum number of cycles to failure are the failed elements. For the elements that do not fail, the damage is accumulated according to the Palmgren-Miner linear criterion of cumulative damage. The stiffness of the failed element is modified and the procedure is repeated until the deck slab fails. The total number of load cycles to failure of the deck slab can then be obtained, from which the S-N curve of the deck slab can be simulated. The second modification concerns the shear transfer factor. Moving loading causes continuous rubbing of crack interfaces, which greatly reduces the shear transfer mechanism. It is therefore conservatively assumed in this study that the analysis is conducted with a shear transfer factor of zero for the case of moving loading. A customized FE program has been developed using the MATLAB software to accommodate such modifications. The developed procedure has been validated with the fatigue test of the 1/6.6-scale AASHTO bridge deck under the application of both fixed-point repetitive loading and moving loading presented in the literature. Results show good agreement for both experimental vs. simulated S-N curves and observed vs. simulated crack patterns. A significant contribution of the developed procedure is a series of S-N relations which can now be simulated at any desired level of cracking in addition to the experimentally derived S-N relation at the failure of the deck slab. This permits the systematic investigation of crack propagation or deterioration of RC bridge decks, which appears to be useful information for highway agencies to prolong the life of their bridge decks.
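The sketch below mirrors the element-level bookkeeping in the procedure just described: cycles to failure come from an assumed S-N curve, damage accumulates by the Palmgren-Miner rule, and elements reaching a damage ratio of 1.0 are flagged as failed (where the full procedure would then modify their stiffness and re-run the FE analysis). The S-N constants and per-element stress ranges are illustrative assumptions.

```python
# Minimal sketch of per-element fatigue damage accumulation with Miner's rule.
# S-N constants and element stress ranges are illustrative assumptions.
import numpy as np

A, m = 2.0e12, 3.0                                     # assumed S-N curve: N = A * S**(-m)
stress_range = np.array([60.0, 85.0, 110.0, 140.0])    # MPa, per element (assumed)
damage = np.zeros_like(stress_range)
cycles_per_block = 1.0e5
failed = np.zeros_like(stress_range, dtype=bool)

block = 0
while not failed.all() and block < 1000:
    block += 1
    n_fail = A * stress_range ** (-m)                        # cycles to failure per element
    damage[~failed] += cycles_per_block / n_fail[~failed]    # Miner's rule
    newly_failed = (damage >= 1.0) & ~failed
    if newly_failed.any():
        failed |= newly_failed
        # In the full procedure the stiffness of failed elements is reduced and
        # stresses are redistributed; here we simply report the event.
        print(f"block {block}: elements {np.flatnonzero(newly_failed)} failed")

print("total load cycles simulated:", block * cycles_per_block)
```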

Keywords: bridge deck, cracking, deterioration, fatigue, finite-element, moving truck, reinforced concrete

Procedia PDF Downloads 245
325 The Structure and Development of a Wing Tip Vortex under the Effect of Synthetic Jet Actuation

Authors: Marouen Dghim, Mohsen Ferchichi

Abstract:

The effect of synthetic jet actuation on the roll-up and the development of a wing tip vortex downstream of a square-tipped rectangular wing was investigated experimentally using hotwire anemometry. The wing is equipped with a hollow cavity designed to generate high aspect ratio synthetic jets blowing at an angle with respect to the spanwise direction. The structure of the wing tip vortex under the effect of fluidic actuation was examined at a chord Reynolds number Re_c=8×10^4. An extensive qualitative study on the effect of actuation on the spanwise pressure distribution at c⁄4 was achieved using pressure scanner measurements in order to determine the optimal actuation parameters, namely the blowing momentum coefficient, Cμ, and the non-dimensionalized actuation frequency, F^+. A qualitative study on the effect of actuation parameters on the spanwise pressure distribution showed that optimal actuation frequencies of the synthetic jet were found within the range amplified by both long and short wave instabilities, where spanwise pressure coefficients exhibited a considerable decrease of up to 60%. The vortex appeared larger and more diffuse than that of the natural vortex case. Operating the synthetic jet seemed to introduce unsteadiness and turbulence into the vortex core. Based on the ‘a priori’ optimal selected parameters, results of the hotwire wake survey indicated that the actuation achieved a reduction and broadening of the axial velocity deficit. A decrease in the peak tangential velocity associated with an increase in the vortex core radius was reported as a result of the accelerated radial transport of angular momentum. The peak vorticity level near the core was also found to be largely diffused as a direct result of the increased turbulent mixing within the vortex. The wing tip vortex thus exhibited a reduced strength and a diffused core as a direct result of increased turbulent mixing due to the presence of turbulent small scale vortices within its core. It is believed that the increased turbulence within the vortex due to the synthetic jet control was the main mechanism associated with the decreased strength and increased size of the wing tip vortex as it evolves downstream. A comparison with a ‘non-optimal’ case was included to demonstrate the effectiveness of selecting the appropriate control parameters. The synthetic jet will be operated at various actuation configurations, and an extensive parametric study is projected to determine the optimal actuation parameters.
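For orientation, the sketch below evaluates the usual definitions of the two control parameters named above, the blowing momentum coefficient Cμ and the reduced actuation frequency F^+, for an assumed geometry and jet condition consistent with a chord Reynolds number of about 8×10^4; all numbers are illustrative assumptions, not the experimental values.

```python
# Hedged sketch of the standard definitions of C_mu and F+ for a synthetic jet.
# Geometry and jet conditions are illustrative assumptions.
rho = 1.2                 # kg/m^3, air density
u_inf = 12.0              # m/s, freestream velocity (Re_c ~ 8e4 on a ~0.1 m chord)
chord = 0.10              # m, assumed chord
span = 0.30               # m, assumed semi-span
s_ref = chord * span      # reference area

u_jet = 30.0              # m/s, assumed peak synthetic-jet velocity
slot_area = 1.0e-4        # m^2, assumed jet slot exit area
f_act = 600.0             # Hz, assumed actuation frequency

c_mu = (rho * u_jet**2 * slot_area) / (0.5 * rho * u_inf**2 * s_ref)   # momentum coefficient
f_plus = f_act * chord / u_inf                                         # reduced frequency

print(f"C_mu = {c_mu:.3f}")
print(f"F+   = {f_plus:.1f}")
```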

Keywords: flow control, hotwire anemometry, synthetic jet, wing tip vortex

Procedia PDF Downloads 430
324 Virtual Reality for Chemical Engineering Unit Operations

Authors: Swee Kun Yap, Sachin Jangam, Suraj Vasudevan

Abstract:

Experiential learning is dubbed as a highly effective way to enhance learning. Virtual reality (VR) is thus a helpful tool in providing a safe, memorable, and interactive learning environment. A class of 49 fluid mechanics students participated in starting up a pump, one of the most used equipment in the chemical industry, in VR. They experience the process in VR to familiarize themselves with the safety training and the standard operating procedure (SOP) in guided mode. Students subsequently observe their peers (in groups of 4 to 5) complete the same training. The training first brings each user through the personal protection equipment (PPE) selection, before guiding the user through a series of steps for pump startup. One of the most common feedback given by industries include the weakness of our graduates in pump design and operation. Traditional fluid mechanics is a highly theoretical module loaded with engineering equations, providing limited opportunity for visualization and operation. With VR pump, students can now learn to startup, shutdown, troubleshoot and observe the intricacies of a centrifugal pump in a safe and controlled environment, thereby bridging the gap between theory and practical application. Following the completion of the guided mode operation, students then individually complete the VR assessment for pump startup on the same day, which requires students to complete the same series of steps, without any cues given in VR to test their recollection rate. While most students miss out a few minor steps such as the checking of lubrication oil and the closing of minor drain valves before pump priming, all the students scored full marks in the PPE selection, and over 80% of the students were able to complete all the critical steps that are required to startup a pump safely. The students were subsequently tested for their recollection rate by means of an online quiz 3 weeks later, and it is again found that over 80% of the students were able to complete the critical steps in the correct order. In the survey conducted, students reported that the VR experience has been enjoyable and enriching, and 79.5% of the students voted to include VR as a positive supplementary exercise in addition to traditional teaching methods. One of the more notable feedback is the higher ease of noticing and learning from mistakes as an observer rather than as a VR participant. Thus, the cycling between being a VR participant and an observer has helped tremendously in their knowledge retention. This reinforces the positive impact VR has on learning.

Keywords: experiential learning, learning by doing, pump, unit operations, virtual reality

Procedia PDF Downloads 131
323 The Role of Nurses and Midwives’ Self-Government in Postgraduate Education in Poland

Authors: Tomasz Holecki, Hanna Dobrowolska

Abstract:

In the Polish health care system, nurses and midwives are obliged to regularly update their professional knowledge. It is all regulated by the Law on the nurse and midwife's profession and the code of ethics. The professional self-governing body (County Chamber of Nurses and Midwives) is obliged to organize ongoing training for them, so that accessibility and availability of high-quality educational services can be maintained at all levels of postgraduate education. The aim of this study is an analysis of postgraduate education organized by the County Chamber of Nurses and Midwives in the city of Katowice, Poland, a professional self-governing body operating in the Silesian province, which is inhabited by almost 5 million citizens, and bringing together more than 30 thousand professionally active nurses and midwives. In the years 2000-2017, the self-government of nurses and midwives trained over 50,000 people. The system of education and supervision over the work of nurses and midwives provides for control exercised by the self-governing body. In practice, this means that conducting activities aimed at creating legal regulations and organizational conditions, as well as the practical implementation of courses, belongs to the professional self-government of nurses and midwives. Most of the specialization courses were provided from the chamber's own funds, which come from membership fees. The biggest group consisted of participants of specializations in the fields of cardiac, anesthesia, and preventive nursing. The smallest group of people participated in such specializations as neonatal, emergency, and obstetrics nursing. The most popular specialist courses were in the fields of the electrocardiogram and cardiopulmonary resuscitation, whereas the least popular were the ones in the field of protective vaccinations of neonates. So-called 'soft' training courses in the fields of improvement of social skills and management were also provided. The research shows that a vast majority of nurses and midwives are interested in raising their professional qualifications. Specialist courses and selected fields of qualification courses received the most attention. In light of the conducted research, one can assert that cooperation inside the community of nurses and midwives provides access to high-quality education and training services regularly used by a wide circle of them. The presented results exemplify the level of real interest in specialist and qualification training courses and also show the sources of their financing.

Keywords: nurses and midwives, ongoing training, postgraduate education, specialist training-courses

Procedia PDF Downloads 100
322 Public-Private Partnership Projects in Canada: A Case Study Approach

Authors: Samuel Carpintero

Abstract:

Public-private partnership (PPP) arrangements have emerged all around the world as a response to infrastructure deficits and the need to refurbish existing infrastructure. The motivations of governments for embarking on PPPs for the delivery of public infrastructure are manifold, and include on-time and on-budget delivery as well as access to private project management expertise. The PPP formula has been used by some state governments in the United States and Canada, where the participation of private companies in financing and managing infrastructure projects has increased significantly in the last decade, particularly in the transport sector. On the one hand, this paper examines the various ways used in these two countries in the implementation of PPP arrangements, with a particular focus on risk transfer. The examination of risk transfer in this paper is carried out with reference to the following key PPP risk categories: construction risk, revenue risk, operating risk and availability risk. The main difference between the two countries is that in Canada the demand risk usually remains within the public sector, whereas in the United States this risk is usually transferred to the private concessionaire. The aim is to explore which lessons can be learnt from both models that might be useful for other countries. On the other hand, the paper also analyzes why Spanish companies have been so successful in winning PPP contracts in North America during the past decade. Contrary to the Latin American PPP market, the Spanish companies do not have any cultural advantage in the case of the United States and Canada. Arguably, some relevant reasons for the success of the Spanish groups are their extensive experience in PPP projects (that dates back to the late 1960s in some cases), their high technical level (that allows them to be aggressive in their bids), and their good position and track-record in the financial markets. The article’s empirical base consists of data provided by official sources of both countries as well as information collected through face-to-face interviews with public and private representatives of the stakeholders participating in some of the PPP schemes. Interviewees include private project managers of the concessionaires, representatives of banks involved as financiers in the projects, and experts in the PPP industry with close knowledge of the North American market. Unstructured in-depth interviews have been adopted as a means of investigation for this study because of their power to elicit honest and robust responses and to ensure realism in the collection of an overall impression of stakeholders’ perspectives.

Keywords: PPP, concession, infrastructure, construction

Procedia PDF Downloads 292
321 Optimizing Cell Culture Performance in an Ambr15 Microbioreactor Using Dynamic Flux Balance and Computational Fluid Dynamic Modelling

Authors: William Kelly, Sorelle Veigne, Xianhua Li, Zuyi Huang, Shyamsundar Subramanian, Eugene Schaefer

Abstract:

The ambr15™ bioreactor is a single-use microbioreactor for cell line development and process optimization. The ambr system offers fully automatic liquid handling with the possibility of fed-batch operation and automatic control of pH and oxygen delivery. With operating conditions for large scale biopharmaceutical production properly scaled down, microbioreactors such as the ambr15™ can potentially be used to predict the effect of process changes such as modified media or different cell lines. In this study, gassing rates and dilution rates were varied for a semi-continuous cell culture system in the ambr15™ bioreactor. The corresponding changes to metabolite production and consumption, as well as cell growth rate and therapeutic protein production, were measured. Conditions were identified in the ambr15™ bioreactor that produced metabolic shifts and specific metabolic and protein production rates also seen in the corresponding larger (5 liter) scale perfusion process. A Dynamic Flux Balance model was employed to understand and predict the metabolic changes observed. The DFB model predicted the trends observed experimentally, including lower specific glucose consumption when CO₂ was maintained at higher levels (i.e., 100 mm Hg) in the broth. A Computational Fluid Dynamic (CFD) model of the ambr15™ was also developed to understand the transfer of O₂ and CO₂ to the liquid. This CFD model predicted gas-liquid flow in the bioreactor using the ANSYS software. The two-phase flow equations were solved via an Eulerian method, with population balance equations tracking the size of the gas bubbles resulting from breakage and coalescence. Reasonable results were obtained in that the carbon dioxide mass transfer coefficient (kLa) and the air hold-up increased with higher gas flow rate. Volume-averaged kLa values at 500 RPM increased as the gas flow rate was doubled and matched experimentally determined values. These results form a solid basis for optimizing the ambr15™, using both CFD and FBA modelling approaches together, for use in microscale simulations of larger scale cell culture processes.
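As a structural illustration of how a dynamic flux balance simulation of such a semi-continuous culture is typically organized, the sketch below integrates biomass and glucose balances in time, with the inner FBA solve replaced by a simple Monod-style placeholder so the example stays self-contained; every constant and the placeholder kinetics are illustrative assumptions, not the model used in the study.

```python
# Hedged sketch of a dynamic flux balance (DFB)-style loop for a semi-continuous
# culture. The inner FBA solve is replaced by a Monod-style placeholder so the
# sketch is self-contained; all constants are illustrative assumptions.
import numpy as np

def fba_exchange_fluxes(glc, co2_mmHg):
    """Placeholder for the FBA solve: returns (mu [1/h], q_glc [mmol/gDW/h])."""
    q_glc_max, Ks = 0.25, 0.5
    q_glc = q_glc_max * glc / (Ks + glc)
    if co2_mmHg >= 100.0:            # mimic the lower glucose uptake seen at high pCO2
        q_glc *= 0.8
    mu = 0.03 * q_glc / q_glc_max
    return mu, q_glc

dt, hours = 0.1, 96.0
D = 0.02                             # 1/h, assumed dilution (semi-continuous feed)
glc_feed = 30.0                      # mM glucose in the feed (assumed)
X, glc = 0.5e6, 25.0                 # cells/mL and mM, initial conditions (assumed)
gDW_per_cell = 250e-12               # gDW per cell (assumed)

for _ in np.arange(0.0, hours, dt):
    mu, q_glc = fba_exchange_fluxes(glc, co2_mmHg=100.0)
    X += dt * (mu * X - D * X)
    glc += dt * (D * (glc_feed - glc) - q_glc * X * gDW_per_cell * 1e3)

print(f"final cell density: {X:.3e} cells/mL, residual glucose: {glc:.1f} mM")
```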

Keywords: cell culture, computational fluid dynamics, dynamic flux balance analysis, microbioreactor

Procedia PDF Downloads 272
320 Laser-Dicing Modeling: Implementation of a High Accuracy Tool for Laser-Grooving and Cutting Application

Authors: Jeff Moussodji, Dominique Drouin

Abstract:

The highly complex technology requirements of today’s integrated circuits (ICs) lead to the increased use of several material types, such as metal structures and brittle, porous low-k materials, in both front-end-of-line (FEOL) and back-end-of-line (BEOL) processes for wafer manufacturing. In order to singulate chips from the wafer, a critical laser-grooving process, prior to blade dicing, is used to remove these layers of material from the dicing street. The combination of laser grooving and blade dicing makes it possible to reduce the risk of induced mechanical defects such as micro-cracks and chipping on the wafer top surface where the circuitry is located. It therefore seems essential to have a fundamental understanding of the physics involved in laser dicing in order to maximize control of these critical processes and reduce their undesirable effects on process efficiency, quality, and reliability. In this paper, the study is based on the convergence of two approaches, numerical and experimental, which allowed us to investigate the interaction of a nanosecond pulsed laser with BEOL wafer materials. To evaluate this interaction, several laser-grooved samples were compared with finite element modeling, in which three different aspects were considered: phase change, thermo-mechanical behavior, and optically sensitive parameters. The mathematical model makes it possible to predict the groove profile (depth, width, etc.) of a single pulse or multiple pulses on BEOL wafer material. Moreover, the heat-affected zone and the thermo-mechanical stress can also be predicted as a function of the laser operating parameters (power, frequency, spot size, defocus, speed, etc.). After model validation and calibration, a satisfying correlation between experimental and modeling results was observed in terms of groove depth, width and heat-affected zone. The study proposed in this work is a first step toward implementing a quick assessment tool for the design and debugging of multiple laser-grooving conditions with limited experiments on hardware in industrial applications. More correlations and validation tests are in progress and will be included in the full paper.
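
For orientation, the sketch below gives a first-order groove-profile estimate using the common logarithmic ablation law applied to the local fluence of a Gaussian spot, with pulses accumulated along the scan direction. This is far simpler than the multiphysics finite element model used in the paper; spot size, pulse energy, absorption coefficient and threshold fluence are assumed placeholders.

```python
import numpy as np

w0      = 15e-6      # 1/e^2 spot radius, m (assumed)
E_pulse = 30e-6      # pulse energy, J (assumed)
alpha   = 1.0e7      # effective absorption coefficient, 1/m (assumed)
F_th    = 0.5e4      # ablation threshold fluence, J/m^2 (assumed)
f_rep   = 100e3      # repetition rate, Hz
v_scan  = 0.5        # scan speed, m/s
pitch   = v_scan / f_rep             # centre-to-centre pulse spacing

F0 = 2 * E_pulse / (np.pi * w0**2)   # peak fluence of a Gaussian beam

x = np.linspace(-60e-6, 60e-6, 601)  # coordinate across the dicing street, m
depth = np.zeros_like(x)
# accumulate 11 pulses centred on x = 0, offset along the scan (y) direction;
# depth per pulse follows d = (1/alpha) * ln(F / F_th) above threshold
for k in range(-5, 6):
    r2 = x**2 + (k * pitch)**2
    F  = F0 * np.exp(-2 * r2 / w0**2)
    depth += np.where(F > F_th, np.log(F / F_th) / alpha, 0.0)

groove = x[depth > 0]
print(f"peak fluence {F0 / 1e4:.1f} J/cm^2, "
      f"groove depth {depth.max() * 1e6:.2f} um, "
      f"width {(groove[-1] - groove[0]) * 1e6:.1f} um")
```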

Keywords: laser-dicing, nano-second pulsed laser, wafer multi-stack, multiphysics modeling

Procedia PDF Downloads 205
319 Evaluation of Physical Parameters and in-Vitro and in-Vivo Antidiabetic Activity of a Selected Combined Medicinal Plant Extracts Mixture

Authors: S. N. T. I. Sampath, J. M. S. Jayasinghe, A. P. Attanayake, V. Karunaratne

Abstract:

Diabetes mellitus is one of the major public health problems worldwide, with a rising incidence and associated mortality. Insufficient regulation of the blood glucose level can have serious health consequences, and it is necessary to identify new therapeutics capable of reducing hyperglycaemia in the human body. Even though synthetic antidiabetic drugs are effective in controlling diabetes mellitus, considerable side effects have been reported. Thus, there is an increasing demand for new natural products with high antidiabetic activity and fewer side effects. The purposes of the present study were to evaluate the physical parameters and the in-vitro and in-vivo antidiabetic potential of a selected combined medicinal plant extracts mixture composed of leaves of Murraya koenigii, cloves of Allium sativum, fruits of Garcinia quaesita and seeds of Piper nigrum. The selected plant parts were mixed, ground together and extracted sequentially into hexane, ethyl acetate and methanol. The solvents were evaporated, and the extracts were further dried by freeze-drying to obtain a fine powder of each extract. Physical parameters such as moisture, total ash, acid-insoluble ash and water-soluble ash were evaluated using standard test procedures. The in-vitro antidiabetic activity of the combined plant extracts mixture was screened using α-amylase and α-glucosidase inhibition assays. The acute anti-hyperglycaemic activity was assessed with an oral glucose tolerance test in streptozotocin-induced diabetic Wistar rats to determine the in-vivo antidiabetic activity, quantified through the total area under the oral glucose tolerance curve (TAUC). The percentages of moisture, total ash, acid-insoluble ash and water-soluble ash content ranged over 7.6-17.8, 8.1-11.78, 0.019-0.134 and 6.2-9.2, respectively, for the plant extracts, and these values were below the standard limits except for the methanol extract. The hexane and ethyl acetate extracts exhibited higher α-amylase (IC50 = 25.7 ±0.6 and 27.1 ±1.2 ppm) and α-glucosidase (IC50 = 22.4 ±0.1 and 33.7 ±0.2 ppm) inhibitory activities than the methanol extract (IC50 = 360.2 ±0.6 and 179.6 ±0.9 ppm), compared with the acarbose positive control (IC50 = 5.7 ±0.4 and 17.1 ±0.6 ppm). The TAUC values for rats treated with the hexane, ethyl acetate and methanol extracts and glibenclamide (positive control) were 8.01 ±0.66, 8.05 ±1.07, 8.40 ±0.50 and 5.87 ±0.93 mmol/L.h, respectively, whereas the TAUC value for the diabetic control rats was 13.22 ±1.07 mmol/L.h. Administration of the plant extracts significantly suppressed (p<0.05) the rise in plasma glucose levels compared to control rats, although less markedly than glibenclamide. The in-vivo and in-vitro antidiabetic results show that the hexane and ethyl acetate extracts of the selected combined plant mixture may be considered a potential source for isolating natural antidiabetic agents, and their physical parameters will be helpful in developing a further standardized antidiabetic drug.
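
The TAUC metric reported above is the area under the glucose-time curve from the oral glucose tolerance test; a minimal sketch of that calculation using the trapezoidal rule is given below. The glucose readings are hypothetical placeholders, while the diabetic-control TAUC value is the one quoted in the abstract.

```python
import numpy as np

t_h     = np.array([0.0, 0.5, 1.0, 2.0, 3.0])     # sampling times, h
glucose = np.array([7.5, 11.2, 10.1, 8.4, 7.8])   # plasma glucose, mmol/L (hypothetical)

# trapezoidal rule: sum of mean glucose over each interval times interval length
tauc = float(np.sum((glucose[1:] + glucose[:-1]) / 2 * np.diff(t_h)))
print(f"TAUC = {tauc:.2f} mmol/L.h")

# relative suppression versus the diabetic-control TAUC reported in the abstract
tauc_control = 13.22
print(f"suppression = {100 * (tauc_control - tauc) / tauc_control:.1f} %")
```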

Keywords: diabetes mellitus, in-vitro antidiabetic assays, medicinal plants, standardization

Procedia PDF Downloads 125
318 A Broadband Tri-Cantilever Vibration Energy Harvester with Magnetic Oscillator

Authors: Xiaobo Rui, Zhoumo Zeng, Yibo Li

Abstract:

A novel tri-cantilever energy harvester with a magnetic oscillator is presented, which converts ambient vibration into electrical energy to power low-power devices such as wireless sensor networks. The most common way to harvest vibration energy is based on linear resonant devices such as cantilever beams, since this structure creates the highest strain for a given force. The highest efficiency is achieved when the resonance frequency of the harvester matches the vibration frequency; the limitation of this structure is its narrow effective bandwidth. To overcome this limitation, this article introduces a broadband tri-cantilever harvester with nonlinear stiffness. The energy harvester consists of three thin cantilever beams arranged vertically, each with a neodymium (NdFeB) magnet at its free end and a fixed base at the other end. The three cantilevers have different resonant frequencies because they are designed with different thicknesses, which builds in the same advantage of multiple resonant frequencies as a piezoelectric cantilever array structure. To achieve broadband energy harvesting, magnetic interaction is used to introduce a nonlinear system stiffness that tunes the resonant frequency to match the excitation. Since the three cantilever tips are all free and the magnetic force is distance dependent, the resonant frequencies change in a complex way with the vertical vibration of the free ends. Both a model and an experiment were built. An electromechanically coupled lumped-parameter model is presented, with an electromechanical formulation and analytical expressions for the coupled nonlinear vibration response and voltage response. The entire structure was fabricated and mechanically attached via the fixed base to an electromagnetic shaker acting as the vibrating body, in order to couple the vibrations to the cantilevers. The cantilevers are bonded with piezoelectric macro-fiber composite (MFC) patches (model M8514P2). The cantilevers measure 120 × 20 mm², with thicknesses of 1 mm, 0.8 mm and 0.6 mm, respectively. The prototype generator achieved a measured effective electrical power of 160.98 mW and a DC output voltage of 7.93 V at an excitation level of 10 m/s². A 130% increase in the operating bandwidth was achieved. This device is promising for supporting low-power devices, peer-to-peer wireless nodes, and small-scale wireless sensor networks in ambient vibration environments.
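
A minimal sketch of the kind of electromechanically coupled lumped-parameter model mentioned above is shown below for a single cantilever: a Duffing-type oscillator, in which a cubic stiffness term stands in for the distance-dependent magnetic force, coupled to a resistive electrical load. All parameter values are assumed placeholders rather than the prototype's identified parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, c, k  = 5e-3, 0.05, 80.0    # mass kg, damping N·s/m, linear stiffness N/m (assumed)
k3       = 2e6                 # cubic (magnet-induced) stiffness, N/m^3 (assumed)
theta    = 1e-4                # electromechanical coupling, N/V (assumed)
Cp, R    = 50e-9, 1e5          # piezo capacitance F, load resistance Ohm (assumed)
A, f_exc = 10.0, 20.0          # base acceleration m/s^2, drive frequency Hz

def rhs(t, y):
    x, v, V = y
    F = m * A * np.cos(2 * np.pi * f_exc * t)              # base excitation force
    dv = (F - c * v - k * x - k3 * x**3 - theta * V) / m   # mechanical DOF
    dV = (theta * v - V / R) / Cp                          # electrical DOF (RC load)
    return [v, dv, dV]

sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.0, 0.0], max_step=1e-4)
V_steady = sol.y[2][sol.t > 1.0]                           # discard the transient
print(f"rms load voltage {np.sqrt(np.mean(V_steady**2)):.2f} V, "
      f"mean power {np.mean(V_steady**2) / R * 1e3:.2f} mW")
```

Sweeping `f_exc` with such a model is one way to visualise how the cubic term widens the effective bandwidth compared with the purely linear case.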

Keywords: tri-cantilever, ambient vibration, energy harvesting, magnetic oscillator

Procedia PDF Downloads 151
317 Knowledge Management Barriers: A Statistical Study of Hardware Development Engineering Teams within Restricted Environments

Authors: Nicholas S. Norbert Jr., John E. Bischoff, Christopher J. Willy

Abstract:

Knowledge Management (KM) is globally recognized as a crucial element in securing competitive advantage through building and maintaining organizational memory, codifying and protecting intellectual capital and business intelligence, and providing mechanisms for collaboration and innovation. KM frameworks and approaches have been developed and defined, identifying critical success factors for conducting KM within numerous industries ranging from scientific to business, and for organization scales ranging from small groups to large enterprises. However, engineering and technical teams operating within restricted environments are subject to unique barriers and KM challenges that cannot be directly treated using the approaches and tools prescribed for other industries. This research identifies barriers to conducting KM within Hardware Development Engineering (HDE) teams and statistically compares the significance of these barriers against the four KM pillars of organization, technology, leadership, and learning for HDE teams. HDE teams suffer from restrictions in knowledge sharing (KS) due to classification of information (national security risks), customer proprietary restrictions (non-disclosure agreements covering designs), the types of knowledge involved, the complexity of the knowledge to be shared, and knowledge-seeker expertise. As KM has evolved, leveraging information technology (IT) and web-based tools and approaches from Web 1.0 to Enterprise 2.0, KM may also seek to leverage emergent tools and analytics, including expert locators and hybrid recommender systems, to enable KS across the barriers of these technical teams. The research statistically tests the hypothesis that KM barriers for HDE teams affect the general set of expected benefits of a KM system identified through previous research. If correlations are identified, then generalizations of success factors and approaches may also be garnered for HDE teams. Expert elicitation will be conducted using an internet-hosted questionnaire delivered to a panel of experts including engineering managers, principal and lead engineers, senior systems engineers, and knowledge management experts. The feedback to the questionnaire will be processed using analysis of variance (ANOVA) to identify and rank statistically significant barriers for HDE teams within the four KM pillars. Subsequently, KM approaches will be recommended for upholding the KM pillars within the restricted environments of HDE teams.
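
As an illustration of the planned ANOVA step, the sketch below compares hypothetical expert ratings of barrier severity across the four KM pillars with a one-way ANOVA; the rating vectors are placeholders for the actual questionnaire responses.

```python
from scipy import stats

# hypothetical 1-5 severity ratings from six experts for each pillar
organization = [4, 5, 3, 4, 5, 4]
technology   = [2, 3, 3, 2, 4, 3]
leadership   = [4, 4, 5, 5, 4, 5]
learning     = [3, 2, 3, 4, 3, 2]

f_stat, p_value = stats.f_oneway(organization, technology, leadership, learning)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# a small p-value would indicate that perceived barrier severity differs
# significantly between pillars, supporting a ranking of the barriers
```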

Keywords: engineering management, knowledge barriers, knowledge management, knowledge sharing

Procedia PDF Downloads 269
316 Ultra-Wideband Antennas for Ultra-Wideband Communication and Sensing Systems

Authors: Meng Miao, Jeongwoo Han, Cam Nguyen

Abstract:

Ultra-wideband (UWB) time-domain impulse communication and radar systems use ultra-short duration pulses in the sub-nanosecond regime, instead of continuous sinusoidal waves, to transmit information. The pulse directly generates a very wide-band instantaneous signal with various duty cycles depending on the specific usage. In UWB systems, the total transmitted power is spread over an extremely wide range of frequencies, so the power spectral density is extremely low. This effectively results in extremely small interference to other radio signals while maintaining excellent immunity to interference from these signals. UWB devices can therefore work within frequencies already allocated for other radio services, thus helping to maximize this dwindling resource. The impulse UWB technique is therefore attractive for realizing high-data-rate, short-range communications, ground penetrating radar (GPR), and military radar with relatively low emission power levels. UWB antennas are the key element dictating the transmitted and received pulse shape and amplitude in both the time and frequency domains; they should have a good impulse response with minimal distortion. To facilitate integration with transmitters and receivers employing microwave integrated circuits, UWB antennas enabling direct integration are preferred. We present the development of two UWB antennas operating from 3.1 to 10.6 GHz and from 0.3 to 6 GHz for UWB systems that provide direct integration with microwave integrated circuits. The operation of these antennas is based on the principle of wave propagation on a non-uniform transmission line. Time-domain EM simulation is conducted to optimize the antenna structures to minimize reflections occurring at the open-end transition. Calculated and measured results of these UWB antennas are presented in both the frequency and time domains. The antennas have good time-domain responses: they can transmit and receive pulses effectively with minimal distortion, little ringing, and small reflection, clearly demonstrating the signal fidelity of the antennas in reproducing the waveform of UWB signals, which is critical for UWB sensors and communication systems. Good performance together with seamless microwave integrated-circuit integration makes these antennas good candidates not only for UWB applications but also for integration with printed-circuit UWB transmitters and receivers.
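
One common way to quantify the pulse-preserving behavior described above is the fidelity factor, the peak of the normalised cross-correlation between the transmitted and received waveforms; a minimal sketch with synthetic Gaussian-monocycle pulses is given below. The pulse shapes, delay and noise level are assumptions, not measured data from these antennas.

```python
import numpy as np

t   = np.linspace(-1e-9, 1e-9, 2001)                   # time axis, s
tau = 100e-12                                           # pulse width parameter, s
tx  = -t / tau * np.exp(-t**2 / (2 * tau**2))           # Gaussian monocycle (transmitted)
rx  = 0.8 * np.roll(tx, 40) + 0.02 * np.random.randn(t.size)  # delayed, attenuated, noisy copy

# normalise both waveforms to unit energy, then take the peak cross-correlation
tx_n = tx / np.sqrt(np.sum(tx**2))
rx_n = rx / np.sqrt(np.sum(rx**2))
fidelity = np.max(np.correlate(rx_n, tx_n, mode="full"))
print(f"fidelity factor = {fidelity:.3f}   (1.0 = undistorted pulse)")
```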

Keywords: antennas, ultra-wideband, UWB, UWB communication systems, UWB radar systems

Procedia PDF Downloads 229
315 Leadership Styles and Adoption of Risk Governance in Insurance and Energy Industry: A Comparative Case Study

Authors: Ruchi Agarwal

Abstract:

In today’s world, companies operate in dynamic, uncertain and ambiguous business environments. Globally, more companies than ever are failing due to Environmental, Social and Governance (ESG) factors. Corporate governance and risk management are intertwined in nature and have for decades been influenced by internal and external factors. Three schools of thought have shaped risk governance over that period: agency theory, contingency theory, and institutional theory. Agency theory argues that agents’ interests conflict with those of principals and highlights the information problem. Contingency theory suggests that the adoption of risk management is influenced by internal and external factors, while institutional theory suggests that organizations legitimize risk management with regulators, competitors, and professional bodies. The conflicting propositions of these theories have created problems for executives in the adoption of risk governance. Many studies have discussed risk culture and the role of actors in risk governance, but few discuss the role of risk culture in the adoption of risk governance from a leadership-style perspective. This study explores the adoption of risk governance in two contrasting industries, insurance and energy, to understand whether risk governance is driven by internal and external factors or whether risk culture is shaped by leaders. We draw empirical evidence by comparing the cases of an Indian insurance company and a renewable-energy-based firm in India. We interviewed more than 20 senior executives, collected annual reports, risk management policies, and more than 10 presentations and other reports covering 2017 to 2024, and visited the companies several times for follow-up questions. The findings reveal that both companies used risk governance for the strategic renewal of the company. The insurance company uses a transactional leadership style based on performance and reward to improve risk management, while the energy company relies more on symbolic management to make debt restructuring meaningful for stakeholders. Overall, both companies turned from loss-making into profitable ones within a few years. This comparative study highlights the role of different leadership styles in the adoption of risk governance, and it is distinctive in that previous research has rarely examined risk governance across two contrasting industries from a leadership-style perspective.

Keywords: leadership style, corporate governance, risk management, risk culture, strategic renewal

Procedia PDF Downloads 39
314 Component Test of Martensitic/Ferritic Steels and Nickel-Based Alloys and Their Welded Joints under Creep and Thermo-Mechanical Fatigue Loading

Authors: Daniel Osorio, Andreas Klenk, Stefan Weihe, Andreas Kopp, Frank Rödiger

Abstract:

Future power plants currently face high design requirements due to worsening climate change and environmental restrictions, which demand high operational flexibility, superior thermal performance, minimal emissions, and higher cyclic capability. The aim of the paper is, therefore, to experimentally investigate the creep and thermo-mechanical behavior of improved materials and their welded joints at component scale under near-to-service operating conditions, since these materials are promising for application in highly efficient and flexible future power plants. They promise an increase in flexibility and a reduction in manufacturing costs by providing enhanced creep strength and, therefore, the possibility of wall-thickness reduction. In the temperature range between 550°C and 625°C, the investigation focuses on the in-phase thermo-mechanical fatigue behavior of dissimilar welded joints of conventional materials (the ferritic and martensitic steels T24 and T92) to nickel-based alloys (A617B and HR6W) by means of membrane test panels. The temperature and external load are varied in phase during the test, while the internal pressure remains constant. In the temperature range between 650°C and 750°C, the investigation focuses on the creep behavior under multiaxial stress loading of similar and dissimilar welded joints of high-temperature-resistant nickel-based alloys (A740H, A617B, and HR6W) by means of a thick-walled component test. In this case, the temperature, the external axial load, and the internal pressure remain constant during testing. Numerical simulations are used to estimate the axial component load so that a meaningful damage evolution is induced without causing total component failure. Metallographic investigations after testing will support the understanding of the damage mechanism and of the influence of the thermo-mechanical load and multiaxiality on the microstructure change and on the creep and TMF strength.
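
As a rough indication of the kind of pre-test estimate mentioned above for choosing the axial load, the sketch below combines a thin-wall hoop stress from the internal pressure with a candidate axial stress into a von Mises equivalent and evaluates a Norton power-law creep rate. The Norton constants and the geometry are assumed placeholders, not the materials or components tested in the paper.

```python
import numpy as np

A_norton, n = 2.0e-16, 5.0     # Norton law constants (assumed, units MPa^-n / h)
p, r_i, wall = 20.0, 50.0, 10.0  # internal pressure MPa, inner radius mm, wall thickness mm
sigma_ax     = 60.0            # candidate external axial stress, MPa (assumed)

sigma_hoop = p * r_i / wall    # thin-wall hoop stress estimate
# von Mises equivalent for a biaxial state (radial stress neglected)
sigma_eq = np.sqrt(sigma_ax**2 - sigma_ax * sigma_hoop + sigma_hoop**2)
rate = A_norton * sigma_eq**n  # steady-state creep strain rate, 1/h
print(f"equivalent stress {sigma_eq:.1f} MPa, "
      f"creep strain after 10,000 h ~ {rate * 1e4 * 100:.2f} %")
```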

Keywords: creep, creep-fatigue, component behaviour, weld joints, high temperature material behaviour, nickel-alloys, high temperature resistant steels

Procedia PDF Downloads 113
313 Availability Analysis of Process Management in the Equipment Maintenance and Repair Implementation

Authors: Onur Ozveri, Korkut Karabag, Cagri Keles

Abstract:

Production downtime and repair costs arising from machine failures are an important issue in machine-intensive production industries. When more than one machine fails at the same time, the key questions are which machines should be repaired first, how to determine the optimal repair time to allot to these machines, and how to plan the resources needed for the repairs. In recent years, the Business Process Management (BPM) technique has brought effective solutions to various business problems. The main feature of this technique is that it can improve the way a job is done by examining the work of interest in detail. In industry, maintenance and repair work operates as a process, and when a breakdown occurs, the repair work is carried out as a series of process steps. Evaluating the maintenance main process and the repair sub-process with the process management technique therefore appears to offer a solution. For this reason, this issue was examined in an international manufacturing company, and a solution proposal was developed. The purpose of this study is to implement maintenance and repair work integrated with the process management technique and, at the end of the implementation, to analyze maintenance-related parameters such as quality, cost, time, safety and spare parts. The international firm that carried out the application operates in a free zone in Turkey, and its core business is producing original equipment technologies, vehicle electrical construction, electronics, and safety and thermal systems for the world's leading light and heavy vehicle manufacturers. First, a project team was established in the firm. The team reviewed the current maintenance process and revised it using process management techniques; the repair process, a sub-process of the maintenance process, was also reconsidered. In the improved processes, the ABC equipment classification technique was used to decide which machine or machines are given priority in case of failure. This technique prioritizes malfunctioning machines based on their effect on production, product quality, maintenance costs and job safety. The improved maintenance and repair processes were implemented in the company for three months, and the data obtained were compared with the previous year's data. In conclusion, breakdown maintenance was found to be completed in a shorter time, at lower cost and with a lower spare parts inventory.
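
A minimal sketch of an ABC-style prioritisation consistent with the description above is given below: each machine receives a weighted score over the criteria named in the study and is binned by cumulative share. Machine names, scores, weights and class thresholds are hypothetical placeholders.

```python
# weighted criteria: effect on production, product quality, maintenance cost, job safety
weights = {"production": 0.4, "quality": 0.25, "cost": 0.2, "safety": 0.15}
machines = {
    "press_01": {"production": 9, "quality": 8, "cost": 7, "safety": 6},
    "lathe_02": {"production": 5, "quality": 6, "cost": 4, "safety": 3},
    "robot_03": {"production": 8, "quality": 9, "cost": 6, "safety": 9},
    "oven_04":  {"production": 3, "quality": 2, "cost": 5, "safety": 2},
}

# weighted score per machine, then rank from most to least critical
scores = {name: sum(weights[c] * v for c, v in crit.items())
          for name, crit in machines.items()}
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

total, running = sum(scores.values()), 0.0
for name, score in ranked:
    running += score
    share = running / total
    klass = "A" if share <= 0.5 else "B" if share <= 0.8 else "C"
    print(f"{name}: score {score:.2f}, cumulative {share:.0%}, class {klass}")
```

Class A machines would then receive repair priority when several machines fail simultaneously.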

Keywords: ABC equipment classification, business process management (BPM), maintenance, repair performance

Procedia PDF Downloads 189
312 Reaching New Levels: Using Systems Thinking to Analyse a Major Incident Investigation

Authors: Matthew J. I. Woolley, Gemma J. M. Read, Paul M. Salmon, Natassia Goode

Abstract:

The significance of high-consequence workplace failures within construction continues to resonate, with a combined average of 12 fatal incidents occurring daily throughout Australia, the United Kingdom, and the United States. Within the Australian construction domain, more than 35 serious, compensable injury incidents are reported daily. These alarming figures, in conjunction with the continued occurrence of fatal and serious occupational injury incidents globally, suggest that existing approaches to incident analysis may not be achieving the required injury prevention outcomes. One reason may be that the incident analysis methods used in construction have not kept pace with advances in the field of safety science and are not uncovering the full range of system-wide contributory factors that must be addressed to achieve optimal levels of construction safety performance. Another reason underpinning this global issue may be the absence of information surrounding the construction operating and project delivery system; for example, it is not clear who shares the responsibility for construction safety in different contexts. To respond to this issue, to the authors' best knowledge, a first-of-its-kind control structure model of the construction industry is presented and then used to analyse a fatal construction incident. The model was developed by applying and extending the Systems-Theoretic Accident Model and Processes (STAMP) method to hierarchically represent the actors, constraints, feedback mechanisms, and relationships involved in managing construction safety performance. The Causal Analysis based on Systems Theory (CAST) method was then used to identify the control and feedback failures involved in the fatal incident. The conclusions of the coronial investigation into the event are compared with the findings stemming from the CAST analysis. The CAST analysis highlighted additional issues across the construction system that were not identified in the coroner's recommendations, suggesting a potential benefit in applying a systems theory approach to incident analysis in construction. The findings demonstrate the utility of applying systems-theory-based methods to the analysis of construction incidents. Specifically, this study shows the utility of the construction control structure and the potential benefits for project leaders, construction entities, regulators, and construction clients in controlling construction performance.

Keywords: construction project management, construction performance, incident analysis, systems thinking

Procedia PDF Downloads 124
311 The Impact of COVID-19 on Italian Tourism: the Current Scenario, Opportunity and Future Tourism Organizational Strategies

Authors: Marco Camilli

Abstract:

This article examines the impact of the COVID-19 pandemic outbreak on the tourism sector in Italy, analyzing the current scenario, the government decisions and the private companies' reaction for the 2020 summer season. The data analyzed show how massive the impact of the pandemic outbreak is on tourism revenue, and how weak the proposed measures are. The current COVID-19 scenario is shocking for the tourism and transportation sectors, which could be the most affected by the coronavirus in Italy. According to forecasts, depending on the duration of the epidemic outbreak and the lockdown strategy applied by the government, businesses in the supply chain could lose between 24 and 66 billion euros in turnover over the period 2020-21, with huge and diversified impacts at the national and regional level. Many tourism companies are on the verge of survival and risk closure if no substantial government measures are taken. Considering the two-year period 2020-21, companies operating in the travel and tourism sector (tour operators, travel agencies, hotels, guides, bus companies, etc.) could suffer revenue losses of 24 to 64 billion euros, especially in segments such as travel agencies, hotels and rentals. In April 2020, the Statista Research Department estimated that the COVID-19 pandemic would have a significant impact on the revenues of the tourism industry in Italy: revenues were expected to decrease by over 40 billion euros in the first semester of 2020 compared to the same period of the previous year. According to the study, hotel and non-hotel accommodation would experience the highest loss, with revenues expected to decrease by 13 billion euros compared to the first semester of 2019, when accommodation registered revenues of about 17 billion euros. According to Statista.com, in 2020 Italy was expected to register a decrease of roughly 28.5 million tourist arrivals due to the impact of COVID-19 on the country's tourism sector; the region of Veneto would record the highest drop, with a decrease of roughly 4.61 million arrivals, and Lombardy was expected to register a decrease of about 3.87 million arrivals.

Keywords: travel and tourism, sustainability, COVID-19, businesses, transportation

Procedia PDF Downloads 195
310 Enhancing Seismic Resilience in Urban Environments

Authors: Beatriz González-Rodrigo, Diego Hidalgo-Leiva, Omar Flores, Claudia Germoso, Maribel Jiménez-Martínez, Laura Navas-Sánchez, Belén Orta, Nicola Tarque, Orlando Hernández-Rubio, Miguel Marchamalo, Juan Gregorio Rejas, Belén Benito-Oterino

Abstract:

Cities facing seismic hazard necessitate detailed risk assessments for effective urban planning and vulnerability identification, ensuring the safety and sustainability of urban infrastructure. Comprehensive studies involving seismic hazard, vulnerability, and exposure evaluations are pivotal for estimating potential losses and guiding proactive measures against seismic events. However, broad-scale traditional risk studies limit the consideration of specific local threats and the identification of vulnerable housing within a structural typology. Achieving precise results at the neighbourhood level demands higher-resolution seismic hazard, exposure, and vulnerability studies. This research aims to bolster sustainability and safety against seismic disasters in three Central American and Caribbean capitals. It integrates geospatial techniques and artificial intelligence into seismic risk studies, proposing cost-effective methods for exposure data collection and damage prediction. The methodology relies on prior seismic threat studies in pilot zones, utilizing existing exposure and vulnerability data in the region. Emphasizing detailed building attributes enables the consideration of behaviour modifiers affecting seismic response. The approach aims to generate detailed risk scenarios, facilitating the prioritization of preventive actions before, during, and after seismic events and enhancing decision-making certainty. Detailed risk scenarios necessitate substantial investment in fieldwork, training, research, and methodology development. Regional cooperation becomes crucial given the similar seismic threats, urban planning, and construction systems among the involved countries. The outcomes hold significance for emergency planning and for national and regional construction regulations. The success of this methodology depends on cooperation, investment, and innovative approaches, offering insights and lessons applicable to regions facing moderate seismic threats with vulnerable constructions. Thus, this framework aims to fortify resilience in seismic-prone areas and serves as a reference for global urban planning and disaster management strategies. In conclusion, this research proposes a comprehensive framework for seismic risk assessment in high-risk urban areas, emphasizing detailed studies at finer resolutions for precise vulnerability evaluations. The approach integrates regional cooperation, geospatial technologies, and adaptive fragility-curve adjustments to enhance risk assessment accuracy, guiding effective mitigation strategies and emergency management plans.
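
As a small illustration of the fragility-curve adjustment mentioned above, the sketch below evaluates a lognormal fragility function for a base typology and for the same typology with its median capacity scaled by a behaviour-modifier factor. The median, dispersion and modifier values are assumed placeholders, not results from the pilot zones.

```python
import numpy as np
from scipy.stats import norm

def fragility(im, median, beta):
    """P(damage state exceeded | intensity measure im), lognormal fragility model."""
    return norm.cdf(np.log(im / median) / beta)

pga      = np.linspace(0.05, 1.0, 20)  # peak ground acceleration, g
median_0 = 0.35                        # base median capacity, g (assumed)
beta     = 0.5                         # lognormal dispersion (assumed)
modifier = 0.85                        # behaviour modifier reducing capacity, e.g. soft storey (assumed)

p_base     = fragility(pga, median_0, beta)
p_modified = fragility(pga, modifier * median_0, beta)
print(f"P(exceedance) at 0.3 g: base {np.interp(0.3, pga, p_base):.2f}, "
      f"with modifier {np.interp(0.3, pga, p_modified):.2f}")
```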

Keywords: assessment, behaviour modifiers, emergency management, mitigation strategies, resilience, vulnerability

Procedia PDF Downloads 62