Search results for: input variable disposition
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4224

264 Design of an Ultra High Frequency Rectifier for Wireless Power Systems by Using Finite-Difference Time-Domain

Authors: Felipe M. de Freitas, Ícaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende

Abstract:

There is dispersed energy in radio frequencies (RF) that can be reused to power electronic circuits such as sensors, actuators, and identification devices, among other systems, without wire connections or a battery supply. In this context, there are different types of energy harvesting systems, including rectennas, coil systems, graphene, and new materials. A secondary step of an energy harvesting system is the rectification of the collected signal, which may be carried out, for example, by one or more Schottky diodes connected in series or shunt. In a rectenna-based system, for instance, the diode used must be able to receive low-power signals at ultra-high frequencies; therefore, low values of series resistance, junction capacitance, and potential barrier voltage are required. Due to this low-power condition, voltage multiplier configurations such as voltage doublers or modified bridge converters are used. A low-pass filter (LPF) at the input, a DC output filter, and a resistive load are also commonly used in the rectifier design. Electronic circuit designs are commonly analyzed through simulation in a SPICE (Simulation Program with Integrated Circuit Emphasis) environment. Despite the remarkable potential of SPICE-based simulators for complex circuit modeling and for analysis of quasi-static electromagnetic field interaction, i.e., at low frequency, these simulators cannot properly model microwave hybrid circuits that contain both lumped and distributed elements. This work therefore proposes the electromagnetic modeling of electronic components in order to create models suitable for simulating circuits at ultra-high frequencies, with application in rectifiers coupled to antennas, as in energy harvesting systems, that is, in rectennas.
For this purpose, the numerical Finite-Difference Time-Domain (FDTD) method is applied, and SPICE computational tools are used for comparison. Initially, the Ampere-Maxwell equation is applied to the equations of current density and electric field within the FDTD method, together with its circuital relation to the voltage drop across the modeled component, using the Lumped-Element FDTD (LE-FDTD) formulations proposed in the literature for the passive components and for the diode. Next, a rectifier is built with the essential requirements for operating rectenna energy harvesting systems, and the FDTD results are compared with experimental measurements.
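The lumped-element treatment described above can be illustrated with a toy sketch. The snippet below is not the authors' model: it only shows how a lumped resistor enters a semi-implicit LE-FDTD field update via the Ampere-Maxwell equation; all numeric values (R = 50 Ω, cell dimensions, time step) are arbitrary assumptions.

```python
# Toy illustration of how a lumped resistor enters the discretized
# Ampere-Maxwell equation in LE-FDTD.  A sketch, not the authors' model.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def le_fdtd_resistor_step(e, curl_h, dt, dz, area, r):
    """One semi-implicit update of the E-field in the cell holding a
    lumped resistor R.  The resistor current I = V/R = E*dz/R is
    averaged between time levels n and n+1, which yields the
    (1 - beta)/(1 + beta) damping factor."""
    beta = dt * dz / (2.0 * r * EPS0 * area)
    return ((1.0 - beta) / (1.0 + beta)) * e \
        + (dt / (EPS0 * (1.0 + beta))) * curl_h

# With no magnetic-field excitation (curl_h = 0), the field in the
# resistor cell decays toward zero, as expected for a dissipative cell.
e = 1.0
for _ in range(50):
    e = le_fdtd_resistor_step(e, curl_h=0.0, dt=1e-12, dz=1e-3,
                              area=1e-6, r=50.0)
```

In a full LE-FDTD simulation the curl_h term couples this cell to the rest of the Yee grid, and analogous semi-implicit updates are formulated for capacitors, inductors, and the diode's exponential I-V law.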

Keywords: energy harvesting system, LE-FDTD, rectenna, rectifier, wireless power systems

Procedia PDF Downloads 103
263 Religiosity and Involvement in Purchasing Convenience Foods: Using Two-Step Cluster Analysis to Identify Heterogenous Muslim Consumers in the UK

Authors: Aisha Ijaz

Abstract:

The paper focuses on the impact of Muslim religiosity on convenience food purchases and on the involvement experienced in a non-Muslim culture. There is a scarcity of research on the purchasing patterns of Muslim diaspora communities residing in risk societies, particularly in contexts where there is an increasing inclination toward industrialized food items alongside a renewed interest in the concept of natural foods. The United Kingdom serves as an appropriate setting for this study due to its growing Muslim population, paralleled by the expanding halal food market. A multi-dimensional framework is proposed, testing for five forms of involvement: Purchase Decision Involvement, Product Involvement, Behavioural Involvement, Intrinsic Risk, and Extrinsic Risk. Quantitative cross-sectional consumer data were collected through a face-to-face survey of 141 Muslims during the summer of 2020 in Liverpool, in the Northwest of England. A proportion formula was utilised, and the population of interest was stratified by gender and age before recruitment took place through local mosques and community centres. Six input variables were used (intrinsic religiosity and the involvement dimensions), dividing the sample into four clusters using the Two-Step Cluster Analysis procedure in SPSS. Nuanced variances were observed in the type of involvement experienced by religiosity group, which influences behaviour when purchasing convenience food. Four distinct market segments were identified: highly religious ego-involving (39.7%), less religious active (26.2%), highly religious unaware (16.3%), and less religious concerned (17.7%). These segments differ significantly with respect to their involvement, behavioural variables (place of purchase and information sources used), socio-cultural characteristics (acculturation and social class), and individual characteristics.
Choosing the appropriate convenience food is centrally related to the value system of highly religious ego-involving first-generation Muslims, which explains their preference for shopping at ethnic food stores. Less religious active consumers are older and highly alert in information processing to make the optimal food choice, relying heavily on product label sources. Highly religious unaware Muslims are less dietarily acculturated to the UK diet and tend to rely on digital and expert advice sources. The less religious concerned segment, typified by younger age and third-generation status, is engaged with the purchase process out of worry about making unsuitable food choices. Research implications are outlined, and potential avenues for further exploration are identified.
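SPSS's Two-Step Cluster procedure is proprietary, but the segmentation idea can be sketched with an ordinary k-means on standardized scores. The snippet below is only a rough analogue under that assumption (Two-Step additionally handles categorical variables and can choose the number of clusters automatically), and the respondent scores are invented for illustration.

```python
def kmeans(points, k, iters=20):
    """Plain k-means with deterministic initialization; a rough
    stand-in for SPSS's Two-Step Cluster procedure."""
    centers = list(points[:k])
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared distance)
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            groups[nearest].append(p)
        # recompute each center as the mean of its group
        centers = [tuple(sum(col) / len(g) for col in zip(*g))
                   if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Hypothetical standardized (religiosity, involvement) scores for six
# respondents, invented for illustration only.
scores = [(0.1, 0.2), (0.2, 0.1), (-0.1, 0.0),
          (2.9, 3.1), (3.0, 2.9), (3.1, 3.0)]
centers, groups = kmeans(scores, 2)
```

With well-separated scores the procedure recovers the two obvious groups; in practice the cluster profiles (not the algorithm) carry the substantive interpretation reported above.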

Keywords: consumer behaviour, consumption, convenience food, religion, Muslims, UK

Procedia PDF Downloads 29
262 Effects of Radiation on Mixed Convection in Power Law Fluids along Vertical Wedge Embedded in a Saturated Porous Medium under Prescribed Surface Heat Flux Condition

Authors: Qaisar Ali, Waqar A. Khan, Shafiq R. Qureshi

Abstract:

Heat transfer in power law fluids across cylindrical surfaces has copious engineering applications. These applications comprise areas such as underwater pollution, biomedical engineering, filtration systems, chemical, petroleum, polymer and food processing, recovery of geothermal energy, crude oil extraction, pharmaceuticals, and thermal energy storage. The quantum of research work with diversified conditions to study the effects of combined heat transfer and fluid flow across porous media has increased considerably over the last few decades. The non-Newtonian fluids of most practical interest are highly viscous and are therefore often processed in the laminar flow regime. Several studies have been performed to investigate the effects of free and mixed convection in Newtonian fluids along vertical and horizontal cylinders embedded in a saturated porous medium, whereas very few analyses have been performed on power law fluids along a wedge. In this study, a boundary layer analysis of radiation-mixed convection in power law fluids along a vertical wedge in a porous medium has been carried out using an implicit finite difference method (the Keller box method). Steady, two-dimensional laminar flow has been considered under a prescribed surface heat flux condition. The Darcy, Boussinesq, and Rosseland approximations are assumed to be valid. Neglecting viscous dissipation effects and the radiative heat flux in the flow direction, the boundary layer equations governing mixed convection flow over a vertical wedge are transformed into dimensionless form. A single mathematical model represents the cases of a vertical wedge, cone, and plate by introducing a geometry parameter. Both similar and non-similar solutions have been obtained, and results for the non-similar case have been presented and plotted. The effects of the radiation parameter, the variable heat flux parameter, the wedge angle parameter 'm', and the mixed convection parameter have been studied for both Newtonian and non-Newtonian fluids.
The results are also compared with the available data for heat transfer in the prescribed range of parameters and are found to be in good agreement. Results detailing the dimensionless local Nusselt number and the temperature and velocity fields have also been presented for both Newtonian and non-Newtonian fluids. Analysis of the data revealed that as the radiation parameter or wedge angle increases, the Nusselt number decreases, whereas it increases with the value of the heat flux parameter at a given value of the mixed convection parameter. It is also observed that as viscosity increases, the skin friction coefficient increases, which tends to reduce the velocity. Moreover, pseudoplastic fluids are more heat conductive than Newtonian fluids, which in turn are more heat conductive than dilatant fluids. All fluids behave identically in the pure forced convection domain.

Keywords: porous medium, power law fluids, surface heat flux, vertical wedge

Procedia PDF Downloads 281
261 Intermodal Strategies for Redistribution of Agrifood Products in the EU: The Case of Vegetable Supply Chain from Southeast of Spain

Authors: Juan C. Pérez-Mesa, Emilio Galdeano-Gómez, Jerónimo De Burgos-Jiménez, José F. Bienvenido-Bárcena, José F. Jiménez-Guerrero

Abstract:

The environmental cost and road congestion resulting from product distribution in Europe have led to the creation of various programs and studies seeking to reduce these negative impacts. In this regard, apart from other institutions, the European Commission (EC) has in recent years designed plans promoting a more sustainable transportation model, attempting ultimately to shift traffic from road to sea by using intermodality to rebalance the model. This issue proves especially relevant in supply chains from peripheral areas of the continent, where the supply of certain agrifood products is high. In such cases, the most difficult challenge is managing perishable goods. This study focuses on new approaches that strengthen the modal shift, as well as the reduction of externalities. The problem is analyzed by attempting to promote an intermodal system (truck and short-sea shipping) for transport, taking as a point of reference highly perishable products (vegetables) exported from southeast Spain, the leading supplier to Europe. Methodologically, this paper seeks to contribute to the literature by proposing a different and complementary approach to comparing intermodal transport with the road-only alternative. For this purpose, multicriteria decision-making is incorporated into a p-median model (P-M) adapted to the transport of perishables and to a shipping-mode selection problem, which must consider several variables: transit cost (including externalities), time, and frequency (including agile response time). This scheme avoids bias in decision-making processes. The results show that the influence of externalities as drivers of the modal shift is reduced when transit time is introduced as a decision variable. These findings confirm that general strategies, such as those of the EC, based on environmental benefits lose their capacity for implementation when applied to complex circumstances.
In general, the different estimations reveal that, in the case of perishables, intermodality would be a secondary and viable option only for very specific destinations (for example, Hamburg and nearby locations, the area of influence of London, Paris, and the Netherlands). Based on this framework, the general outlook on this subject should be modified. Perhaps the government should promote specific business strategies based on new trends in the supply chain, not only on the reduction of externalities, and find new approaches that strengthen the modal shift. A possible option is to redefine ports, conceptualizing them as digitalized redistribution and coordination centers and not only as areas of cargo exchange.
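The p-median selection underlying the model can be sketched by brute force for small instances: choose p routing options that minimize the total demand-weighted generalized cost. The cost matrix, demand weights, and the choice of p below are invented placeholders, not the study's data.

```python
from itertools import combinations

def p_median(costs, demands, p):
    """Brute-force p-median: pick p candidate options minimizing total
    demand-weighted cost.  costs[i][j] = generalized cost (transit cost
    plus externalities and a time penalty) of serving demand point i
    via option j."""
    m = len(costs[0])
    best = None
    for combo in combinations(range(m), p):
        # each demand point uses its cheapest option within the combo
        total = sum(d * min(row[j] for j in combo)
                    for d, row in zip(demands, costs))
        if best is None or total < best[0]:
            best = (total, combo)
    return best

# Hypothetical instance: 3 destination groups, 4 candidate options.
costs = [[4, 9, 2, 8],
         [5, 3, 7, 2],
         [6, 4, 3, 9]]
demands = [10, 5, 8]
total, chosen = p_median(costs, demands, 2)
```

Adding a transit-time penalty into `costs` is exactly the step that, as reported above, weakens the pull of the externality term in the optimal selection.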

Keywords: environmental externalities, intermodal transport, perishable food, transit time

Procedia PDF Downloads 72
260 Lifespan Assessment of the Fish Crossing System of Itaipu Power Plant (Brazil/Paraguay) Based on the Reaching of Its Sedimentological Equilibrium Computed by 3D Modeling and Churchill Trapping Efficiency

Authors: Anderson Braga Mendes, Wallington Felipe de Almeida, Cicero Medeiros da Silva

Abstract:

This study aimed to assess the lifespan of the fish transposition system of the Itaipu Power Plant (Brazil/Paraguay) by using 3D hydrodynamic modeling and the Churchill trapping efficiency in order to identify the sedimentological equilibrium configuration of the main pond of the Piracema Channel, which is part of a 10 km hydraulic circuit that enables fish migration from downstream to upstream of the Itaipu Dam (and vice versa), overcoming a 120 m water drop. For that purpose, bottom data from 2002 (its opening year) and 2015 were collected and analyzed, along with bed material at 12 stations, in order to identify their granulometric profiles. The Shields and the Yalin and Karahan diagrams for initiation of motion of bed material were used to determine the critical bed shear stress for the sedimentological equilibrium state, based on the sort of sediment (grain size) expected at the bottom once the balance is reached. Such granulometry was inferred by analyzing the coarser material (fine and medium sands) that flows into the pond and deposits in its backwater zone, adopting a range of diameters within the upper and lower limits of that sand stratification. The software Delft3D was used to compute the bed shear stress at every station under analysis. By modifying the input bathymetry of the main pond of the Piracema Channel until the computed bed shear stress at each station fell within the intervals of acceptable critical stresses simultaneously, it was possible to foresee the bed configuration of the main pond when sedimentological equilibrium is reached. Under such conditions, 97% of the whole pond capacity will be silted, and a shallow watercourse with depths ranging from 0.2 m to 1.5 m will be formed; in 2002, depths ranged from 2 m to 10 m. Outside that water path, the new bottom will be practically flat and covered by a layer of water 0.05 m thick.
Thus, in the future the main pond of the Piracema Channel will no longer fulfill its purpose of providing a resting place for migrating fish species, and it may become an insurmountable barrier for medium and large sized specimens. All things considered, it was estimated that its lifespan, from the year of its opening to the moment of the sedimentological equilibrium configuration, will be approximately 95 years, almost half the computed lifespan of the Itaipu Power Plant itself. However, it is worth mentioning that drawbacks related to silting in the main pond will start being noticed much earlier than that, owing to the reasons previously mentioned.
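As a small worked example of the Shields criterion mentioned above, the critical bed shear stress for quartz sand grains can be computed as tau_c = theta_c * (rho_s - rho_w) * g * d50. The Shields parameter value below is a rough, assumed reading from the diagram, not the study's calibrated value.

```python
# Critical bed shear stress via the Shields criterion (illustrative).
G = 9.81        # gravity, m/s^2
RHO_W = 1000.0  # water density, kg/m^3
RHO_S = 2650.0  # quartz sediment density, kg/m^3

def critical_shear_stress(d50, theta_c):
    """tau_c = theta_c * (rho_s - rho_w) * g * d50, in Pa."""
    return theta_c * (RHO_S - RHO_W) * G * d50

# Fine sand (0.2 mm) vs medium sand (0.4 mm), assumed theta_c ~ 0.05.
tau_fine = critical_shear_stress(0.2e-3, 0.05)
tau_medium = critical_shear_stress(0.4e-3, 0.05)
```

Coarser grains require a higher shear stress to move, which is why the equilibrium bathymetry in the study is tied to the fine-to-medium sand range deposited in the backwater zone.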

Keywords: 3D hydrodynamic modeling, Churchill trapping efficiency, fish crossing system, Itaipu power plant, lifespan, sedimentological equilibrium

Procedia PDF Downloads 212
259 Teaching Academic Writing for Publication: A Liminal Threshold Experience Towards Development of Scholarly Identity

Authors: Belinda du Plooy, Ruth Albertyn, Christel Troskie-De Bruin, Ella Belcher

Abstract:

In the academy, scholarliness or intellectual craftsmanship is considered the highest level of achievement, culminating in being consistently successfully published in impactful, peer-reviewed journals and books. Scholarliness implies rigorous methods, systematic exposition, in-depth analysis and evaluation, and the highest level of critical engagement and reflexivity. However, being a scholar does not happen automatically when one becomes an academic or completes graduate studies. A graduate qualification is an indication of one’s level of research competence but does not necessarily prepare one for the type of scholarly writing for publication required after a postgraduate qualification has been conferred. Scholarly writing for publication requires a high-level skillset and a specific mindset, which must be intentionally developed. The rite of passage to become a scholar is an iterative process with liminal spaces, thresholds, transitions, and transformations. The journey from researcher to published author is often fraught with rejection, insecurity, and disappointment and requires resilience and tenacity from those who eventually triumph. It cannot be achieved without support, guidance, and mentorship. In this article, the authors use collective auto-ethnography (CAE) to describe the phases and types of liminality encountered during the liminal journey toward scholarship. The authors speak as long-time facilitators of Writing for Academic Publication (WfAP) capacity development events (training workshops and writing retreats) presented at South African universities. Their WfAP facilitation practice is structured around experiential learning principles that allow them to act as critical reading partners and reflective witnesses for the writer-participants of their WfAP events. 
They identify three essential facilitation features for the effective holding of a generative, liminal, and transformational writing space for novice academic writers in order to enable their safe passage through the various liminal spaces they encounter during their scholarly development journey. These features are that facilitators should be agents of disruption and liminality while also guiding writers through these liminal spaces; that there should be a sense of mutual trust and respect, shared responsibility and accountability in order for writers to produce publication-worthy scholarly work; and that this can only be accomplished with the continued application of high levels of sensitivity and discernment by WfAP facilitators. These are key features for successful WfAP scholarship training events, where focused, individual input triggers personal and professional transformational experiences, which in turn translate into high-quality scholarly outputs.

Keywords: academic writing, liminality, scholarship, scholarliness, threshold experience, writing for publication

Procedia PDF Downloads 21
258 The Properties of Risk-based Approaches to Asset Allocation Using Combined Metrics of Portfolio Volatility and Kurtosis: Theoretical and Empirical Analysis

Authors: Maria Debora Braga, Luigi Riso, Maria Grazia Zoia

Abstract:

Risk-based approaches to asset allocation are portfolio construction methods that do not rely on expected returns as inputs for the asset classes in the investment universe and use only risk information. They include the Minimum Variance Strategy (MV strategy), the traditional (volatility-based) Risk Parity Strategy (SRP strategy), the Most Diversified Portfolio Strategy (MDP strategy) and, for many, the Equally Weighted Strategy (EW strategy). All of these approaches are based on portfolio volatility as the reference risk measure, but in 2023 the Kurtosis-based Risk Parity strategy (KRP strategy) and the Minimum Kurtosis strategy (MK strategy) were introduced. Understandably, they used the fourth root of the portfolio fourth moment as a proxy for portfolio kurtosis in order to work with a homogeneous function of degree one. This paper contributes theoretically and methodologically to the framework of risk-based asset allocation approaches with two steps forward. First, a new and more flexible objective function considering a linear combination (with positive coefficients that sum to one) of portfolio volatility and portfolio kurtosis is used, alternatively serving a risk minimization goal or a homogeneous risk distribution goal. Hence, the new basic idea consists of extending the achievement of typical risk-based approaches' goals to a combined risk measure. To explain the rationale behind such a risk measure, it is worth remembering that volatility and kurtosis are both expressions of uncertainty, read as dispersion of returns around the mean; both preserve adherence to a symmetric framework and consider the entire return distribution, but they differ in that the former captures the "normal" or "ordinary" dispersion of returns, while the latter captures extreme dispersion.
Therefore, a combined risk metric built from two individual metrics that focus on the same phenomenon but are differently sensitive to its intensity allows the asset manager to express, by varying the "relevance coefficient" associated with the individual metrics in the objective function, a wide set of plausible investment goals for the portfolio construction process, serving investors differently concerned with tail risk and traditional risk. Since this is the first study to implement risk-based approaches using a combined risk measure, it becomes fundamentally important to investigate the portfolio effects triggered by this innovation. The paper also offers a second contribution. Until the recent advent of the MK and KRP strategies, efforts to highlight interesting properties of risk-based approaches were inevitably directed towards the traditional MV and SRP strategies. Previous literature established an increasing order in terms of portfolio volatility, starting from the MV strategy, through the SRP strategy, and arriving at the EW strategy, and provided the mathematical proof of the "equalization effect" concerning marginal risks for the MV strategy and risk contributions for the SRP strategy. A theoretical demonstration of whether similar conclusions hold for the MK and KRP strategies is still pending. This paper fills this gap.
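The combined objective can be written as R(w) = a * sigma_p(w) + (1 - a) * kappa_p(w), where sigma_p is portfolio volatility and kappa_p is the fourth root of the portfolio fourth central moment. A minimal empirical sketch follows, computing sample moments from a return history; the function and inputs are illustrative, not the paper's implementation.

```python
def combined_risk(weights, returns, a):
    """a * sigma_p + (1 - a) * kappa_p, with kappa_p the fourth root of
    the portfolio fourth central moment (the kurtosis proxy described
    above), estimated from a history of per-period asset returns."""
    n = len(returns)
    # portfolio return per period
    port = [sum(w * r for w, r in zip(weights, row)) for row in returns]
    mu = sum(port) / n
    m2 = sum((x - mu) ** 2 for x in port) / n   # second central moment
    m4 = sum((x - mu) ** 4 for x in port) / n   # fourth central moment
    return a * m2 ** 0.5 + (1 - a) * m4 ** 0.25
```

Setting a = 1 recovers the volatility-based objective of the MV/SRP family, and a = 0 the kurtosis-based objective of the MK/KRP strategies; since both terms are homogeneous of degree one in the weights, so is the combination, as required.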

Keywords: risk parity, portfolio kurtosis, risk diversification, asset allocation

Procedia PDF Downloads 41
257 Association between Obstetric Factors with Affected Areas of Health-Related Quality of Life of Pregnant Women

Authors: Cinthia G. P. Calou, Franz J. Antezana, Ana I. O. Nicolau, Eveliny S. Martins, Paula R. A. L. Soares, Glauberto S. Quirino, Dayanne R. Oliveira, Priscila S. Aquino, Régia C. M. B. Castro, Ana K. B. Pinheiro

Abstract:

Introduction: As an integral part of the health-disease process, gestation is a period in which the social context of women can influence, positively or negatively, the course of the pregnancy-puerperal cycle. Thus, evaluating the quality of life of this population can redirect the implementation of innovative practices in the quest to make them more effective and real for the promotion of more humanized care. This study explores the associations between obstetric factors and affected areas of health-related quality of life of pregnant women at habitual risk. Methods: This is a cross-sectional, quantitative study conducted in three public facilities and one private service providing prenatal care in the city of Fortaleza, Ceara, Brazil. The sample consisted of 261 pregnant women who underwent low-risk prenatal care and were interviewed from September to November 2014. The collection instruments were a questionnaire containing socio-demographic and obstetric variables and the Brazilian version of the Mother Generated Index (MGI) scale, a specific and objective instrument consisting of a single sheet and subdivided into three stages. It allows identifying the areas of a pregnant woman's life that are most affected, which could go unnoticed by pre-formulated measurement instruments. The obstetric data, as well as the data from the application of the MGI scale, were compiled and analyzed with the Statistical Package for the Social Sciences (SPSS), version 20.0. After compilation, a descriptive analysis was carried out; then, associations between variables were tested. The tests applied were the Pearson chi-square and Fisher's exact test, and the odds ratio was also calculated. Associations were considered statistically significant when the p (probability) value was less than or equal to a level of 5% (α = 0.05).
Results: The variables that negatively affected the quality of life of the pregnant women and presented a significant association with pollakiuria were gestational age (p = 0.022) and parity (p = 0.048). Episodes of nausea and vomiting also showed a significant correlation with gestational age (p = 0.0001). For stress, we observed a significant association with parity (p = 0.0001). In turn, emotional lability showed dependence on the type of delivery (p = 0.009). Conclusion: Health professionals involved in care for pregnant women can thus understand how the process of gestation is experienced, considering all its peculiar transformations, and meet women's individual needs, stimulating their autonomy and power of choice, with a view to achieving a better health-related quality of life from the perspective of health promotion.
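The Pearson chi-square statistic and odds ratio for a 2x2 cross-tabulation, of the kind used in the analysis above, can be computed directly. The counts below are hypothetical, invented for illustration, not the study's data.

```python
def pearson_chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 contingency table
    [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def odds_ratio(a, b, c, d):
    """Cross-product ratio of the same 2x2 table."""
    return (a * d) / (b * c)

# Hypothetical counts: symptom present/absent by parity group.
chi2 = pearson_chi2_2x2(40, 20, 25, 45)
significant = chi2 > 3.841  # 5% critical value, 1 degree of freedom
```

Comparing the statistic with the 3.841 critical value reproduces the α = 0.05 decision rule stated in the abstract; Fisher's exact test would replace the chi-square when expected cell counts are small.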

Keywords: health-related quality of life, obstetric nursing, pregnant women, prenatal care

Procedia PDF Downloads 263
256 Head and Neck Extranodal Rosai-Dorfman Disease: Utility of Immunohistochemistry

Authors: Beverly Wang

Abstract:

Background: Rosai-Dorfman disease (RDD), also known as sinus histiocytosis with massive lymphadenopathy, is a rare, idiopathic histiocytic proliferative disorder. Although RDD can involve the head and neck lymph nodes, it rarely affects other extranodal sites. We present three unique cases of RDD affecting the nasal cavity, paranasal sinuses, and ear canal. The initial clinical presentation in two cases mimicked a malignant neoplasm; the third case of RDD co-existed with a cholesteatoma of the ear canal. The clinical presentation, histology, immunohistochemical stains, and radiographic findings are discussed. Design: An overview of three cases of RDD affecting the sinonasal cavity and ear canal at UCI Medical Center was conducted. Case 1: A 61-year-old male complaining of breathing difficulty presented with bilateral polypoid sinonasal masses and severe nasal obstruction. The masses elevated the nasal floor and involved the anterior nasal septum to the lateral wall. They were endoscopically excised. At intraoperative consultation, frozen section reported a pleomorphic spindle cell neoplasm with scattered large atypical spindle cells, resembling a high grade sarcoma. Case 2: A 46-year-old male presented with recurrent bilateral chronic maxillary sinusitis with mass formation, clinically suspicious for malignant lymphoma. The excisional tissue sample showed large irregular spindled histiocytes with abundant granular and vacuolated cytoplasm. Case 3: A 36-year-old female with a history of asthma initially presented with left-sided chronic otalgia, occasional nausea, vertigo, and fluctuating pain exacerbated by head movement and temperature changes. A CT scan revealed an external auditory canal mass extending to the middle ear, coexisting with a small cholesteatoma. Results: The morphology of all cases revealed large atypical spindled histiocytes resembling fibrohistiocytic or myofibroblastic proliferative neoplasms. Scattered emperipolesis was seen.
All three cases were confirmed as extranodal RDD by immunohistochemistry: the large atypical cells were positive for S100, CD68, and CD163, and no evidence of malignancy was identified. Case 3 showed RDD co-existing with a cholesteatoma. Conclusion: Due to its rarity and variable clinical presentations, the diagnosis of RDD is seldom considered clinically. Extranodal RDD can morphologically be a pitfall, mimicking a spindle cell neoplasm, especially at intraoperative consultation, and can create diagnostic and therapeutic challenges. Correlation of radiological findings with histologic features helps to reach the diagnosis.

Keywords: head and neck, extranodal, Rosai-Dorfman disease, mimicker, immunohistochemistry

Procedia PDF Downloads 37
255 Stochastic Approach for Technical-Economic Viability Analysis of Electricity Generation Projects with Natural Gas Pressure Reduction Turbines

Authors: Roberto M. G. Velásquez, Jonas R. Gazoli, Nelson Ponce Jr, Valério L. Borges, Alessandro Sete, Fernanda M. C. Tomé, Julian D. Hunt, Heitor C. Lira, Cristiano L. de Souza, Fabio T. Bindemann, Wilmar Wounnsoscky

Abstract:

Nowadays, society is working to reduce energy losses and greenhouse gas emissions and is seeking clean energy sources, as a result of the constant increase in energy demand and emissions. Energy is lost in the gas pressure reduction stations at the delivery points of natural gas distribution systems (city gates). Installing pressure reduction turbines (PRT) in parallel with the static reduction valves at the city gates enhances the energy efficiency of the system by recovering the enthalpy of the pressurized natural gas, obtaining shaft work from the pressure-lowering process and generating electrical power. Currently, the Brazilian natural gas transportation network is 9,409 km in extent, and the system has 16 national and 3 international natural gas processing plants, with more than 143 delivery points to final consumers. Thus, the potential for installing PRTs in Brazil is 66 MW of power, which could avoid the emission of 235,800 tons of CO2 per year and generate 333 GWh/year of electricity. On the other hand, the economic viability analysis of these energy efficiency projects is commonly carried out based on estimates of the project's cash flow obtained from forecasts of several variables. Usually, the cash flow analysis is performed using representative values of these variables, yielding a deterministic set of financial indicators for the project. However, in most cases these variables cannot be predicted with sufficient accuracy, making it necessary to consider, to a greater or lesser degree, the risk associated with the calculated financial return.
This paper presents an approach to the technical-economic viability analysis of PRT projects that explicitly considers the uncertainties associated with the input parameters of the financial model, such as the gas pressure at the delivery point, the amount of energy generated by the PRT, and the future price of energy, among others, using sensitivity analysis techniques, scenario analysis, and Monte Carlo methods. In the latter case, estimates of several financial risk indicators, as well as their empirical probability distributions, can be obtained. The result is a methodology for the financial risk analysis of PRT projects, allowing a more accurate assessment of the financial feasibility of potential PRT projects in Brazil. The methodology will be tested at the Cuiabá thermoelectric plant, located in the state of Mato Grosso, Brazil, and can be applied to study the potential in other countries.
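The Monte Carlo step can be sketched as repeated sampling of the uncertain inputs followed by an NPV computation per draw. Every distribution and figure below is an invented placeholder (a roughly 1 MW-scale PRT over a 10-year horizon), not data from the paper.

```python
import random

def npv(rate, cash_flows):
    """Net present value of yearly cash flows; cash_flows[0] is at t=0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def simulate_prt_npv(n_runs, seed=42):
    """Monte Carlo over uncertain PRT-project inputs.  All figures and
    distributions are assumed placeholders for illustration."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_runs):
        capex = -2.0e6                        # fixed investment, $
        energy = rng.gauss(7.0e6, 1.0e6)      # generated energy, kWh/year
        price = rng.gauss(0.08, 0.015)        # energy price, $/kWh
        opex = rng.uniform(5.0e4, 1.0e5)      # operating cost, $/year
        yearly = energy * price - opex
        results.append(npv(0.10, [capex] + [yearly] * 10))
    return results

runs = simulate_prt_npv(5000)
p_loss = sum(r < 0 for r in runs) / len(runs)  # a simple downside-risk indicator
```

The empirical distribution of `runs` is exactly the kind of output from which risk indicators such as the probability of a negative NPV, or value-at-risk percentiles, are read off.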

Keywords: pressure reduction turbine, natural gas pressure drop station, energy efficiency, electricity generation, monte carlo methods

Procedia PDF Downloads 82
254 Fuzzy Availability Analysis of a Battery Production System

Authors: Merve Uzuner Sahin, Kumru D. Atalay, Berna Dengiz

Abstract:

In today’s competitive market, there are many alternative products that can be used in a similar manner and for similar purposes. Therefore, the utility of a product is an important issue for the preferability of a brand. This utility can be measured in terms of functionality, durability, and reliability, all of which are affected by the system capabilities. Reliability is an important system design criterion, as manufacturers aim for high availability. Availability is the probability that a system (or a component) is operating properly at a specific point in time or over a specific period of time. System availability provides valuable input for estimating the production rate with which the company can realize its production plan. When considering only the corrective maintenance downtime of the system, the mean time between failures (MTBF) and the mean time to repair (MTTR) are used to obtain system availability. The MTBF and MTTR values are also important measures for reliability engineers and practitioners seeking to improve system performance by adopting suitable maintenance strategies. Conventional availability analysis requires the failure and repair time probability distributions of each component in the system to be known. Generally, however, companies do not have statistics or quality control departments that store such a large amount of data, and real events or situations are defined deterministically instead of using stochastic data for a complete description of real systems. Fuzzy set theory is an alternative theory used to analyze uncertainty and vagueness in real systems. The aim of this study is to present a novel approach to computing system availability by representing MTBF and MTTR as fuzzy numbers. Based on experience with the system, three different spreads of MTBF and MTTR (15%, 20%, and 25%) were chosen to obtain the lower and upper limits of the fuzzy numbers.
To the best of our knowledge, the proposed method is the first application that uses fuzzy MTBF and fuzzy MTTR for fuzzy system availability estimation. The method is easy for practitioners working in industry to apply to any repairable production system, and it enables reliability engineers, managers, and practitioners to analyze system performance in a more consistent and logical manner based on fuzzy availability. This paper presents a real case study of a repairable multi-stage production line in a lead-acid battery production factory in Turkey. The study focuses on the wet-charging battery process, which has a higher production level than the other battery types. In this system, components can exist in only two states, working or failed, and it is assumed that when a component fails, it becomes as good as new after repair. Instead of classical methods, using fuzzy set theory to obtain intervals for these measures is very useful for system managers and practitioners who want to analyze system qualifications and find better results for their working conditions. Thus, much more detailed information about system characteristics is obtained.
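
As a hedged sketch of the core idea (not the authors' implementation), the steady-state availability A = MTBF / (MTBF + MTTR) can be bounded by propagating triangular fuzzy numbers through the formula; the MTBF of 200 h, MTTR of 8 h, and 15% spread below are hypothetical figures for illustration:

```python
def tfn(center, spread):
    """Triangular fuzzy number (lower, mode, upper) from a fractional spread."""
    return (center * (1 - spread), center, center * (1 + spread))

def fuzzy_availability(mtbf, mttr):
    """Fuzzy availability A = MTBF / (MTBF + MTTR) on TFN bounds.

    A increases in MTBF and decreases in MTTR, so the lower bound pairs
    the smallest MTBF with the largest MTTR, and the upper bound the reverse.
    """
    lo = mtbf[0] / (mtbf[0] + mttr[2])
    mode = mtbf[1] / (mtbf[1] + mttr[1])
    hi = mtbf[2] / (mtbf[2] + mttr[0])
    return (lo, mode, hi)

# Hypothetical figures: MTBF = 200 h, MTTR = 8 h, 15% spread
mtbf = tfn(200.0, 0.15)
mttr = tfn(8.0, 0.15)
print(fuzzy_availability(mtbf, mttr))   # (lower, crisp, upper) availability
```

The interval (lower, upper) is what gives managers a range of plausible availabilities instead of a single crisp value.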

Keywords: availability analysis, battery production system, fuzzy sets, triangular fuzzy numbers (TFNs)

Procedia PDF Downloads 195
253 Exploring Disengaging and Engaging Behavior of Doctoral Students

Authors: Salome Schulze

Abstract:

The delay of students in completing their dissertations is a worldwide problem. At the University of South Africa, where this research was done, only about a third of the students complete their studies within the required period of time. This study explored the reasons why students interrupted their studies and why they resumed their research at a later stage. If this knowledge could be utilised to improve the throughput of doctoral students, it could have significant economic benefits for institutions of higher education while at the same time enhancing their academic prestige. To inform the investigation, attention was given to key theories concerning the learning of doctoral students, namely situated learning theory, social capital theory, and self-regulated learning theory, the latter based on the social cognitive theory of learning. Ten students in the Faculty of Education were purposefully selected on the grounds of poor progress or of having been in the system too long. The data were collected in accordance with a Finnish study, since the two studies had the same aims, namely to investigate student engagement and disengagement. Graphic elicitation interviews, based on visualisations, were considered appropriate for collecting the data, since this method can stimulate the reflection and recall of the participants’ ‘stories’ with very little input from the interviewer. The interviewees were requested to visualise, on paper, their journeys as doctoral students from the time they first registered, indicating the significant events that facilitated their engagement or disengagement. In the interviews that followed, they were asked to elaborate on these motivating or challenging events by explaining when and why they occurred and what prompted them to resume their studies. The interviews were tape-recorded and transcribed verbatim, yielding information-rich data containing visual metaphors.
The data indicated that when the students suffered a period of disengagement, it was sometimes related to a lack of self-regulated learning, in particular a lack of autonomy and an inability to manage their time effectively. Disengagement also occurred when the students felt isolated from the academic community of practice; this included poor guidance by their supervisors, which deprived them of significant social capital. The study also revealed that situational factors at home or at work were often the main reasons for the students’ procrastinating behaviour. The students, however, remained in the system. They were motivated towards a renewed engagement with their studies if they were self-regulated learners and if they felt a connectedness with the academic community of practice because of positive relationships with their supervisors and participation in the activities of the community (e.g., workshops or conferences). In support of their learning, networking with significant others who were sources of information provided the students with the necessary social capital. Generally, institutions of higher education cannot address the students’ personal issues directly, but they can deal with key institutional factors in order to improve the throughput of doctoral students. It is also suggested that graphic elicitation interviews be used more often in social research that investigates the learning and development of students.

Keywords: doctoral students, engaging and disengaging experiences, graphic elicitation interviews, student procrastination

Procedia PDF Downloads 172
252 Geomorphology and Flood Analysis Using Light Detection and Ranging

Authors: George R. Puno, Eric N. Bruno

Abstract:

The natural landscape of the Philippine archipelago, combined with the current realities of climate change, makes the country vulnerable to flood hazards. Flooding has become a recurring natural disaster in the country, resulting in loss of lives and property. The Musimusi is among the rivers that have exhibited inundation, particularly at the inhabited floodplain portion of its watershed. During such events, rescue operations and the distribution of relief goods become a problem due to the lack of high-resolution flood maps to help the local government unit identify the most affected areas. In the attempt to minimize the impact of flooding, hydrologic modelling with high-resolution mapping is becoming more challenging and important. This study focused on the analysis of flood extent as a function of different geomorphologic characteristics of the Musimusi watershed. The methods include the delineation of morphometric parameters in the Musimusi watershed using Geographic Information System (GIS) and geometric calculation tools. A Digital Terrain Model (DTM), one of the derivatives of Light Detection and Ranging (LiDAR) technology, was used to determine the extent of river inundation through the application of the Hydrologic Engineering Center's River Analysis System (HEC-RAS) and Hydrologic Modeling System (HEC-HMS) models. A digital elevation model (DEM) from Synthetic Aperture Radar (SAR) was used to delineate the watershed boundary and river network. Datasets such as mean sea level, river cross section, river stage, discharge, and rainfall were also used as input parameters. Curve number (CN), vegetation, and soil properties were calibrated based on the existing condition of the site. Results showed that the drainage density value of the watershed is low, which indicates that the basin has highly permeable subsoil and thick vegetative cover. The watershed’s elongation ratio value of 0.9 implies that the floodplain portion of the watershed is susceptible to flooding.
The bifurcation ratio value of 2.1 indicates a higher risk of flooding in localized areas of the watershed. The circularity ratio value (1.20) indicates that the basin is circular in shape, with high runoff discharge and low permeability of the subsoil. The heavy rainfall of 167 mm brought by Typhoon Seniang on December 29, 2014 was characterized by high intensity and long duration; with a return period of 100 years, it produced an outflow of 316 m³s⁻¹. A portion of the floodplain zone (1.52%) suffered inundation, with a maximum depth of 2.76 m. The information generated in this study is helpful to the local disaster risk reduction management council in monitoring the affected sites, so that more appropriate decisions can be made and the costs of rescue operations and relief goods distribution are minimized.
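
For illustration only, the morphometric parameters cited above can be computed from basin geometry using their classical definitions (Horton's drainage density and bifurcation ratio, Schumm's elongation ratio, Miller's circularity ratio); this is a sketch of the standard formulas, not code from the study, and the basin figures in the usage line are hypothetical:

```python
import math

def drainage_density(total_stream_length_km, area_km2):
    """Total stream length per unit basin area (km per km^2)."""
    return total_stream_length_km / area_km2

def elongation_ratio(area_km2, basin_length_km):
    """Schumm: diameter of the circle of equal area over basin length."""
    return (2.0 / basin_length_km) * math.sqrt(area_km2 / math.pi)

def bifurcation_ratio(stream_counts):
    """Mean ratio of stream numbers of successive Strahler orders."""
    ratios = [n1 / n2 for n1, n2 in zip(stream_counts, stream_counts[1:])]
    return sum(ratios) / len(ratios)

def circularity_ratio(area_km2, perimeter_km):
    """Miller: basin area over the area of a circle with the same perimeter."""
    return 4.0 * math.pi * area_km2 / perimeter_km ** 2

# Hypothetical basin: 100 km² area, 18 km length, 50 km of streams, 45 km perimeter
print(round(drainage_density(50.0, 100.0), 2),
      round(elongation_ratio(100.0, 18.0), 2),
      round(circularity_ratio(100.0, 45.0), 2))
```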

Keywords: flooding, geomorphology, mapping, watershed

Procedia PDF Downloads 205
251 The Relevance of Personality Traits and Networking in New Ventures’ Success

Authors: Caterina Muzzi, Sergio Albertini, Davide Giacomini

Abstract:

The research aims to investigate the role of young entrepreneurs’ personality traits and contextual background in the success of entrepreneurial initiatives. In the literature, the debate about the main drivers of entrepreneurial success is still open. Classical theories focus on specific personality traits that could lead to successful start-up initiatives, while emerging approaches are more interested in young entrepreneurs’ contextual background (such as the family of origin, previous experience, and their professional network). An online survey was submitted to the participants of an entrepreneurial training initiative organised by the Brescia headquarters (AIB) of the Italian Young Entrepreneurs Association (Confindustria). At the time the authors started data collection, the third edition of the initiative had just concluded and had involved a total of 37 young future entrepreneurs. In the literature, general self-efficacy (GSE) and, more specifically, entrepreneurial self-efficacy (ESE) have often been associated with positive performance, as they allow future entrepreneurs to cope effectively with entrepreneurial activities, both at an early stage and in new venture management. Counter-intuitively, optimism is not always associated with positive entrepreneurial results. Overly optimistic people tend to take hazardous risks, and some authors suggest that moderately optimistic entrepreneurs achieve better results than over-optimistic ones. Indeed, highly optimistic individuals often hold unrealistic expectations, discount negative information, and mentally reconstruct experiences so as to avoid contradictions. The importance of context has been increasingly considered in the entrepreneurship literature; its role emerges strongly from the earliest entrepreneurial stage and is crucial in transforming the “intention of entrepreneurship” into an actual start-up.
Furthermore, coherently with the “network approach to entrepreneurship”, context embeddedness allows future entrepreneurs to leverage relationships built through previous experiences and/or through belonging to families of entrepreneurs. For the purpose of this research, entrepreneurial success was measured by whether or not a new venture was founded after the training initiative. The authors measured GSE, ESE, and optimism using previously tested items that also proved reliable in this case. They collected 36 completed questionnaires. A t-test for independent samples was run to measure significant differences in means between those who had already founded a new venture and those who had not. No significant differences emerged with respect to any of the tested personality traits, but a logistic regression analysis, run with the contextual variables as independent variables, showed that personal and professional networking, built both before and during the master, is the most relevant variable in determining new venture success. These findings shed more light on the process of new venture foundation and could encourage national and local policy makers to invest in networking as one of the main drivers that could support the creation of new ventures.
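
A minimal sketch of this kind of analysis (a logistic regression of venture founding on contextual variables) is shown below; the data are fabricated stand-ins and the variable names are hypothetical, not the study's actual items:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 36                                               # sample size as reported
networking = rng.integers(0, 10, n).astype(float)    # hypothetical 0-9 score
family = rng.integers(0, 2, n).astype(float)         # 1 = entrepreneurial family
# Toy outcome: founding depends mainly on networking in this fabricated data
founded = (networking + rng.normal(0, 2, n) > 5).astype(int)

X = np.column_stack([networking, family])
clf = LogisticRegression().fit(X, founded)
print(clf.coef_)   # the networking coefficient should come out positive here
```

In the real study the dependent variable was whether the participant had founded a venture after the training, and the dominant predictor was networking built before and during the master.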

Keywords: entrepreneurship, networking, new ventures, personality traits

Procedia PDF Downloads 117
250 Forecasting Thermal Energy Demand in District Heating and Cooling Systems Using Long Short-Term Memory Neural Networks

Authors: Kostas Kouvaris, Anastasia Eleftheriou, Georgios A. Sarantitis, Apostolos Chondronasios

Abstract:

To achieve the objective of almost zero-carbon energy solutions by 2050, the EU needs to accelerate the development of integrated, highly efficient and environmentally friendly solutions. In this direction, district heating and cooling (DHC) emerges as a viable and more efficient alternative to conventional, decentralized heating and cooling systems, enabling a combination of more efficient renewable and competitive energy supplies. In this paper, we develop a forecasting tool for near real-time local weather and thermal energy demand predictions for an entire DHC network. In this fashion, we are able to extend the functionality and improve the energy efficiency of the DHC network by predicting and adjusting the heat load that is distributed from the heat generation plant to the connected buildings through the heat pipe network. Two case studies are considered: one for Vransko, Slovenia and one for Montpellier, France. The data consist of i) local weather data, such as humidity, temperature, and precipitation, ii) weather forecast data, such as the outdoor temperature, and iii) DHC operational parameters, such as the mass flow rate and the supply and return temperatures. The external temperature is found to be the most important energy-related variable for space conditioning, and thus it is used as an external parameter for the energy demand models. For the development of the forecasting tool, we use state-of-the-art deep neural networks, more specifically, recurrent networks with long short-term memory cells, which are able to capture complex non-linear relations among temporal variables. First, we develop models to forecast outdoor temperatures for the next 24 hours using local weather data for each case study. Subsequently, we develop models to forecast thermal demand for the same period, taking into consideration past energy demand values as well as the predicted temperature values from the weather forecasting models.
The contributions to the scientific and industrial community are three-fold, and the empirical results are highly encouraging. First, we are able to predict future thermal demand levels for the two locations under consideration with minimal errors. Second, we examine the impact of the outdoor temperature on the predictive ability of the models and how the accuracy of the energy demand forecasts decreases with the forecast horizon. Third, we extend the relevant literature with a new dataset of thermal demand and examine the performance and applicability of machine learning techniques to solve real-world problems. Overall, the solution proposed in this paper is in accordance with EU targets, providing an automated smart energy management system, decreasing human errors and reducing excessive energy production.
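
The two-stage setup described above can be sketched as follows in PyTorch; the layer sizes, window length, and feature assignments are assumptions for illustration, not the authors' architecture:

```python
import torch
import torch.nn as nn

class DemandLSTM(nn.Module):
    """LSTM that maps a window of past observations to a 24-hour forecast."""
    def __init__(self, n_features, hidden=64, horizon=24):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x):             # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # forecast the next `horizon` hours

# Stage 1: temperature model on weather features (humidity, temp, precipitation)
temp_model = DemandLSTM(n_features=3)
# Stage 2: demand model takes past demand plus the predicted temperature
demand_model = DemandLSTM(n_features=2)

x = torch.randn(8, 48, 3)             # 8 windows of 48 hourly observations
print(temp_model(x).shape)            # torch.Size([8, 24])
```

Chaining the two models mirrors the paper's pipeline: the temperature forecast feeds the demand model as an exogenous input.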

Keywords: machine learning, LSTMs, district heating and cooling system, thermal demand

Procedia PDF Downloads 114
249 A Variational Reformulation for the Thermomechanically Coupled Behavior of Shape Memory Alloys

Authors: Elisa Boatti, Ulisse Stefanelli, Alessandro Reali, Ferdinando Auricchio

Abstract:

Thanks to their unusual properties, shape memory alloys (SMAs) are good candidates for advanced applications in a wide range of engineering fields, such as automotive, robotics, civil, biomedical, and aerospace engineering. In recent decades, the ever-growing interest in such materials has prompted several research studies aimed at modeling their complex nonlinear behavior in an effective and robust way. Since the constitutive response of SMAs is strongly thermomechanically coupled, the non-isothermal evolution of the material must be taken into consideration. The present study considers an existing three-dimensional phenomenological model for SMAs, able to reproduce the main SMA properties while maintaining a simple, user-friendly structure, and proposes a variational reformulation of the full non-isothermal version of the model. While the considered model has been thoroughly assessed in an isothermal setting, the proposed formulation makes it possible to address the full non-isothermal problem. In particular, the reformulation is inspired by the GENERIC (General Equations for Non-Equilibrium Reversible-Irreversible Coupling) formalism and is based on a generalized gradient flow of the total entropy, related to thermal and mechanical variables. This phrasing of the model is new and allows the model to be discussed from both a theoretical and a numerical point of view. Moreover, it directly implies the dissipativity of the flow. A semi-implicit time-discrete scheme is also presented for the fully coupled thermomechanical system and is proven unconditionally stable and convergent. The corresponding algorithm is then implemented, under a space-homogeneous temperature field assumption, and tested under different conditions. The core of the algorithm is composed of a mechanical subproblem and a thermal subproblem, and the iterative scheme is solved by a generalized Newton method.
Numerous uniaxial and biaxial tests are reported to assess the performance of the model and algorithm, including variable imposed strain, strain rate, heat exchange properties, and external temperature. In particular, heat exchange with the environment is the only source of rate-dependency in the model. The reported curves clearly display the interdependence between phase transformation strain and material temperature. The full thermomechanical coupling makes it possible to reproduce the exothermic and endothermic effects during forward and backward phase transformation, respectively. The numerical tests have thus demonstrated that the model can appropriately reproduce the coupled SMA behavior under different loading conditions and rates. Moreover, the algorithm has proved effective and robust. Further developments are being considered, such as the extension of the formulation to the finite-strain setting and the study of the boundary value problem.
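
Schematically, the GENERIC structure that the reformulation draws on can be written as follows; the notation is generic (z for the state variables, E for the total energy, S for the total entropy) and is not taken from the paper:

```latex
% Schematic GENERIC evolution equation (generic notation, not the paper's):
\[
  \dot z = L(z)\,\mathrm{D}E(z) + M(z)\,\mathrm{D}S(z),
  \qquad L = -L^{\top}, \quad M = M^{\top} \ge 0,
\]
% with the degeneracy conditions that yield energy conservation and
% entropy production, i.e., the dissipativity of the flow:
\[
  L(z)\,\mathrm{D}S(z) = 0, \qquad M(z)\,\mathrm{D}E(z) = 0
  \;\Longrightarrow\; \dot E = 0, \quad \dot S \ge 0.
\]
```

In the paper's setting, the evolution is recast as a generalized gradient flow of the total entropy, which is what makes dissipativity an immediate structural consequence rather than a property to be verified a posteriori.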

Keywords: generalized gradient flow, GENERIC formalism, shape memory alloys, thermomechanical coupling

Procedia PDF Downloads 198
248 From the Classroom to Digital Learning Environments: An Action Research on Pedagogical Practices in Higher Education

Authors: Marie Alexandre, Jean Bernatchez

Abstract:

This paper focuses on the complexity of the transition from face-to-face to distance learning. Our action research aims to support this transition for teachers in higher education with regard to pedagogical practices that can meet the various needs of students using digital learning environments. In Quebec and elsewhere in the world, the advent of digital education is helping to transform teaching, which is significantly changing the role of teachers. While distance education (DE) implies a dissociation of teaching and learning, to a variable degree, in space and time, it is increasingly becoming a preferred option for maintaining the delivery of certain programs and for providing access to quality activities throughout Quebec. Given the impact of teaching practices on educational success, this paper reports on the results of three research objectives: 1) to document teachers' knowledge of teaching in distance education through the design, experimentation, and production of a repertoire of the determinants of pedagogical practices in response to students' needs; 2) to explain, according to a gendered logic, the adequacy between the pedagogical practices implemented in distance learning and the response to the profiles and needs expressed by students using digital learning environments; and 3) to produce a model of a support approach for the transition from classroom to distance learning at the college level. A mixed methodology, i.e., a quantitative component (questionnaire survey) and a qualitative component (explanatory interviews and a living lab), was used in cycles that were part of an ongoing validation process.
The intervention includes the establishment of a professional collaboration group and training webinars for the participating teachers on the didactics of knowledge teaching in distance education (FAD), the didactic use of technologies, and the differentiated socialization models of educational success in college education. All of the tools developed will be used by partners in the target environment as well as by teacher educators, students in initial teacher training, practicing college teachers, university professors, and the general public. The results show that access to training leading to qualifications and commitment to educational success reflects the existing links between the people in the educational community. The relational stakes of being present in distance education take on multiple configurations, and different dimensions of learning testify to needs and realities that are sometimes distinct depending on the life cycle. The entire educational community will benefit from digital resources in education, and the scientific knowledge resulting from this action research will benefit researchers in the fields of pedagogy, didactics, teacher training, and pedagogy in higher education in a digital context.

Keywords: action research, didactics, digital learning environment, distance learning, higher education, pedagogy technological, pedagogical content knowledge

Procedia PDF Downloads 51
247 Microgrid Design Under Optimal Control With Batch Reinforcement Learning

Authors: Valentin Père, Mathieu Milhé, Fabien Baillon, Jean-Louis Dirion

Abstract:

Microgrids offer potential solutions to meet the need for local grid stability and to increase the autonomy of isolated networks through the integration of intermittent renewable energy production and storage facilities. In such a context, sizing production and storage for a given network is a complex task that depends strongly on input data such as the power load profile and renewable resource availability. This work aims at developing an operating cost computation methodology for different microgrid designs, based on deep reinforcement learning (RL) algorithms, to tackle the optimal operation problem in stochastic environments. RL is a data-based sequential decision control method, based on Markov decision processes, that enables the consideration of random variables for control at a chosen time scale. Agents trained via RL constitute a promising class of Energy Management Systems (EMS) for the operation of microgrids with energy storage. Microgrid sizing (or design) is generally performed by minimizing investment costs and the operational costs arising from the EMS behavior. The latter might include economic aspects (power purchase, facility aging), social aspects (load curtailment), and ecological aspects (carbon emissions). The sizing variables impose major constraints on the optimal operation of the network by the EMS. In this work, an islanded-mode microgrid is considered: renewable generation is provided by photovoltaic panels, an electrochemical battery ensures short-term electricity storage, and the controllable unit is a hydrogen tank used for long-term storage. The proposed approach focuses on transferring agent learning for the near-optimal operating cost approximation with deep RL for each microgrid size. Like most data-based algorithms, the training step in RL requires substantial computation time.
The objective of this work is thus to study the potential of Batch-Constrained Q-learning (BCQ) for the optimal sizing of microgrids and, especially, to reduce the computation time of operating cost estimation over several microgrid configurations. BCQ is an offline RL algorithm that is known to be data-efficient and can learn better policies from a given buffer than online RL algorithms. The general idea is to use the learned policies of agents trained in similar environments to build a buffer; the buffer is then used to train BCQ, so that learning can be performed without updates during interaction sampling. A comparison between online RL and the presented method is performed based on the score per environment and on computation time.
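
The mechanism that makes BCQ "batch-constrained" can be sketched as follows for the discrete-action case: actions whose estimated behavior-policy probability falls far below the most likely action are masked out before the greedy argmax, keeping the agent close to the data in the buffer. The network shapes and the threshold tau below are toy assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn

n_actions, state_dim, tau = 4, 6, 0.3
q_net = nn.Linear(state_dim, n_actions)        # toy Q-value head
bc_net = nn.Sequential(nn.Linear(state_dim, n_actions), nn.Softmax(dim=-1))

def bcq_action(state):
    """Greedy action restricted to actions the behavior policy supports."""
    q = q_net(state)
    probs = bc_net(state)
    # Keep actions whose probability is at least tau * max probability
    mask = probs / probs.max(dim=-1, keepdim=True).values >= tau
    q_masked = q.masked_fill(~mask, float("-inf"))
    return q_masked.argmax(dim=-1)

s = torch.randn(5, state_dim)                  # batch of 5 states
print(bcq_action(s))                           # one supported action per state
```

In the sizing study, a buffer assembled from agents trained in similar microgrid environments replaces online interaction, which is where the computation-time savings come from.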

Keywords: batch-constrained reinforcement learning, control, design, optimal

Procedia PDF Downloads 94
246 Analysis and Comparison of Asymmetric H-Bridge Multilevel Inverter Topologies

Authors: Manel Hammami, Gabriele Grandi

Abstract:

In recent years, multilevel inverters have become more attractive for single-phase photovoltaic (PV) systems, due to their known advantages over conventional H-bridge pulse-width-modulated (PWM) inverters. They offer improved output waveforms, smaller filter size, lower total harmonic distortion (THD), and higher output voltages, among other benefits. The most common multilevel converter topologies presented in the literature are the neutral-point-clamped (NPC), flying capacitor (FC), and cascaded H-bridge (CHB) converters. In both the NPC and FC configurations, the number of components increases drastically with the number of levels, which leads to a complex control strategy, high volume, and high cost. By contrast, increasing the number of levels in the cascaded H-bridge configuration is a flexible solution; however, it needs isolated power sources for each stage, and it can be applied to PV systems only in the case of PV sub-fields. In order to improve the ratio between the number of output voltage levels and the number of components, several hybrid and asymmetric topologies of multilevel inverters have been proposed in the literature, such as the FC asymmetric H-bridge (FCAH) and the NPC asymmetric H-bridge (NPCAH) topologies. Another asymmetric multilevel inverter configuration that could have interesting applications is the cascaded asymmetric H-bridge (CAH), which is based on a modular half-bridge (two switches and one capacitor, also called a level doubling network, LDN) cascaded to a full H-bridge in order to double the number of output voltage levels. This solution has the same number of switches as the above-mentioned AH configurations (i.e., six) and just one capacitor (as in the FCAH). The CAH is becoming popular due to its simple, modular, and reliable structure, and it can be considered a retrofit that can be added in series to an existing H-bridge configuration in order to double the number of output voltage levels.
In this paper, an original and effective method for the analysis of the DC-link voltage ripple is given for single-phase asymmetric H-bridge multilevel inverters based on a level doubling network (LDN). Different possible configurations of the asymmetric H-bridge multilevel inverter have been considered, and the input voltage and current are analytically determined and numerically verified in Matlab/Simulink for the case of cascaded asymmetric H-bridge multilevel inverters. A comparison between the FCAH and CAH configurations is made on the basis of the analysis of the current and voltage ripple at the DC source (i.e., the PV system). The peak-to-peak current and voltage ripple amplitudes are analytically calculated over the fundamental period as a function of the modulation index. On the basis of the maximum peak-to-peak values of the low-frequency and switching ripple voltage components, the DC capacitors can be designed. Reference is made to unity output power factor, as in most grid-connected PV generation systems. Simulation results will be presented in the full paper in order to prove the effectiveness of the proposed developments in all operating conditions.

Keywords: asymmetric inverters, dc-link voltage, level doubling network, single-phase multilevel inverter

Procedia PDF Downloads 183
245 A Web and Cloud-Based Measurement System Analysis Tool for the Automotive Industry

Authors: C. A. Barros, Ana P. Barroso

Abstract:

Any industrial company needs to determine the amount of variation that exists within its measurement process and to guarantee the reliability of its data by studying the performance of its measurement system in terms of linearity, bias, repeatability, reproducibility, and stability. This issue is critical for automotive industry suppliers, who are required to be certified to the IATF 16949:2016 standard (which replaces ISO/TS 16949) of the International Automotive Task Force, defining the requirements of a quality management system for companies in the automotive industry. Measurement System Analysis (MSA) is one of the mandatory tools. Frequently, the measurement system in companies is not connected to the equipment and does not incorporate the methods proposed by the Automotive Industry Action Group (AIAG). To address these constraints, an R&D project is in progress whose objective is to develop a web and cloud-based MSA tool. This MSA tool incorporates Industry 4.0 concepts, such as Internet of Things (IoT) protocols to ensure the connection with the measuring equipment, cloud computing, artificial intelligence, statistical tools, and advanced mathematical algorithms. This paper presents the preliminary findings of the project. The web and cloud-based MSA tool is innovative because it implements all the statistical tests proposed in the MSA-4 reference manual from AIAG, as well as other emerging methods and techniques. As it is integrated with the measuring devices, it reduces the manual input of data and therefore the errors. The tool ensures the traceability of all performed tests and can be used in quality laboratories and on production lines. Besides, it monitors MSAs over time, allowing both the analysis of deviations in the variation of the measurements performed and the management of measurement equipment and calibrations. To develop the MSA tool, a ten-step approach was implemented.
First, a benchmarking analysis of the current competitors and commercial solutions linked to MSA was performed with respect to the Industry 4.0 paradigm. Next, an analysis of the size of the target market for the MSA tool was carried out. Afterwards, data flow and traceability requirements were analysed in order to implement an IoT data network that interconnects with the equipment, preferably wirelessly. The MSA web solution was designed under UI/UX principles, and an API in Python was developed to perform the algorithms and the statistical analysis. Continuous validation of the tool by companies is being performed to ensure real-time management of the ‘big data’. The main results of this R&D project are: a web and cloud-based MSA tool; a Python API; new algorithms for the market; and a UI/UX style guide for the tool. The proposed MSA tool adds value to the state of the art, as it ensures an effective response to the new challenges of measurement systems, which are increasingly critical in production processes. Although the automotive industry triggered the development of this innovative MSA tool, other industries will also benefit from it. Currently, companies from the molds and plastics, chemical, and food industries are already validating it.
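
As one concrete example of a computation such a tool must implement, the classical average-and-range Gauge R&R study can be sketched as follows; the toy data, the d2 constants, and the layout are illustrative assumptions, not the project's actual implementation:

```python
import numpy as np

# measurements[operator, part, trial]: 2 operators, 3 parts, 3 trials (toy data)
m = np.array([
    [[10.1, 10.2, 10.0], [12.0, 12.1, 11.9], [ 9.5,  9.6,  9.4]],
    [[10.3, 10.2, 10.4], [12.2, 12.1, 12.3], [ 9.7,  9.8,  9.6]],
])
d2_trials, d2_ops = 1.693, 1.128     # range-to-sigma constants, sizes 3 and 2

rbar = np.mean(m.max(axis=2) - m.min(axis=2))   # mean within-subgroup range
ev = rbar / d2_trials                           # equipment variation (repeatability)
xdiff = np.ptp(m.mean(axis=(1, 2)))             # spread of operator averages
n_parts, n_trials = m.shape[1], m.shape[2]
av2 = (xdiff / d2_ops) ** 2 - ev**2 / (n_parts * n_trials)
av = np.sqrt(max(av2, 0.0))                     # appraiser variation (reproducibility)
grr = np.hypot(ev, av)                          # combined gauge R&R
print(round(ev, 4), round(av, 4), round(grr, 4))
```

In the cloud tool, results such as %GRR against total variation would then be tracked over time per gauge, which is what enables the monitoring of calibration drift described above.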

Keywords: automotive Industry, industry 4.0, Internet of Things, IATF 16949:2016, measurement system analysis

Procedia PDF Downloads 181
244 The Impact of Shifting Trading Pattern from Long-Haul to Short-Sea to the Car Carriers’ Freight Revenues

Authors: Tianyu Wang, Nikita Karandikar

Abstract:

The uncertainty around the cost, safety, and feasibility of decarbonized shipping fuels has made it increasingly complex for shipping companies to set pricing strategies and forecast their freight revenues going forward. The increase in green fuel surcharges will ultimately influence automobile consumer prices. Auto shipping demand (in ton-miles) has been gradually shifting from long-haul to short-sea trade over the past years, following the relocation of original equipment manufacturer (OEM) production to regions such as South America and Southeast Asia. The objective of this paper is twofold: 1) to investigate the development of car carriers' freight revenue over the years as the trade pattern gradually shifts towards short-sea exports, and 2) to empirically identify the quantitative impact of this shift mainly on freight rates, but also on vessel size, fleet size, and greenhouse gas (GHG) emissions in roll-on/roll-off (Ro-Ro) shipping. In this paper, a model for analyzing and forecasting ton-miles and freight revenues on the AS-NA (Asia to North America), EU-NA (Europe to North America), and SA-NA (South America to North America) trade routes is established by deploying Automatic Identification System (AIS) data and the financial results of a selected car carrier company. More specifically, Wallenius Wilhelmsen Logistics (WALWIL), the Norwegian Ro-Ro carrier listed on the Oslo Stock Exchange, is selected as the case study company. AIS-based ton-mile datasets of WALWIL vessels sailing into the North America region from three different origins (Asia, Europe, and South America), together with WALWIL's quarterly freight revenues as reported in trade segments, are investigated and compared for the past five years (2018-2022). Furthermore, ordinary least squares (OLS) regression is utilized to construct the ton-mile demand and freight revenue forecasting models.
The determinants of the trade pattern shift, such as import tariffs following the China-US trade war and fuel prices following the 0.1% sulphur Emission Control Area (ECA) requirement after IMO 2020, are set as key variable inputs to the machine learning model. The model is tested on another newly listed Norwegian car carrier, Hoegh Autoliner, to forecast its 2022 financial results and to validate its accuracy against the actual results. GHG emissions on the three routes are compared and discussed based on a constant emission-per-mile assumption and voyage distances. Our findings provide important insights into 1) the trade-off between revenue reduction and energy saving under the new ton-mile pattern and 2) how the shifting trade flows would influence future vessel and fleet size requirements.
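The OLS forecasting step described above can be sketched as follows; everything here (choice of regressors, quarterly figures) is an illustrative placeholder, not WALWIL financials or AIS data:

```python
import numpy as np

# Hypothetical quarterly observations (all values invented for illustration):
# columns = ton-miles (billions), fuel price index, tariff dummy (1 after tariffs)
X_raw = np.array([
    [3.1, 0.9, 0],
    [3.0, 1.0, 0],
    [2.8, 1.1, 1],
    [2.6, 1.3, 1],
    [2.5, 1.4, 1],
    [2.4, 1.6, 1],
])
y = np.array([410.0, 405.0, 398.0, 402.0, 407.0, 415.0])  # freight revenue, USDm

# Add an intercept column and solve the OLS problem by least squares
X = np.column_stack([np.ones(len(X_raw)), X_raw])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Forecast revenue for a new quarter from its regressors
x_new = np.array([1.0, 2.3, 1.7, 1])
forecast = x_new @ beta
print(beta, forecast)
```

In practice, each route (AS-NA, EU-NA, SA-NA) would get its own regression, with the ton-mile series itself forecast first and then fed into the revenue equation.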

Keywords: AIS, automobile exports, maritime big data, trade flows

Procedia PDF Downloads 93
243 Working Memory and Audio-Motor Synchronization in Children with Different Degrees of Central Nervous System's Lesions

Authors: Anastasia V. Kovaleva, Alena A. Ryabova, Vladimir N. Kasatkin

Abstract:

Background: The simplest form of entrainment to a sensory (typically auditory) rhythmic stimulus involves perceiving and synchronizing movements with an isochronous beat with one level of periodicity, such as that produced by a metronome. Children with pediatric cancer are usually treated with chemo- and radiotherapy, and as a consequence of such treatment, psychologists and health professionals report declines in cognitive and motor abilities in these patients. The purpose of our study was to measure working memory characteristics in association with audio-motor synchronization tasks, which also draw on memory resources, in children with different degrees of central nervous system lesions: posterior fossa tumors, acute lymphoblastic leukemia, and healthy controls. Methods: Our sample consisted of three groups of children: children treated for posterior fossa tumors (PFT group, n=42, mean age 12.23), children treated for acute lymphoblastic leukemia (ALL group, n=11, mean age 11.57), and neurologically healthy children (control group, n=36, mean age 11.67). Participants were tested for working memory characteristics with the Cambridge Neuropsychological Test Automated Battery (CANTAB); pattern recognition memory (PRM) and spatial working memory (SWM) tests were applied. Outcome measures of the PRM test include the number and percentage of correct trials and latency (speed of the participant's response); measures of the SWM test include errors, strategy, and latency. In the synchronization tests, the instruction was to tap out a regular beat (40, 60, 90, and 120 beats per minute) in synchrony with the rhythmic sequences that were played, meaning that for sequences with an isochronous beat, participants were required to tap to every auditory event. Variability of inter-tap intervals and deviations of children's taps from the metronome were assessed.
Results: Analysis of variance revealed a significant effect of group (ALL, PFT, and control) on parameters such as short-term PRM, SWM strategy, and errors. Healthy controls demonstrated more correctly retained elements and a better working memory strategy compared to cancer patients. Interestingly, ALL patients chose a poorer strategy but committed significantly fewer errors in the SWM test than the PFT and control groups did. As to rhythmic ability, significant associations with working memory were found only for the 40 bpm rhythm: the less variable a child's inter-tap intervals, the more elements in memory he/she could retain. The ability to achieve audio-motor synchronization may be related to working memory processes mediated by the prefrontal cortex, whereby each sensory event is actively retrieved and monitored during rhythmic sequencing. Conclusion: Our results suggest that working memory, tested with appropriate cognitive methods, is associated with the ability to synchronize movements with rhythmic sounds, especially at the slowest tempo (40 beats per minute).
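The two tapping measures assessed above — inter-tap-interval variability and deviation from the metronome — can be computed in a few lines; the tap times below are invented for illustration, not participant data:

```python
import statistics

def iti_variability(tap_times):
    """Coefficient of variation of inter-tap intervals (lower = steadier tapping)."""
    itis = [b - a for a, b in zip(tap_times, tap_times[1:])]
    return statistics.stdev(itis) / statistics.mean(itis)

def mean_asynchrony(tap_times, beat_times):
    """Mean absolute deviation of taps from the metronome beats, in seconds."""
    return statistics.mean(abs(t - b) for t, b in zip(tap_times, beat_times))

# 40 bpm metronome -> one beat every 1.5 s; the taps are illustrative
beats = [i * 1.5 for i in range(8)]
taps = [0.05, 1.48, 3.10, 4.55, 5.95, 7.52, 9.02, 10.44]
print(iti_variability(taps), mean_asynchrony(taps, beats))
```

Per the study's finding, a lower coefficient of variation at 40 bpm would be expected to accompany a higher number of retained elements.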

Keywords: acute lymphoblastic leukemia (ALL), audio-motor synchronization, posterior fossa tumor, working memory

Procedia PDF Downloads 282
242 Decentralized Peak-Shaving Strategies for Integrated Domestic Batteries

Authors: Corentin Jankowiak, Aggelos Zacharopoulos, Caterina Brandoni

Abstract:

In a context of increasing stress put on the electricity network by the decarbonization of many sectors, energy storage is likely to be the key mitigating element, acting as a buffer between production and demand. In particular, storage has the highest potential when connected close to the loads. Yet, low voltage storage struggles to penetrate the market at a large scale due to the novelty and complexity of the solution, and the competitive advantage of fossil fuel-based technologies with regard to regulations. Strong and reliable numerical simulations are required to show the benefits of storage located near loads and promote its development. The present study deliberately excludes aggregated control of storage: it is assumed that the storage units operate independently of one another without exchanging information, as is currently mostly the case. A computationally light battery model is presented in detail and validated by direct comparison with a domestic battery operating in real conditions. This model is then used to develop Peak-Shaving (PS) control strategies, as peak-shaving is the decentralized service from which beneficial impacts are most likely to emerge. The aggregation of flatter, peak-shaved consumption profiles is likely to lead to flatter, arbitraged profiles at higher voltage levels. Furthermore, voltage fluctuations can be expected to decrease if spikes of individual consumption are reduced. The crucial part of achieving PS lies in the charging pattern: peaks depend on the switching on and off of appliances in the dwelling by the occupants and are therefore impossible to predict accurately. A performant PS strategy must, therefore, include a smart charge recovery algorithm that ensures enough energy is present in the battery in case it is needed, without generating new peaks by charging the unit. Three categories of PS algorithms are introduced in detail.
The first uses a constant threshold or power rate for charge recovery; the second uses the State of Charge (SOC) as a decision variable; the third uses a load forecast, the impact of whose accuracy is discussed, to generate PS. A set of performance metrics was defined to quantitatively evaluate their operation regarding peak reduction, total energy consumption, and self-consumption of domestic photovoltaic generation. The algorithms were tested on load profiles with a 1-minute granularity over a 1-year period, and their performance was assessed with these metrics. The results show that a constant charging threshold or power is far from optimal: a fixed value is unlikely to fit the variability of a residential profile. As could be expected, forecast-based algorithms show the highest performance; however, they depend on the accuracy of the forecast. On the other hand, SOC-based algorithms also present satisfying performance, making them a strong alternative when a reliable forecast is not available.
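A minimal sketch of one SOC-based strategy of the second category is given below; the battery capacity, threshold, and rate are hypothetical placeholders, not the paper's parameters:

```python
# SOC-based peak-shaving sketch (all parameters hypothetical). Discharge to cap
# grid draw at a threshold; recover charge at a rate that tapers as the state
# of charge (SOC) rises, so that charging itself never creates a new peak.

def peak_shave_step(load_kw, soc, capacity_kwh=5.0, threshold_kw=2.0,
                    max_rate_kw=1.5, dt_h=1 / 60):
    """Return (grid_kw, new_soc) for one 1-minute step."""
    if load_kw > threshold_kw:
        # Shave the peak: discharge, limited by power rating and stored energy
        discharge = min(load_kw - threshold_kw, max_rate_kw,
                        soc * capacity_kwh / dt_h)
        return load_kw - discharge, soc - discharge * dt_h / capacity_kwh
    # Charge recovery: rate shrinks with SOC and is capped so that
    # load + charge never exceeds the threshold
    charge = min(max_rate_kw * (1 - soc), threshold_kw - load_kw)
    return load_kw + charge, soc + charge * dt_h / capacity_kwh

# One hour of minute-level load (kW) with a spike in the middle
load = [1.0] * 20 + [3.5] * 10 + [1.0] * 30
soc, grid = 0.5, []
for l in load:
    g, soc = peak_shave_step(l, soc)
    grid.append(g)
print(max(load), max(grid))  # the shaved profile should peak lower
```

A forecast-based variant would replace the SOC taper with a charging schedule derived from the predicted load, which is where forecast accuracy enters.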

Keywords: decentralised control, domestic integrated batteries, electricity network performance, peak-shaving algorithm

Procedia PDF Downloads 97
241 Gas Metal Arc Welding of Clad Plates API 5L X-60/316L Applying External Magnetic Fields during Welding

Authors: Blanca A. Pichardo, Victor H. Lopez, Melchor Salazar, Rafael Garcia, Alberto Ruiz

Abstract:

Clad pipes, in comparison to plain carbon steel pipes, offer the oil and gas industry high corrosion resistance, reduction of the economic losses due to pipeline failures and maintenance, lower labor risk, and prevention of the pollution and environmental damage caused by hydrocarbon spills from deteriorated pipelines. In this context, it is paramount to establish reliable welding procedures to join bimetallic plates or pipes. Thus, the aim of this work is to study the microstructure and mechanical behavior of clad plates welded by the gas metal arc welding (GMAW) process. A clad of 316L stainless steel was deposited onto API 5L X-60 plates by overlay welding with the GMAW process. Welding parameters were: 22.5 V, 271 A, heat input 1.25 kJ/mm, shielding gas 98% Ar + 2% O₂, reverse polarity, torch displacement speed 3.6 mm/s, wire feed rate 120 mm/s, electrode diameter 1.2 mm, and application of an electromagnetic field of 3.5 mT. The overlay welds were subjected to macrostructural and microstructural characterization. After manufacturing the clad plates, a single-V groove joint was machined with a 60° bevel and a 1 mm root face. GMA welding of the bimetallic plates was performed in four passes with ER316L-Si filler for the root pass and an ER70S-6 electrode for the subsequent welding passes. For joining the clad plates, an electromagnetic field was applied with two purposes: to improve the microstructural characteristics and to stabilize the electric arc during welding in order to avoid magnetic arc blow. The welds were macro- and microstructurally characterized, and the mechanical properties were also evaluated. Vickers microhardness (100 g load for 10 s) measurements were made across the welded joints at three levels. The first profile, at the 316L stainless steel cladding, was quite even, with a value of approximately 230 HV.
The second microhardness profile showed high values in the weld metal, ~400 HV, due to the formation of a martensitic microstructure by dilution of the first welding pass with the second. The third profile crossed the third and fourth welding passes, and an average value of 240 HV was measured. In the tensile tests, yield strength was between 400 and 450 MPa, with a tensile strength of ~512 MPa. In the Charpy impact tests, the results were 86 and 96 J for specimens with the notch in the face and in the root of the weld bead, respectively. The measured mechanical properties were in the range of the API 5L X-60 base material. The overlay welding process used for cladding is not suitable for large components; however, it guarantees a metallurgical bond, unlike the most commonly used processes, such as thermal expansion. For welding bimetallic plates, control of the temperature gradients is key to avoiding distortion. Besides, the dissimilar nature of the bimetallic plates gives rise to the formation of a martensitic microstructure during welding.
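As a consistency check, the heat input follows from the listed voltage, current, and travel speed once an arc efficiency $\eta$ is assumed ($\eta$ is not stated in the abstract; the value 0.74 used below is an assumption that lies within the 0.7–0.85 range commonly quoted for GMAW):

```latex
\mathrm{HI} \;=\; \frac{\eta \, V I}{v}
\;=\; \frac{0.74 \times 22.5\ \mathrm{V} \times 271\ \mathrm{A}}{3.6\ \mathrm{mm/s}}
\;\approx\; 1.25\ \mathrm{kJ/mm}
```

Without the efficiency factor, the gross arc energy per unit length would be about 1.69 kJ/mm, so the reported 1.25 kJ/mm is consistent with a net (efficiency-corrected) heat input.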

Keywords: clad pipe, dissimilar welding, gas metal arc welding, magnetic fields

Procedia PDF Downloads 134
240 Quality of Life Responses of Students with Intellectual Disabilities Entering an Inclusive, Residential Post-Secondary Program

Authors: Mary A. Lindell

Abstract:

Adults with intellectual disabilities (ID) are increasingly attending postsecondary institutions, including inclusive residential programs at four-year universities. Legislation, national organizations, and researchers support developing postsecondary education (PSE) options for this historically underserved population. Simultaneously, researchers are assessing quality of life (QOL) indicators for people with ID. This study explores the quality of life characteristics of individuals with ID entering a two-year PSE program. A survey aligned with the PSE program was developed and administered to participants before they began their college program (in future studies, the same survey will be administered 6 months and 1 year after graduation). Employment, income, and housing are frequently cited QOL measures. People with disabilities, and especially people with ID, are more likely to experience unemployment and low wages than people without disabilities. PSE improves adult outcomes (e.g., employment, income, housing) for people with and without disabilities. Similarly, adults with ID who attend PSE are more likely to be employed than their peers who do not; however, adults with ID are the least likely, among both their typical peers and other students with disabilities, to attend PSE. There is increased attention to providing individuals with ID access to PSE, and more research is needed regarding the characteristics of students attending PSE. This study focuses on the participants of a fully residential two-year program for individuals with ID. Students earn an Applied Skills Certificate while focusing on five benchmarks: self-care, home care, relationships, academics, and employment. To create a QOL measure, the goals of the PSE program were identified, and possible assessment items that aligned with the five program goals were initially selected from the National Core Indicators (NCI) and the National Transition Longitudinal Survey 2 (NTLS2).
Program staff and advisory committee members offered input on potential item alignment with program goals and expected value to students with ID in the program. National experts in researching QOL outcomes of people with ID were consulted and concurred that the selected items would be useful in measuring the outcomes of postsecondary students with ID. The measure was piloted, modified, and administered to incoming students with ID. Research questions: (1) In what ways are students with ID entering a two-year PSE program similar to individuals with ID who complete the NCI and NTLS2 surveys? (2) In what ways are students with ID entering a two-year PSE program different from individuals with ID who completed the NCI and NTLS2 surveys? The process of developing a QOL measure specific to a PSE program for individuals with ID revealed that many of the items in comprehensive national QOL measures are not relevant to stakeholders of this two-year residential inclusive PSE program. Specific responses of students with ID entering an inclusive PSE program will be presented, as well as a comparison to similar items on national QOL measures. This study explores the characteristics of students with ID entering a residential, inclusive PSE program; this information is valuable for researchers, educators, and policy makers as PSE programs become more accessible to individuals with ID.

Keywords: intellectual disabilities, inclusion, post-secondary education, quality of life

Procedia PDF Downloads 77
239 Modeling Driving Distraction Considering Psychological-Physical Constraints

Authors: Yixin Zhu, Lishengsa Yue, Jian Sun, Lanyue Tang

Abstract:

Modeling driving distraction in microscopic traffic simulation is crucial for enhancing simulation accuracy. Current driving distraction models are mainly derived from the physical motion constraints of distracted states, in which distraction-related error terms are added to existing microscopic driver models. However, model accuracy is not very satisfying due to a lack of modeling of the cognitive mechanism underlying distraction. This study models driving distraction based on the Queueing Network-Model Human Processor (QN-MHP), using the queueing structure of the model to perform task invocation and switching for vehicle operation and control under driver distraction. Under the QN-MHP's assumptions about the cognitive sub-network, server F is a structural bottleneck: new information must wait for the previous information to leave server F before it can be processed there, so the waiting time for task switching needs to be calculated. Since the QN-MHP has different information processing paths for auditory and visual information, this study divides driving distraction into two types: auditory distraction and visual distraction. For visual distraction, both the visual distraction task and the driving task must pass through the visual perception sub-network, and their stimuli are asynchronous, which is characterized by the stimulus onset asynchrony (SOA); the SOA must therefore be taken into account when calculating the waiting time for task switching. In the case of auditory distraction, the auditory distraction task and the driving task do not compete for the server resources of the perceptual sub-network, and their stimuli can be treated as synchronized without considering the time difference in receiving them. Following the Theory of Planned Behavior (TPB) for drivers, this study uses risk entropy as the decision criterion for driver task switching.
A logistic regression model with risk entropy as the independent variable is used to determine whether the driver performs a distraction task, explaining the relationship between perceived risk and distraction. Furthermore, to model a driver's perception characteristics, a neurophysiological model of visual distraction tasks is incorporated into the QN-MHP, which then drives the classical Intelligent Driver Model (IDM). The proposed driving distraction model integrates the psychological cognitive process of a driver with the physical motion characteristics, resulting in both high accuracy and interpretability. This paper uses 773 segments of distracted car-following from the Shanghai Naturalistic Driving Study (SH-NDS) to classify the patterns of distracted behavior on different road facilities, obtaining three distraction patterns: numbness, delay, and aggressiveness. The model was calibrated and verified by simulation. The results indicate that the model can effectively simulate distracted car-following behavior of different patterns on various roadway facilities, and that its performance is better than that of the traditional IDM with distraction-related error terms. The proposed model overcomes the limitations of physical-constraints-based models in replicating dangerous driving behaviors and the internal characteristics of individual drivers. Moreover, the model is demonstrated to effectively generate more dangerous distracted driving scenarios, which can be used to construct high-value automated driving test scenarios.
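For orientation, the classical Intelligent Driver Model that serves as the car-following baseline can be sketched as follows; the parameter values are typical textbook choices, not the SH-NDS-calibrated ones:

```python
import math

# Classical Intelligent Driver Model (IDM). Parameters (illustrative defaults):
# V0 desired speed (m/s), T safe time headway (s), A max acceleration (m/s^2),
# B comfortable deceleration (m/s^2), S0 minimum gap (m), DELTA acceleration exponent.
V0, T, A, B, S0, DELTA = 33.3, 1.5, 1.0, 2.0, 2.0, 4

def idm_accel(v, gap, dv):
    """IDM acceleration: v = own speed (m/s), gap = bumper-to-bumper gap (m),
    dv = approach rate v - v_leader (m/s)."""
    s_star = S0 + v * T + v * dv / (2 * math.sqrt(A * B))  # desired gap
    return A * (1 - (v / V0) ** DELTA - (s_star / gap) ** 2)

# Free road (huge gap): accelerates; closing fast on a stopped leader: brakes hard
a_free = idm_accel(20.0, 1e6, 0.0)
a_close = idm_accel(20.0, 15.0, 20.0)
print(a_free, a_close)
```

In the paper's framework, distraction alters the inputs to such a baseline (delayed or degraded perception of `gap` and `dv`) rather than simply adding an error term to its output.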

Keywords: computational cognitive model, driving distraction, microscopic traffic simulation, psychological-physical constraints

Procedia PDF Downloads 59
238 Analysis of Long-Term Response of Seawater to Change in CO₂, Heavy Metals and Nutrients Concentrations

Authors: Igor Povar, Catherine Goyet

Abstract:

Seawater is subject to multiple external stressors (ES), including rising atmospheric CO₂ and ocean acidification, global warming, atmospheric deposition of pollutants, and eutrophication, which deeply alter its chemistry, often on a global scale and, in some cases, to a degree significantly exceeding that of the historical and recent geological record. In ocean systems, the micro- and macronutrients, heavy metals, and phosphorus- and nitrogen-containing components exist in different forms depending on the concentrations of various other species, the organic matter, the types of minerals, the pH, etc. The major limitation to a stricter assessment of ES on the oceans, such as pollutants (atmospheric greenhouse gases, heavy metals, and nutrients such as nitrates and phosphates), is the lack of a theoretical approach that could predict the ocean's resistance to multiple external stressors. In order to assess the abovementioned ES, this research has applied and developed the buffer theory approach and theoretical expressions of formal chemical thermodynamics for ocean systems treated as heterogeneous aqueous systems. Thermodynamic expressions for complex chemical equilibria, involving acid-base, complex-formation, and mineral equilibria, have been deduced. This thermodynamic approach couples thermodynamic relationships with original mass balance constraints in which the solid phases are explicitly expressed. The ocean's sensitivity to different external stressors and changes in driving factors is considered in terms of derived buffering capacities, or buffer factors, for heterogeneous systems. Our investigations show that heterogeneous aqueous systems, as oceans and seas are, manifest buffer properties towards all of their components, not only pH, as has been known so far: for example, towards carbon dioxide, carbonates, phosphates, Ca²⁺, Mg²⁺, heavy metal ions, etc.
The derived expressions make it possible to attribute changes in chemical ocean composition to different pollutants. These expressions are also useful for improving current atmosphere-ocean-marine biogeochemistry models. The major research questions to which the research responds are: (i) What kind of contamination is the most harmful for the future ocean? (ii) What are the heterogeneous chemical processes of heavy metal release from sediments and minerals, and what is their impact on the ocean's buffer action? (iii) What will be the long-term response of the coastal ocean to the oceanic uptake of anthropogenic pollutants? (iv) How will the ocean's resistance change in terms of future complex chemical processes and buffer capacities, and how will it respond to external (anthropogenic) perturbations? The ocean's buffer capacities towards its main components are recommended as parameters to include when determining the most important factors defining the response of the ocean environment as technogenic loads increase. The deduced thermodynamic expressions are valid for any combination of chemical composition, with any of the species contributing to the total concentration taken as an independent state variable.
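For orientation, the best-known instance of such a buffer factor is the standard Revelle factor for the CO₂–DIC couple (quoted here as a textbook definition, not a result of this paper):

```latex
R \;=\; \left(\frac{\partial \ln p\mathrm{CO_2}}{\partial \ln \mathrm{DIC}}\right)_{T,\,S,\,A_\mathrm{T}}
\;\approx\; \frac{\Delta p\mathrm{CO_2} / p\mathrm{CO_2}}{\Delta \mathrm{DIC} / \mathrm{DIC}}
```

The heterogeneous buffer capacities derived in this work generalize this single-component idea to the other components listed above — carbonates, phosphates, Ca²⁺, Mg²⁺, heavy metal ions — with the solid phases appearing explicitly in the mass balances.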

Keywords: atmospheric greenhouse gas, chemical thermodynamics, external stressors, pollutants, seawater

Procedia PDF Downloads 111
237 Role of Vitamin-D in Reducing Need for Supplemental Oxygen Among COVID-19 Patients

Authors: Anita Bajpai, Sarah Duan, Ashlee Erskine, Shehzein Khan, Raymond Kramer

Abstract:

Introduction: This research explores the beneficial effects, if any, of Vitamin-D in reducing the need for supplemental oxygen among hospitalized COVID-19 patients. Two questions are investigated: Q1) does having a healthy level of baseline Vitamin-D 25-OH (≥ 30 ng/ml) help, and Q2) does administering Vitamin-D therapy after the fact, during inpatient hospitalization, help? Methods/Study Design: This is a comprehensive, retrospective, observational study of all inpatients at RUHS from March through December 2020 who tested positive for COVID-19 based on a real-time reverse transcriptase-polymerase chain reaction assay of nasal and pharyngeal swabs and a rapid assay antigen test. To address Q1, we looked at all N1=182 patients whose baseline plasma Vitamin-D 25-OH was known and who needed supplemental oxygen. Of these, 121 patients had a healthy Vitamin-D level of ≥ 30 ng/ml, while the remaining 61 patients had a low or borderline (≤ 29.9 ng/ml) level. Similarly, for Q2, we looked at a total of N2=893 patients who were given supplemental oxygen, of which 713 were not given Vitamin-D and 180 were given Vitamin-D therapy. The maximum oxygen flow rate administered (the dependent variable) was recorded for each patient, and the mean values and associated standard deviations for each group were calculated. These two sets of independent data served as the basis for an independent two-sample t-test statistical analysis. To be accommodative of any reasonable benefit of Vitamin-D, a p-value of 0.10 (α < 10%) was set as the cutoff for statistical significance. Results: Given the large sample sizes, the calculated statistical power for both studies exceeded the customary norm of 80% (β < 0.2).
For Q1, the mean maximum oxygen flow rate for the group with a healthy baseline level of Vitamin-D was 8.6 L/min vs. 12.6 L/min for those with low or borderline levels, yielding a p-value of 0.07 (p < 0.10); we conclude that those with a healthy level of baseline Vitamin-D needed significantly lower levels of supplemental oxygen. For Q2, the mean maximum oxygen flow rate for those not administered Vitamin-D was 12.5 L/min vs. 12.8 L/min for those given Vitamin-D, yielding a p-value of 0.87 (p > 0.10); we therefore conclude that there was no statistically significant difference in the use of oxygen therapy between those who were or were not administered Vitamin-D after the fact in the hospital. Discussion/Conclusion: We found that patients who had healthy levels of Vitamin-D at baseline needed significantly lower levels of supplemental oxygen. Vitamin-D is well documented, including in a recent article in the Lancet, for its anti-inflammatory role as an adjuvant in the regulation of cytokines and immune cells. Interestingly, we found no statistically significant advantage to giving Vitamin-D to hospitalized patients; it may be a case of “too little, too late”. A randomized clinical trial reported in JAMA also did not find any reduction in the hospital stay of patients given Vitamin-D. Such conclusions come with the caveat that any delayed marginal benefits may not have materialized promptly in the presence of a significant inflammatory condition. Since Vitamin-D is a low-cost, low-risk option, it may still be useful on an inpatient basis until more definitive findings are established.
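The independent two-sample comparison described above can be sketched as a Welch t statistic; the flow-rate lists below are hypothetical stand-ins for illustration, not the study's patient-level data:

```python
import math
import statistics

def welch_t(x, y):
    """Welch (unequal-variance) two-sample t statistic and its degrees of freedom."""
    nx, ny = len(x), len(y)
    vx, vy = statistics.variance(x), statistics.variance(y)  # sample variances
    t = (statistics.mean(x) - statistics.mean(y)) / math.sqrt(vx / nx + vy / ny)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (vx / nx + vy / ny) ** 2 / (
        (vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

healthy = [6, 8, 10, 7, 9, 11, 8, 10]   # L/min, healthy baseline Vitamin-D (invented)
low = [10, 13, 15, 11, 14, 12, 13, 16]  # L/min, low/borderline Vitamin-D (invented)
t, df = welch_t(healthy, low)
print(t, df)  # a large negative t means the healthy group needed less oxygen
```

The p-value then follows by referring t to a Student t distribution with df degrees of freedom and comparing it against the study's α = 0.10 cutoff.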

Keywords: COVID-19, vitamin-D, supplemental oxygen, vitamin-D in primary care

Procedia PDF Downloads 128
236 Music Genre Classification Based on Non-Negative Matrix Factorization Features

Authors: Soyon Kim, Edward Kim

Abstract:

In order to retrieve information from the massive stream of songs in the music industry, music search by title, lyrics, artist, mood, and genre has become more important. Despite the subjectivity of, and controversy over, the definition of music genres across different nations and cultures, automatic genre classification systems that facilitate the process of music categorization have been developed, with manual genre selection by music producers providing the statistical data for their design. In this paper, an automatic music genre classification system utilizing non-negative matrix factorization (NMF) is proposed. Short-term characteristics of the music signal can be captured by timbre features such as the mel-frequency cepstral coefficients (MFCC), decorrelated filter bank (DFB), octave-based spectral contrast (OSC), and octave band sum (OBS). Long-term time-varying characteristics of the music signal can be summarized with (1) statistical features such as the mean, variance, minimum, and maximum of the timbre features and (2) modulation spectrum features such as the spectral flatness measure, spectral crest measure, spectral peak, spectral valley, and spectral contrast of the timbre features. In addition to these conventional long-term feature vectors, NMF-based feature vectors are proposed for use in genre classification. In the training stage, NMF basis vectors were extracted for each genre class. The NMF features were calculated in the log spectral magnitude domain (NMF-LSM) as well as in the basic feature vector domain (NMF-BFV). For NMF-LSM, the entire full-band spectrum was used; for NMF-BFV, only the low-band spectrum was used, since the high-frequency modulation spectrum of the basic feature vectors did not contain information important for genre classification.
In the test stage, using the set of pre-trained NMF basis vectors, the genre classification system extracted the NMF weighting values for each genre as the NMF feature vectors. A support vector machine (SVM) was used as the classifier. The GTZAN multi-genre music database, composed of 10 genres with 100 songs per genre, was used for training and testing, and 10-fold cross validation was used to increase the reliability of the experiments. For a given input song, an extracted NMF-LSM feature vector was composed of 10 weighting values corresponding to the classification probabilities for the 10 genres; an NMF-BFV feature vector likewise had a dimensionality of 10. Combined with the basic long-term features (the statistical and modulation spectrum features), the NMF features provided increased accuracy with only a slight increase in feature dimensionality. The conventional basic features by themselves yielded 84.0% accuracy, while the basic features with NMF-LSM and with NMF-BFV provided 85.1% and 84.2% accuracy, respectively. The basic features required a dimensionality of 460, whereas NMF-LSM and NMF-BFV each required a dimensionality of only 10. Combining the basic features, NMF-LSM, and NMF-BFV with an SVM using a radial basis function (RBF) kernel produced a significantly higher classification accuracy of 88.3% with a feature dimensionality of 480.
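The train/encode pipeline above can be sketched with multiplicative-update NMF; the random matrices below stand in for log-spectral magnitudes, and all shapes and iteration counts are illustrative placeholders, not the paper's configuration:

```python
import numpy as np

# Sketch of NMF-based genre features: learn basis vectors W on training data,
# then use the encoding weights of a test spectrum against W as its feature vector.
rng = np.random.default_rng(0)

def nmf(V, k, iters=200, eps=1e-9):
    """Multiplicative-update NMF: V (m x n) ~= W (m x k) @ H (k x n), all nonnegative."""
    m, n = V.shape
    W, H = rng.random((m, k)) + eps, rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update encodings
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis vectors
    return W, H

V = rng.random((64, 40))   # 64 "frequency bins" x 40 training frames (stand-in data)
W, H = nmf(V, k=10)        # 10 basis vectors, one per class in this toy setup

# Encoding step for a test frame: hold W fixed and solve for the weights h,
# which then serve as the 10-dimensional NMF feature vector fed to the SVM
v_test = rng.random((64, 1))
h = rng.random((10, 1))
for _ in range(200):
    h *= (W.T @ v_test) / (W.T @ W @ h + 1e-9)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative reconstruction error
```

In the actual system, W would be trained per genre on real NMF-LSM or NMF-BFV inputs, and the concatenated weight vectors would be classified with an RBF-kernel SVM.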

Keywords: mel-frequency cepstral coefficient (MFCC), music genre classification, non-negative matrix factorization (NMF), support vector machine (SVM)

Procedia PDF Downloads 267
235 A Second Chance to Live and Move: Lumbosacral Spinal Cord Ischemia-Infarction after Cardiac Arrest and the Artery of Adamkiewicz

Authors: Anna Demian, Levi Howard, L. Ng, Leslie Simon, Mark Dragon, A. Desai, Timothy Devlantes, W. David Freeman

Abstract:

Introduction: Out-of-hospital cardiac arrest (OHCA) carries a high mortality. For survivors, the most common complication is hypoxic-ischemic brain injury (HIBI). Rarely, lumbosacral and/or other spinal cord artery ischemia can occur due to anatomic variation and variable mean arterial pressure after the return of spontaneous circulation. We present the case of an OHCA survivor who later woke up with bilateral leg weakness and preserved sensation (ASIA grade B, L2 level). Methods: We describe the clinical, radiographic, and laboratory presentation, and a National Library of Medicine (NLM) search methodology characterizing the incidence/prevalence of this entity is discussed. A 70-year-old male, a longtime smoker and alcohol user, suddenly collapsed at a bar surrounded by friends, having complained of chest pain before collapsing. 911 was called. EMS arrived and found the patient in pulseless electrical activity (PEA); cardiopulmonary resuscitation (CPR) was initiated, the patient was intubated, and a LUCAS device was applied for continuous, high-quality CPR in the field. In the ED, central lines were placed, and thrombolysis was administered for a suspected pulmonary embolism (PE). The prolonged code lasted 90 minutes before the eventual return of spontaneous circulation. The patient was placed on epinephrine and norepinephrine drips to maintain blood pressure. An ECHO was performed and showed a “D-shaped” ventricle worrisome for PE, as well as an ejection fraction of around 30%. A CT with PE protocol was performed and confirmed bilateral PE. Results: The patient woke up 24 hours later, following commands, and was extubated. He was found to be paraplegic below L2 with preserved sensation, with hypotonia and areflexia consistent with spinal shock or anterior spinal cord syndrome. MRI of the thoracic and lumbar spine showed a spinal cord infarction at the level of the conus medullaris.
The patient was given IV steroids upon initial discovery of the cord infarct. An NLM search using “cardiac arrest” and “spinal cord infarction” revealed 57 results, including only 8 review articles. Risk factors include age, atherosclerotic disease, and intra-aortic balloon pump placement; anatomic variation of the artery of Adamkiewicz (AoA), along with existing atherosclerotic factors and low perfusion, is also a known risk factor. Conclusion: Acute paraplegia from anterior spinal cord infarction of the AoA territory after cardiac arrest is rare. Larger prospective, multicenter trials are needed to examine potential interventions such as hypothermia, lumbar drains (which are sometimes used in aortic surgery to reduce ischemia), and/or other neuroprotectants.

Keywords: cardiac arrest, spinal cord infarction, artery of Adamkiewicz, paraplegia

Procedia PDF Downloads 170