Search results for: Energy Plus software
764 Inverterless Grid Compatible Micro Turbine Generator
Authors: S. Ozeri, D. Shmilovitz
Abstract:
Micro-Turbine Generators (MTGs) are small-size power plants that consist of a high-speed gas turbine driving an electrical generator. MTGs may be fueled by either natural gas or kerosene and may also use sustainable and recycled green fuels such as biomass, landfill or digester gas. Typical ratings of MTGs range from 20 kW up to 200 kW. The primary use of MTGs is as backup for sensitive load sites such as hospitals, and they are also considered a feasible power source for Distributed Generation (DG), providing on-site generation in proximity to remote loads. MTGs have the compressor, the turbine, and the electrical generator mounted on a single shaft. For this reason, the electrical energy is generated at high frequency and is incompatible with the power grid. Therefore, MTGs must also contain a power conditioning unit to generate an AC voltage at the grid frequency. Presently, this power conditioning unit consists of a rectifier followed by a DC/AC inverter, both rated at the MTG's full power. The losses of the power conditioning unit account for some 3-5%. Moreover, the full-power processing stage is a bulky and costly piece of equipment that also lowers the overall system reliability. In this study, we propose a new type of power conditioning stage in which only a small fraction of the power is processed. A low-power converter is used only to program the rotor current (i.e., the excitation current, which is substantially lower). Thus, the MTG's output voltage is shaped to the desired amplitude and frequency by proper programming of the excitation current. The control is realized by causing the rotor current to track the electrical frequency (which is related to the shaft frequency) with a difference that is exactly equal to the line frequency. Since the phasor of the rotation speed and the phasor of the rotor magnetic field are multiplied, the spectrum of the MTG generator voltage contains the sum and the difference components. The desired difference component is at the line frequency (50/60 Hz), whereas the unwanted sum component is at about twice the electrical frequency of the stator. The unwanted high-frequency component can be filtered out by a low-pass filter, leaving only the low-frequency output. This approach allows elimination of the large power conditioning unit incorporated in conventional MTGs. Instead, a much smaller and cheaper fractional power stage can be used. The proposed technology is also applicable to other high-speed generator sets such as aircraft power units.
Keywords: gas turbine, inverter, power multiplier, distributed generation
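The frequency-mixing argument in this abstract can be written out in one line of trigonometry; the sketch below uses our own notation (not the authors'), with the stator electrical frequency, the programmed excitation frequency, and the target line frequency as the three symbols.

```latex
% Our notation: \omega_e = stator electrical angular frequency,
% \omega_x = programmed excitation frequency, \omega_L = line frequency.
\[
  e(t) \propto \cos(\omega_e t)\cos(\omega_x t)
  = \tfrac{1}{2}\cos\big((\omega_e-\omega_x)t\big)
  + \tfrac{1}{2}\cos\big((\omega_e+\omega_x)t\big).
\]
% Programming the tracking difference \omega_x = \omega_e - \omega_L gives
\[
  e(t) \propto \tfrac{1}{2}\cos(\omega_L t)
  + \tfrac{1}{2}\cos\big((2\omega_e-\omega_L)t\big),
\]
% where the second term, near twice the stator frequency, is the unwanted
% sum component removed by the low-pass filter, leaving the 50/60 Hz output.
```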
Procedia PDF Downloads 243
763 Geochemical Evaluation of Metal Content and Fluorescent Characterization of Dissolved Organic Matter in Lake Sediments
Authors: Fani Sakellariadou, Danae Antivachis
Abstract:
The purpose of this paper is to evaluate the environmental status of a coastal Mediterranean lake, named Koumoundourou, located on the northeastern coast of Elefsis Bay, in the western region of Attiki in Greece, 15 km from Athens. Preserved since ancient times, it holds important archaeological interest. Koumoundourou lake is also considered a valuable wetland accommodating abundant flora and fauna, with a variety of bird species including a few globally threatened ones. Furthermore, it is a heavily modified lake, affected by various anthropogenic pollution sources which supply industrial, urban and agricultural contaminants. The adjacent oil refineries and the military depot are the major pollution sources, contributing crude oil spills and leaks. Moreover, the lake receives a quantity of groundwater leachates from the major landfill of Athens. The environmental status of the lake results from the intensive land uses combined with the permeable lithology of the surrounding area and the existence of karstic springs which drain the calcareous mountains. Sediment samples were collected along the shoreline of the lake using a Van Veen stainless steel grab sampler. They were studied for the determination of the total metal content and the metal fractionation in geochemical phases, as well as the characterization of the dissolved organic matter (DOM). These constituents have a significant role in the ecological assessment of the lake. Metals may be responsible for harmful environmental impacts. Metal partitioning offers comprehensive information on the origin, mode of occurrence, biological and physicochemical availability, mobilization and transport of metals. Moreover, DOM has a multifunctional importance, interacting with inorganic and organic contaminants and leading to biogeochemical and ecological effects. The samples were digested using microwave heating with a suitable laboratory microwave unit. For the total metal content, the samples were treated with a mixture of strong acids. Then, a sequential extraction procedure was applied for the removal of the exchangeable, carbonate-hosted, reducible, organic/sulphide and residual fractions. Metal content was determined by ICP-MS (Perkin Elmer NexION 350D ICP mass spectrometer). Furthermore, the DOM was removed via a gentle extraction procedure and then characterized by fluorescence spectroscopy using a Perkin-Elmer LS 55 luminescence spectrophotometer equipped with the WinLab 4.00.02 software for data processing (Agilent, Cary Eclipse Fluorescence). One-dimensional emission, excitation, synchronous-scan excitation and total luminescence spectra were recorded for the classification of chromophoric units present in the aqueous extracts. Total metal concentrations were determined and compared with those of the Elefsis gulf sediments. Element partitioning showed the anthropogenic sources and the contaminant bioavailability. All fluorescence spectra, as well as humification indices, were evaluated in detail to find out the nature and origin of DOM. All the results were compared and interpreted to evaluate the environmental quality of Koumoundourou lake and the need for environmental management and protection.
Keywords: anthropogenic contaminant, dissolved organic matter, lake, metal, pollution
Procedia PDF Downloads 160
762 “Environmental-Friendly” and “People-Friendly” Project for a New North-East Italian Hospital
Authors: Emanuela Zilli, Antonella Ruffatto, Davide Bonaldo, Stefano Bevilacqua, Tommaso Caputo, Luisa Fontana, Carmelina Saraceno, Antonio Sturaroo, Teodoro Sava, Antonio Madia
Abstract:
The new hospital in Cittadella - ULSS 6 Euganea Health Trust, in the North-East of Italy (400 beds, project completion date in 2026), will partially take the place of the existing building. Interesting features have been suggested in order to design a modern, “environmental-friendly” and “people-friendly” building. Specific multidisciplinary meetings (involving stakeholders and professionals with different backgrounds) have been organized on a periodic basis in order to guarantee the appropriate implementation of logistic and organizational solutions related to eco-sustainability, integration with the context, and the concepts of “design for all” and “humanization of care.” The resulting building will be composed of organic shapes determined by the external environment (sun movement, climate, landscape, pre-existing buildings, roads) and the needs of the internal environment (areas of care and diagnostic-treatment paths reorganized with experience gained during the pandemic), with extensive use of renewable energy, solar panels, a 4th-generation heating system, and sanitised, maintainable surfaces. Particular attention is given to the quality of the staff areas, which include areas dedicated to psycho-physical well-being (relaxation points, yoga gym), study rooms, and a centralized conference room. Outdoor recreational spaces and gardens for music and watercolour therapy will be included; a tai-chi gym is dedicated to oncology patients. Integration in the urban and social context is emphasized through window placement toward the gardens (maternal-infant, mental health, and rehabilitation wards). Service areas such as dialysis, radiology, and the labs have views of the medieval walls, the symbol of the city’s history. The new building has been designed to pursue the maximum level of eco-sustainability, harmony with the environment, and integration with the historical, urban, and social context; the concept of humanization of care has been considered in all phases of the project management.
Keywords: environmental-friendly, humanization, eco-sustainability, new hospital
Procedia PDF Downloads 121
761 Investigation of Dry-Blanching and Freezing Methods of Fruits
Authors: Epameinondas Xanthakis, Erik Kaunisto, Alain Le-Bail, Lilia Ahrné
Abstract:
Fruits and vegetables are characterized as perishable food matrices due to their short shelf life, as several deterioration mechanisms are involved. Prior to common preservation methods like freezing or canning, fruits and vegetables are blanched in order to inactivate deteriorative enzymes. Both conventional blanching pretreatments and conventional freezing methods hide drawbacks behind their beneficial impacts on the preservation of those matrices. Conventional blanching methods may require long processing times and cause leaching of minerals and nutrients due to the contact with the warm water, which in turn leads to effluent production with a large BOD. An important issue of freezing technologies is the size of the formed ice crystals, which is critical for the final quality of the frozen food, as they can cause irreversible damage to the cellular structure and subsequently degrade the texture and the colour of the product. Herein, the developed microwave blanching methodology and the results regarding quality aspects and enzyme inactivation will be presented. Moreover, the heat transfer phenomena, mass balance, temperature distribution, and enzyme inactivation (such as pectin methyl esterase and ascorbic acid oxidase) of our microwave blanching approach will be evaluated based on measurements and computer modelling. The present work is part of the COLDμWAVE project, which aims at the development of an innovative, environmentally sustainable process for blanching and freezing of fruits and vegetables with improved textural and nutritional quality. In this context, COLDμWAVE will develop tailored equipment for MW blanching of vegetables that has very high energy efficiency and no water consumption. Furthermore, the next steps of this project regarding the development of innovative pathways in MW-assisted freezing to improve the quality of frozen vegetables, exploring in depth previous results acquired by the authors, will be presented. The application of an MW-assisted freezing process to fruits and vegetables is expected to lead to improved quality characteristics compared to conventional freezing. Acknowledgments: COLDμWAVE has received funding from the European Union’s Horizon 2020 research and innovation programme under Marie Sklodowska-Curie grant agreement No 660067.
Keywords: blanching, freezing, fruits, microwave blanching, microwave
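The enzyme-inactivation modelling mentioned above is conventionally described by first-order kinetics with an Arrhenius-type temperature dependence; the short sketch below illustrates that textbook approach under an assumed time-temperature profile. All parameter values are hypothetical placeholders, not COLDμWAVE results.

```python
import numpy as np

# First-order enzyme inactivation with an Arrhenius rate constant:
#   dA/dt = -k(T) * A,   k(T) = k_ref * exp(-Ea/R * (1/T - 1/T_ref))
# Parameter values below are hypothetical placeholders, not project data.
R = 8.314          # J/(mol K)
Ea = 180e3         # activation energy, J/mol (typical order for PME)
k_ref = 0.15       # 1/s at the reference temperature
T_ref = 343.15     # K (70 degC)

def residual_activity(times, temps):
    """Integrate A/A0 over a time-temperature profile (stepwise)."""
    A = 1.0
    for dt, T in zip(np.diff(times), temps[1:]):
        k = k_ref * np.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))
        A *= np.exp(-k * dt)
    return A

# Example: 120 s linear heat-up from 25 degC to 85 degC, then a 60 s hold.
t = np.linspace(0.0, 180.0, 181)
T = np.where(t < 120.0, 298.15 + (358.15 - 298.15) * t / 120.0, 358.15)
print(f"residual enzyme activity: {residual_activity(t, T):.3f}")
```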
Procedia PDF Downloads 268
760 Economic Analysis of a Carbon Abatement Technology
Authors: Hameed Rukayat Opeyemi, Pericles Pilidis Pagone Emmanuele, Agbadede Roupa, Allison Isaiah
Abstract:
Climate change represents one of the single most challenging problems facing the world today. According to the National Oceanic and Atmospheric Administration, atmospheric temperature has risen almost 25% since 1958, Arctic sea ice has shrunk 40% since 1959, and global sea levels have risen more than 5.5 cm since 1990. Power plants are the major culprits of GHG emission to the atmosphere. Several technologies have been proposed to reduce the amount of GHG emitted to the atmosphere from power plants, one of which is the less researched advanced zero emission power plant. Advanced zero emission power plants make use of a mixed conductive membrane (MCM) reactor, also known as an oxygen transfer membrane (OTM), for oxygen transfer. The MCM employs a membrane separation process. The membrane separation process was first introduced in 1899, when Walther Hermann Nernst investigated electric current between metals and solutions. He found that when a dense ceramic is heated, a current of oxygen molecules moves through it. In the bid to curb the amount of GHG emitted to the atmosphere, the membrane separation process was applied to the field of power engineering in the low carbon cycle known as the advanced zero emission power plant (AZEP) cycle. The AZEP cycle was originally invented by Norsk Hydro, Norway, and ABB Alstom Power (now known as Demag Delaval Industrial Turbomachinery AB), Sweden. The AZEP drew a lot of attention because of its ability to capture ~100% of CO2; it also boasts a 30-50% cost reduction compared to other carbon abatement technologies, its efficiency penalty is not as large as that of its counterparts, and it achieves almost zero NOx emissions due to very low nitrogen concentrations in the working fluid. Advanced zero emission power plants differ from a conventional gas turbine in that the combustor is substituted with the mixed conductive membrane reactor (MCM reactor). The MCM reactor is made up of the combustor, the low-temperature heat exchanger (LTHX, referred to by some authors as the air preheater), the mixed conductive membrane responsible for oxygen transfer, the high-temperature heat exchanger and, in some layouts, the bleed gas heat exchanger. Air is taken in by the compressor and compressed to a temperature of about 723 K and a pressure of 2 MPa. The membrane area needed for oxygen transfer is reduced by increasing the temperature of 90% of the air using the LTHX; the higher temperature also facilitates oxygen transfer through the membrane. The air stream enters the LTHX through the transition duct leading to its inlet. The temperature of the air stream is then increased to about 1150 K, depending on the design point specification of the plant and the efficiency of the heat exchanging system. The amount of oxygen transported through the membrane is directly proportional to the temperature of the air going through the membrane. The AZEP cycle was modelled in Fortran, and the economic analysis was conducted using Excel and MATLAB, followed by an optimization case study. Four possible layouts were considered: the simple bleed gas heat exchange layout (100% CO2 capture); the bleed gas heat exchanger layout with flue gas turbine (100% CO2 capture); the pre-expansion reheating (sequential burning) layout, AZEP 85% (85% CO2 capture); and the pre-expansion reheating (sequential burning) layout with flue gas turbine, AZEP 85% (85% CO2 capture).
This paper discusses a Monte Carlo risk analysis of these four possible layouts of the AZEP cycle.
Keywords: gas turbine, global warming, greenhouse gas, fossil fuel power plants
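As a rough illustration of what a Monte Carlo risk analysis of such a cycle involves, the sketch below samples uncertain economic inputs and propagates them to a net-present-value distribution. Every number and variable name is a hypothetical placeholder, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of Monte Carlo trials

# Hypothetical uncertain inputs (placeholders, not the paper's values).
fuel_price = rng.normal(6.0, 1.2, N)             # $/GJ
efficiency_penalty = rng.uniform(0.02, 0.08, N)  # fractional loss vs. reference
capex = rng.triangular(0.9e9, 1.0e9, 1.3e9, N)   # $, triangular distribution

base_output_gj = 2.5e7    # annual energy output, GJ (placeholder)
revenue_per_gj = 12.0     # $/GJ (placeholder)
years, discount = 25, 0.08

# Annual cash flow, then NPV via a constant-annuity discount factor.
annual_cash = (base_output_gj * (1 - efficiency_penalty) * revenue_per_gj
               - base_output_gj * fuel_price)
annuity = (1 - (1 + discount) ** -years) / discount
npv = annual_cash * annuity - capex

print(f"mean NPV: ${npv.mean():.3e}")
print(f"P(NPV < 0): {(npv < 0).mean():.2%}")
```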
Procedia PDF Downloads 397
759 Low Plastic Deformation Energy to Induce High Superficial Strain on AZ31 Magnesium Alloy Sheet
Authors: Emigdio Mendoza, Patricia Fernandez, Cristian Gomez
Abstract:
Magnesium alloys have generated great interest for several industrial applications because their high specific strength and low density make them a very attractive alternative for the manufacture of various components. However, their hexagonal crystal structure limits the deformation mechanisms available at room temperature, and likewise the forming alternatives for components. It is for this reason that severe plastic deformation processes have recently gained great relevance, because they allow high deformation rates to be applied that induce microstructural changes, where the deficiency in slip systems is compensated by crystallographic grain reorientation or crystal twinning. The present study reports a statistical analysis of process temperature, number of passes and shear angle with respect to the shear stress in the severe plastic deformation process denominated 'Equal Channel Angular Sheet Drawing' (ECASD), applied to the magnesium alloy AZ31B, through the Python Statsmodels libraries; additionally, a post-hoc range test is performed using the Tukey statistical test. Statistical results show that each variable has a p-value lower than 0.05, which allows comparing the average values of the shear stresses obtained, which are in the range of 7.37 MPa to 12.23 MPa, lower than those of other severe plastic deformation processes reported in the literature, considering a value of 157.53 MPa as the average creep stress for the AZ31B alloy. However, a higher stress level is required when the sheets are processed using a shear angle of 150°, due to the higher level of adjustment applied for the 150° shear die. Temperature and shear passes are important variables as well, but they have no significant impact on the level of stress applied during the ECASD process. In the processing of AZ31B magnesium alloy sheets, the ECASD technique is evidenced as a viable alternative for modifying the elasto-plastic properties of this alloy, promoting the weakening of the basal texture, which means a better response to deformation, whereby, during the manufacture of parts by drawing or stamping processes, the formation of cracks on the surface can be reduced, giving adequate mechanical performance.
Keywords: plastic deformation, strain, sheet drawing, magnesium
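A minimal sketch of the statistical workflow named above: an OLS-based ANOVA in Statsmodels on the three process factors, followed by a Tukey HSD post-hoc range test. The data frame and factor levels are invented for illustration, not the study's measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented example data: shear stress (MPa) vs. ECASD process factors.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "stress": rng.normal(10.0, 1.5, 90),
    "temperature": np.tile(["RT", "100C", "200C"], 30),
    "angle": np.repeat(["120deg", "135deg", "150deg"], 30),
    "passes": np.tile(np.repeat([1, 2, 3], 10), 3),
})

# Three-factor ANOVA on the fitted linear model.
model = ols("stress ~ C(temperature) + C(angle) + C(passes)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey HSD post-hoc range test on the shear-angle factor.
print(pairwise_tukeyhsd(df["stress"], df["angle"], alpha=0.05))
```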
Procedia PDF Downloads 115
758 Investigation for Pixel-Based Accelerated Aging of Large Area Picosecond Photo-Detectors
Authors: I. Tzoka, V. A. Chirayath, A. Brandt, J. Asaadi, Melvin J. Aviles, Stephen Clarke, Stefan Cwik, Michael R. Foley, Cole J. Hamel, Alexey Lyashenko, Michael J. Minot, Mark A. Popecki, Michael E. Stochaj, S. Shin
Abstract:
Micro-channel plate photo-multiplier tubes (MCP-PMTs) have become ubiquitous and are widely considered potential candidates for next-generation high energy physics experiments due to their picosecond timing resolution, ability to operate in strong magnetic fields, and low noise rates. A key factor that determines the applicability of MCP-PMTs is their lifetime, especially when they are used in high event rate experiments. We have developed a novel method for the investigation of the aging behavior of an MCP-PMT on an accelerated basis. The method involves exposing a localized region of the MCP-PMT to photons at a high repetition rate. This pixel-based method was inspired by earlier results showing that damage to the photocathode of the MCP-PMT occurs primarily at the site of light exposure and that the surrounding region undergoes minimal damage. One advantage of the pixel-based method is that it allows the dynamics of photocathode damage to be studied at multiple locations within the same MCP-PMT under different operating conditions. In this work, we use the pixel-based accelerated lifetime test to investigate the aging behavior of a 20 cm x 20 cm Large Area Picosecond Photo Detector (LAPPD), manufactured by INCOM Inc., at multiple locations within the same device under different operating conditions. We compare the aging behavior of the MCP-PMT obtained from the first lifetime test, conducted under high gain conditions, to the lifetime obtained at a different gain. Through this work, we aim to correlate the lifetime of the MCP-PMT with the rate of ion feedback, which is a function of the gain of each MCP and can also vary from point to point across a large-area (400 cm²) MCP. The tests were made possible by the uniqueness of the LAPPD design, which allows independent control of the gain of the chevron-stacked MCPs. We will further discuss the implications of our results for optimizing the operating conditions of the detector when used in high event rate experiments.
Keywords: electron multipliers (vacuum), LAPPD, lifetime, micro-channel plate photo-multiplier tubes, photoemission, time-of-flight
Procedia PDF Downloads 182
757 Motor Coordination and Body Mass Index in Primary School Children
Authors: Ingrid Ruzbarska, Martin Zvonar, Piotr Oleśniewicz, Julita Markiewicz-Patkowska, Krzysztof Widawski, Daniel Puciato
Abstract:
Obese children will probably become obese adults, consequently exposed to an increased risk of comorbidity and premature mortality. Body weight may be indirectly determined by the continuous development of coordination and motor skills. The level of motor skills and abilities is an important factor that promotes physical activity from early childhood. The aim of the study is to understand the internal relations between motor coordination abilities and the somatic development of prepubertal children, and to determine the effect of excess body weight on motor coordination by comparing the motor ability levels of children with different body mass index (BMI) values. The data were collected from 436 children aged 7-10 years, without health limitations, fully participating in school physical education classes. Body height was measured with portable stadiometers (Harpenden, Holtain Ltd.), and body mass with a digital scale (HN-286, Omron). Motor coordination was evaluated with the Kiphard-Schilling body coordination test (Körperkoordinationstest für Kinder). The Shapiro-Wilk test was used to verify the normality of the data distribution. The correlation analysis revealed a statistically significant negative association between dynamic balance and BMI, as well as between the motor quotient and BMI (p<0.01), for both boys and girls. The results showed no effect of gender on the difference in the observed trends. The analysis of variance proved statistically significant differences between normal-weight children and their overweight or obese counterparts. Coordination abilities probably play an important role in preventing or moderating the negative trajectory leading to childhood overweight and obesity. At this age, the development of coordination abilities should become a key strategy, targeted at long-term prevention of obesity and the promotion of an active lifestyle in adulthood. Motor performance is essential for implementing a healthy lifestyle already in childhood. Physical inactivity apparently results in motor deficits and a sedentary lifestyle in children, which may be accompanied by excess energy intake and overweight.
Keywords: childhood, KTK test, physical education, psychomotor competence
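A minimal sketch of the analysis pipeline described (a normality check, then a correlation between BMI and the motor quotient); the data here are simulated placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Invented example: BMI and motor quotient (MQ) scores for n = 436 children.
rng = np.random.default_rng(1)
bmi = rng.normal(17.5, 2.5, 436)
mq = 100 - 2.0 * (bmi - 17.5) + rng.normal(0, 8, 436)  # built-in negative trend

# Shapiro-Wilk normality check, then the appropriate correlation test.
w, p_norm = stats.shapiro(bmi)
if p_norm > 0.05:
    r, p = stats.pearsonr(bmi, mq)      # parametric, if data look normal
else:
    r, p = stats.spearmanr(bmi, mq)     # rank-based fallback
print(f"Shapiro-Wilk p={p_norm:.3f}, correlation r={r:.2f} (p={p:.4f})")
```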
Procedia PDF Downloads 345
756 Sustainable Valorization of Wine Production Waste: Unlocking the Potential of Grape Pomace and Lees in the Vinho Verde Region
Authors: Zlatina Genisheva, Pedro Ferreira-Santos, Margarida Soares, Cândida Vilarinho, Joana Carvalho
Abstract:
The wine industry produces significant quantities of waste, much of which remains underutilized as a potential raw material. Typically, this waste is either discarded in the fields or incinerated, leading to environmental concerns. By-products of wine production, like lees and grape pomace, are readily available at relatively low cost and hold promise as raw materials for biochemical conversion into valuable products. Reusing these waste materials is crucial, not only for reducing environmental impact but also for enhancing profitability. The Vinhos Verdes demarcated region, the largest wine-producing area in Portugal, has remained relatively stagnant over time. This project aims to offer an alternative income source for producers in the region while also expanding the limited existing research on this area. The main objective of this project is the study of the sustainable valorization of grape pomace and lees from the production of DOC Vinho Verde. Extraction tests were performed to obtain high-value compounds, targeting phenolic compounds from grape pomace and protein-rich extracts from lees. An environmentally friendly technique, microwave extraction, was used for this process. This method is not only efficient but also aligns with the principles of green chemistry, reducing the use of harmful solvents and minimizing energy consumption. The findings from this study have the potential to open new revenue streams for the region’s wine producers while promoting environmental sustainability. The optimal conditions for extracting proteins from lees involve the use of NaOH at 150 °C. Regardless of the solvent employed, the ideal temperature for obtaining extracts rich in polyphenol compounds and exhibiting strong antioxidant activity is also 150 °C. For grape pomace, extracts with a high concentration of polyphenols and significant antioxidant properties were obtained at 210 °C. However, the highest total tannin concentrations were achieved at 150 °C, while the maximum total flavonoid content was obtained at 170 °C.
Keywords: antioxidants, circular economy, polyphenol compounds, waste valorization
Procedia PDF Downloads 23
755 High Pressure Thermophysical Properties of Complex Mixtures Relevant to Liquefied Natural Gas (LNG) Processing
Authors: Saif Al Ghafri, Thomas Hughes, Armand Karimi, Kumarini Seneviratne, Jordan Oakley, Michael Johns, Eric F. May
Abstract:
Knowledge of the thermophysical properties of complex mixtures at extreme conditions of pressure and temperature has always been essential to the Liquefied Natural Gas (LNG) industry's evolution because of the tremendous technical challenges present at all stages in the supply chain, from production to liquefaction to transport. Each stage is designed using predictions of the mixture's properties, such as density, viscosity, surface tension, heat capacity and phase behaviour, as a function of temperature, pressure, and composition. Unfortunately, currently available models lead to equipment over-designs of 15% or more. To achieve better designs that work more effectively and/or over a wider range of conditions, new fundamental property data are essential, both to resolve discrepancies in our current predictive capabilities and to extend them to the higher-pressure conditions characteristic of many new gas fields. Furthermore, innovative experimental techniques are required to measure different thermophysical properties at high pressures and over a wide range of temperatures, including near the mixture's critical points, where gas and liquid become indistinguishable and most existing predictive fluid property models break down. In this work, we present a wide range of experimental measurements made for different binary and ternary mixtures relevant to LNG processing, with a particular focus on viscosity, surface tension, heat capacity, bubble-points and density. For this purpose, customized and specialized apparatus were designed and validated over the temperature range (200 to 423) K at pressures to 35 MPa. The mixtures studied were (CH4 + C3H8), (CH4 + C3H8 + CO2) and (CH4 + C3H8 + C7H16); in the last of these, the heptane content was up to 10 mol %. Viscosity was measured using a vibrating wire apparatus, while mixture densities were obtained by means of a high-pressure magnetic-suspension densimeter and an isochoric cell apparatus; the latter was also used to determine bubble-points. Surface tensions were measured using the capillary rise method in a visual cell, which also enabled the location of the mixture's critical point to be determined from observations of critical opalescence. Mixture heat capacities were measured using a customised high-pressure differential scanning calorimeter (DSC). The combined standard relative uncertainties were less than 0.3% for density, 2% for viscosity, 3% for heat capacity and 3% for surface tension. The extensive experimental data gathered in this work were compared with a variety of advanced engineering models frequently used for predicting the thermophysical properties of mixtures relevant to LNG processing. In many cases, the discrepancies between the predictions of different engineering models for these mixtures were large, and the high-quality data allowed erroneous but often widely-used models to be identified. The data enable the development of new or improved models to be implemented in process simulation software, so that the fluid properties needed for equipment and process design can be predicted reliably. This in turn will enable reduced capital and operational expenditure by the LNG industry. The current work also aided the community of scientists working to advance theoretical descriptions of fluid properties by allowing deficiencies in theoretical descriptions and calculations to be identified.
Keywords: LNG, thermophysical, viscosity, density, surface tension, heat capacity, bubble points, models
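To illustrate how measured densities might be compared against a reference model implemented in property software, the sketch below queries a methane + propane mixture density via CoolProp (an open-source property library chosen here purely for illustration; it is not named in the abstract) and computes the relative deviation from a hypothetical measured value.

```python
from CoolProp.CoolProp import PropsSI

# Hypothetical measured density for a CH4 + C3H8 mixture (placeholder value).
T, P = 300.0, 10e6                      # K, Pa
rho_measured = 95.0                     # kg/m^3, invented for illustration

# Reference-model prediction for a 90/10 mol% methane/propane mixture.
rho_model = PropsSI("D", "T", T, "P", P, "Methane[0.9]&Propane[0.1]")

deviation = 100.0 * (rho_measured - rho_model) / rho_model
print(f"model density: {rho_model:.2f} kg/m^3, deviation: {deviation:+.2f}%")
```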
Procedia PDF Downloads 275
754 Accountability of Artificial Intelligence: An Analysis Using Edgar Morin’s Complex Thought
Authors: Sylvie Michel, Sylvie Gerbaix, Marc Bidan
Abstract:
Can artificial intelligence (AI) be held accountable for its detrimental impacts? This question gains heightened relevance given AI's pervasive reach across various domains, magnifying its power and potential. The expanding influence of AI raises fundamental ethical inquiries, primarily centering on biases, responsibility, and transparency. This encompasses discriminatory biases arising from algorithmic criteria or data, accidents attributed to autonomous vehicles or other systems, and the imperative of transparent decision-making. This article aims to stimulate reflection on AI accountability, denoting the necessity to elucidate the effects it generates. Accountability comprises two integral aspects: adherence to legal and ethical standards, and the imperative to elucidate the underlying operational rationale. The objective is to initiate a reflection on the obstacles to this "accountability," facing the challenges posed by the complexity of artificial intelligence systems and their effects. The first contribution is to point out the challenges posed by the complexity of AI, with accountability fractured among a myriad of human and non-human actors, such as software and equipment, which ultimately contribute to the decisions taken and are multiplied in the case of AI. Accountability faces three challenges resulting from the complexity of the ethical issues combined with the complexity of AI. The challenge of the non-neutrality of algorithmic systems, as fully ethically non-neutral actors, is put forward by a revealing ethics approach that calls for assigning responsibilities to these systems. The challenge of the dilution of responsibility is induced by the multiplicity of and distance between the actors. Thus, a dilution of responsibility is induced by a split in decision-making between developers, who feel they fulfill their duty by strictly respecting the requests they receive, and management, which does not consider itself responsible for technology-related flaws. Accountability is confronted with the challenge of the transparency of complex and scalable algorithmic systems, non-human actors self-learning via big data. A second contribution involves leveraging E. Morin's principles, providing a framework to grasp the multifaceted ethical dilemmas and subsequently paving the way for establishing accountability in AI. When addressing the ethical challenge of biases, the "hologrammatic" principle underscores the imperative of acknowledging that algorithmic systems are not ethically neutral but inherently imbued with the values and biases of their creators and society. The "dialogic" principle advocates for the responsible consideration of ethical dilemmas, encouraging the integration of complementary and contradictory elements in solutions from the very inception of the design phase. The principle of organizational recursiveness, akin to the "transparency" of the system, promotes a systemic analysis to account for the induced effects and guides the incorporation of modifications into the system to rectify its drifts. In conclusion, this contribution serves as a starting point for contemplating the accountability of "artificial intelligence" systems despite the evident ethical implications and potential deviations.
Edgar Morin's principles, providing a lens to contemplate this complexity, offer valuable perspectives to address these challenges concerning accountability.
Keywords: accountability, artificial intelligence, complexity, ethics, explainability, transparency, Edgar Morin
Procedia PDF Downloads 67
753 Switching Studies on Ge15In5Te56Ag24 Thin Films
Authors: Diptoshi Roy, G. Sreevidya Varma, S. Asokan, Chandasree Das
Abstract:
Germanium telluride based quaternary thin-film switching devices with composition Ge15In5Te56Ag24 have been deposited in sandwich geometry on glass substrates with aluminum as the top and bottom electrodes. The bulk glass of the said composition is prepared by the melt-quenching technique. In this technique, appropriate quantities of high-purity elements are taken in a quartz ampoule and sealed under a vacuum of 10⁻⁵ mbar. The ampoule is then rotated in a horizontal rotary furnace for 36 hours to ensure homogeneity of the melt. After that, the ampoule is quenched into a mixture of ice water and NaOH to obtain the bulk ingot of the sample. The sample is then coated on a glass substrate using the flash evaporation technique at a vacuum level of 10⁻⁶ mbar. The XRD report reveals the amorphous nature of the thin-film sample, and energy-dispersive X-ray analysis (EDAX) confirms that the film retains the same chemical composition as the base sample. The electrical switching behavior of the device is studied with the help of a Keithley 2410C source-measure unit interfaced with LabVIEW 7 (National Instruments). Switching studies, mainly the SET operation (changing the state of the material from amorphous to crystalline), are conducted on the thin-film form of the sample. The device is found to manifest memory switching, as it remains 'ON' even after the removal of the electric field. It is also found that the amorphous Ge15In5Te56Ag24 thin film exhibits clean memory-type electrical switching behavior, which is justified by the absence of fluctuations in the I-V characteristics. The I-V characteristic also reveals that switching is fast in this sample, as no data points could be seen in the negative resistance region during the transition to the ON state; this leads to the conclusion of a fast phase change during the SET process. Scanning electron microscopy (SEM) studies performed on the switched Ge15In5Te56Ag24 sample have shown morphological changes at the site of switching, which can be explained by the formation of a conducting crystalline channel in the device as it switches from the high-resistance to the low-resistance state. From these studies, it can be concluded that the material may find application in fast-switching non-volatile phase change memory (PCM) devices.
Keywords: chalcogenides, vapor deposition, electrical switching, PCM
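A minimal sketch of how such an I-V SET sweep might be scripted. It assumes a Python/PyVISA setup with standard Keithley 2400-series SCPI commands rather than the authors' actual LabVIEW program; the VISA address, compliance current and sweep limits are placeholders.

```python
import pyvisa

# Placeholder VISA address; compliance and sweep values are illustrative only.
rm = pyvisa.ResourceManager()
smu = rm.open_resource("GPIB0::24::INSTR")

smu.write("*RST")
smu.write(":SOUR:FUNC VOLT")          # source voltage, measure current
smu.write(":SENS:FUNC 'CURR'")
smu.write(":SENS:CURR:PROT 1e-3")     # 1 mA compliance to protect the film
smu.write(":OUTP ON")

iv = []
for mv in range(0, 5001, 50):         # 0 -> 5 V in 50 mV steps
    v = mv / 1000.0
    smu.write(f":SOUR:VOLT {v}")
    current = float(smu.query(":READ?").split(",")[1])
    iv.append((v, current))
    if current >= 0.9e-3:             # near compliance: device has SET
        print(f"SET transition near {v:.2f} V")
        break

smu.write(":OUTP OFF")
smu.close()
```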
Procedia PDF Downloads 378
752 The Infiltration Interface Structure of Suburban Landscape Forms in Bimen Township, Anji, Zhejiang Province, China
Abstract:
Coordinating and promoting urban and rural development has been a new round of institutional change in Zhejiang province since 2004. This plan was fully implemented, showing that the isolation between urban and rural areas had gradually diminished. Little by little, an infiltration interface that is dynamic, flexible and interactive is formed, and this morphological structure starts to appear in the landscape forms of the surrounding villages. In order to study the specific function and formation of this structure in the context of the industrial revolution, Bimen village, located on the interface between Anji Township, Huzhou and Yuhang District, Hangzhou, is taken as the case. Anji Township lies in the cross area between the Yangtze River delta economic circle and the innovation center in Hangzhou. Awarded the title of 'Chinese beautiful village', Bimen has witnessed the growing process of infiltration in ecology, economy, technology and culture on the interface. With this opportunity, Bimen village presents internal reformation to adapt to the energy exchange with urban areas. In the research, the reformation consists in adjusting the industrial structure, upgrading the local special bamboo crafts, releasing space for activities, and establishing infrastructure on the interface. The characteristic of the interface is elasticity, achieved by introducing an Internet platform using the 'O2O' agriculture method to connect cities and farmlands. There is a platform of this kind in Bimen named 'Xiao Mei'. 'Xiao' in Chinese means small; 'Mei' means beautiful, which indicates the method to refine the landscape form. It turns out that the new agriculture mode will strengthen the interface by orienting the third-party platform upon the old dynamic basis and will bring new vitality to economic development in Bimen village. The research concludes with the opportunities and challenges generated by the evolution of the infiltration interface. It also proposes strategies for adapting organically to the urbanization process. Finally, it demonstrates what will happen by increasing flexibility in the landscape forms of the suburbs of Bimen village.
Keywords: Bimen village, infiltration interface, flexibility, suburban landscape form
Procedia PDF Downloads 380
751 The Need for Automation in the Domestic Food Processing Sector and its Impact
Authors: Shantam Gupta
Abstract:
The objective of this study is to address the critical need for automation in the domestic food processing sector and to study its impact. Food is one of the most basic physiological needs, essential for the survival of a living being. Some organisms have the capacity to prepare their own food (like most plants) and hence are designated as primary food producers; those who depend on these primary food producers for food form the primary consumers' class (herbivores). Some of the organisms that feed on these primary consumers are the secondary food consumers (carnivores). There is a third class of consumers, called tertiary or apex food consumers, that feed on both the primary and secondary food consumers. Humans form an essential part of the apex predators and are generally at the top of the food chain. Still, a closer examination of the food habits of the modern human, i.e. Homo sapiens, reveals that humans depend on other individuals for preparing their own food. The old notion of eating raw food is long gone, and food processing has become deeply entrenched in the lives of modern humans. This has led to an increase in dependence on other individuals for 'processing' the food before it can actually be consumed, and to a further shift of humans in the classification of the food chain of consumers. The effects of this shift are systematically investigated in this paper. The processing of food has a direct impact on the economy of the individual (consumer). Also, most individuals depend on other processing individuals for the preparation of food. This dependency establishes a vital link in the food web which, when altered, can adversely affect the food web and have dire consequences for the health of the individual. This study investigates the challenges arising from this dependency and the impact of food processing on the economy of the individual. A comparison of industrial food processing and processing at domestic platforms (households and restaurants) has been made to provide an idea of the present scenario of automation in the food processing sector. A lot of time and energy is also consumed while processing food at home for consumption, and the high frequency of meals (greater than two a day) makes it even more laborious. Through the medium of this study, a pressing need for the development of an automatic cooking machine is proposed, with a mission to reduce the inter-dependency and the human effort required for the preparation of food (by automation of the food preparation process) and to make individuals more self-reliant. The impact of the development of this product is also discussed in depth. Assumption used: the individuals who process food also consume the food that they produce (they are also termed 'independent' or 'self-reliant' modern human beings).
Keywords: automation, food processing, impact on economy, processing individual
Procedia PDF Downloads 472
750 Design and Construction of a Home-Based, Patient-Led, Therapeutic, Post-Stroke Recovery System Using Iterative Learning Control
Authors: Marco Frieslaar, Bing Chu, Eric Rogers
Abstract:
Stroke is a devastating illness that is the second biggest cause of death in the world (after heart disease). Where it does not kill, it leaves survivors with debilitating sensory and physical impairments that not only seriously harm their quality of life but also cause a high incidence of severe depression. It is widely accepted that early intervention is essential for recovery, but current rehabilitation techniques largely favor hospital-based therapies, which have restricted access, require expensive and specialist equipment, and tend to side-step the emotional challenges. In addition, there is insufficient funding available to provide the long-term assistance that is required. As a consequence, recovery rates are poor. The relatively unexplored solution is to develop therapies that can be harnessed in the home and are built from technologies that already exist in everyday life. This would empower individuals to take control of their own improvement and provide choice in terms of when and where they feel best able to undertake their own healing. This research seeks to identify how effective post-stroke rehabilitation therapy can be applied to upper limb mobility within the physical context of a home rather than a hospital. This is being achieved through the design and construction of an automation scheme, based on iterative learning control and the Riener muscle model, which has the ability to adapt to the user, react to their level of fatigue, and provide tangible physical recovery. It utilizes a SMART phone and laptop to construct an iterative learning control (ILC) system that monitors upper arm movement in three dimensions as a series of exercises is undertaken. The equipment generates functional electrical stimulation to assist in muscle activation and thus improve directional accuracy. In addition, it monitors speed, accuracy, areas of motion weakness and similar parameters to create a performance index that can be compared over time and extrapolated to establish an independent and objective assessment scheme, plus an approximate estimate of the predicted final outcome. To further extend its assessment capabilities, nerve conduction velocity readings are taken by the software between the shoulder and hand muscles. These are used to measure the speed of neuron signal transfer along the arm and, over time, to obtain an online indication of regeneration levels. This will show whether or not sufficient training intensity is being achieved, even before perceivable movement dexterity is observed. The device also provides the option to connect to other users via the internet, so that the patient can avoid feelings of isolation and undertake movement exercises together with others in a similar position. This should create benefits not only for the encouragement of rehabilitation participation but also for a potential emotional support network. It is intended that this approach will extend the availability of stroke recovery options, enable ease of access at a low cost, reduce susceptibility to depression and, through these endeavors, enhance the overall recovery success rate.
Keywords: home-based therapy, iterative learning control, Riener muscle model, SMART phone, stroke rehabilitation
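A minimal sketch of the iterative learning control idea at the heart of the system: over repeated trials of the same exercise, the stimulation input is updated from the previous trial's tracking error. The toy plant and learning gain below are invented for illustration and stand in for the Riener muscle model used in the actual device.

```python
import numpy as np

# Toy discrete plant standing in for the stimulated arm dynamics (invented).
def plant(u):
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = 0.9 * y[k - 1] + 0.1 * u[k - 1]
    return y

T = 100
reference = np.sin(np.linspace(0, np.pi, T))  # desired reach trajectory
u = np.zeros(T)                               # stimulation input, trial 0
gain = 0.8                                    # ILC learning gain (invented)

# ILC update law: u_{j+1}(k) = u_j(k) + gain * e_j(k+1)
# (one-step-ahead error, matching the plant's one-step input delay)
for trial in range(30):
    e = reference - plant(u)
    u[:-1] += gain * e[1:]
    print(f"trial {trial:2d}: RMS error = {np.sqrt(np.mean(e**2)):.4f}")
```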
Procedia PDF Downloads 267
749 The Neuroscience Dimension of Juvenile Law Effectuates a Comprehensive Treatment of Youth in the Criminal System
Authors: Khushboo Shah
Abstract:
Categorical bans on the death penalty and life-without-parole sentences for juvenile offenders in a growing number of countries have established a new era in juvenile jurisprudence. This has been brought about by the integration of the growing knowledge in cognitive neuroscience and an appreciation of the inherent differences between adults and adolescents over the last ten years. This evolving understanding of being a child in the criminal system can be aptly reflected through policies that incorporate the mitigating traits of youth. First, the presentation will delineate the relevant structures in cognitive neuroscience, focusing in particular on the prefrontal cortex, the amygdala, and the basal ganglia. These key anatomical structures in the brain are linked to three mitigating adolescent traits—an underdeveloped sense of responsibility, an increased vulnerability to negative influences, and transitory personality traits—that establish why juveniles have a lessened culpability. The discussion will delve into the details of how an underdeveloped prefrontal cortex results in the heightened emotional angst, high energy and risky behavior characteristic of the adolescent period, and how the amygdala, the emotional center of the brain, governs emotional expression, which explains why teens are susceptible to negative influences. Based on this greater understanding, policies must adequately reflect adolescent physiology and psychology in the criminal system. However, it is important to ensure that these views are appropriately weighted while considering the jurisprudence for the treatment of children in the law. To ensure this balance is appropriately struck, policies must incorporate the distinctive traits of youth in sentencing and legal considerations, and yet refrain from the potential fallacy of absolving a juvenile offender of guilt and culpability. Accordingly, three policies will demonstrate how these results can be achieved: (1) eliminate the housing of juvenile offenders in the adult prison system, (2) mandate fitness hearings for all transfers of juveniles to adult criminal court, and (3) use the post-disposition review as a type of rehabilitation method for juvenile offenders. Ultimately, this interdisciplinary approach of science and law allows for a better understanding of adolescent psychological and social functioning and can effectuate better legal outcomes for juveniles tried as adults.
Keywords: criminal law, juvenile justice, interdisciplinary, neuroscience
Procedia PDF Downloads 331
748 Aerosol Radiative Forcing Over Indian Subcontinent for 2000-2021 Using Satellite Observations
Authors: Shreya Srivastava, Sushovan Ghosh, Sagnik Dey
Abstract:
Aerosols directly affect Earth's radiation budget by scattering and absorbing incoming solar radiation and outgoing terrestrial radiation. While the uncertainty in aerosol radiative forcing (ARF) has decreased over the years, it is still higher than that of greenhouse gas forcing, particularly in the South Asian region, due to the high heterogeneity in aerosol chemical properties. Understanding the spatio-temporal heterogeneity of aerosol composition is critical to improving climate prediction. Studies using satellite data, in-situ and aircraft measurements, and models have investigated the spatio-temporal variability of aerosol characteristics. In this study, we have taken aerosol data from the Multi-angle Imaging SpectroRadiometer (MISR) level-2 version 23 aerosol product, retrieved at 4.4 km resolution, and radiation data from the Clouds and the Earth's Radiant Energy System (CERES, spatial resolution 1° x 1°) for 21 years (2000-2021) over the Indian subcontinent. The MISR aerosol product includes size- and shape-segregated aerosol optical depth (AOD), Angstrom exponent (AE), and single scattering albedo (SSA). Additionally, 74 aerosol mixtures are included in the version 23 data, which are used for aerosol speciation. We have seasonally mapped aerosol optical and microphysical properties from MISR for India at quarter-degree resolution. Results show strong spatio-temporal variability, with consistently higher AOD values over the Indo-Gangetic Plain (IGP). The contribution of small-size particles is higher throughout the year, especially during winter months. SSA is found to be overestimated where absorbing particles are present. The climatological map of shortwave (SW) ARF at the top of the atmosphere (TOA) shows strong cooling except in a few places (values ranging from +2.5 to -22.5). Cooling due to aerosols is higher in the absence of clouds. Higher negative values of ARF are found over the IGP region, given the high aerosol concentration above the region. Surface ARF values are negative everywhere in our study domain, with higher magnitudes in clear conditions. The results correlate strongly with AOD from MISR and ARF from CERES.
Keywords: aerosol radiative forcing (ARF), aerosol composition, single scattering albedo (SSA), CERES
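A sketch of the kind of seasonal aggregation described above (collapsing 21 years of gridded retrievals into quarter-degree seasonal climatologies). The file and variable names are placeholders, assuming the MISR retrievals have already been collated into a NetCDF time series.

```python
import xarray as xr

# Placeholder file/variable names; assumes MISR AOD retrievals already
# collated onto a 0.25-degree lat/lon grid with a time dimension.
ds = xr.open_dataset("misr_aod_india_2000_2021.nc")

# Seasonal climatology: mean AOD per season (DJF, MAM, JJA, SON).
seasonal_aod = ds["aod"].groupby("time.season").mean("time")

# Example: winter map and an IGP box average (box bounds are illustrative).
djf = seasonal_aod.sel(season="DJF")
igp = djf.sel(lat=slice(24, 31), lon=slice(74, 89)).mean()
print(f"DJF mean AOD over the IGP box: {float(igp):.2f}")
```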
Procedia PDF Downloads 57
747 Integrating the Modbus SCADA Communication Protocol with Elliptic Curve Cryptography
Authors: Despoina Chochtoula, Aristidis Ilias, Yannis Stamatiou
Abstract:
Modbus is a protocol that enables communication among devices connected to the same network. This protocol is often deployed to connect sensor and monitoring units to central supervisory servers in Supervisory Control and Data Acquisition (SCADA) systems. These systems monitor critical infrastructures, such as factories, power generation stations, nuclear power reactors, etc., in order to detect malfunctions and trigger alerts and corrective actions. However, due to their criticality, SCADA systems are vulnerable to attacks that range from simple eavesdropping on operation parameters, exchanged messages, and valuable infrastructure information to malicious modification of vital infrastructure data towards the infliction of damage. Thus, the SCADA research community has been active in strengthening SCADA systems with suitable data protection mechanisms based, to a large extent, on cryptographic methods for data encryption, device authentication, and message integrity protection. However, due to the limited computation power of many SCADA sensor and embedded devices, the usual public key cryptographic methods are not appropriate due to their high computational requirements. As an alternative, Elliptic Curve Cryptography has been proposed, which requires smaller key sizes and, thus, less demanding cryptographic operations. Until now, however, no such implementation has been proposed in the SCADA literature, to the best of our knowledge. In order to fill this gap, our methodology focused on integrating Modbus, a frequently used SCADA communication protocol, with elliptic curve based cryptography, and on developing a server/client application to demonstrate the proof of concept. For the implementation, we deployed two C language libraries, which were suitably modified in order to be successfully integrated: libmodbus (https://github.com/stephane/libmodbus) and ecc-lib (https://www.ceid.upatras.gr/webpages/faculty/zaro/software/ecc-lib/). The first library provides a C implementation of the Modbus/TCP protocol, while the second offers the functionality to develop cryptographic protocols based on Elliptic Curve Cryptography. These two libraries were combined, after suitable modifications and enhancements, to produce a modified version of the Modbus/TCP protocol focusing on the security of the data exchanged among the devices and the supervisory servers. The mechanisms we implemented include key generation, key exchange/sharing, message authentication, data integrity checking, and encryption/decryption of data. The key generation and key exchange protocols were implemented using Elliptic Curve Cryptography primitives. The keys established by each device are saved in its local memory, are retained during the whole communication session, and are used to encrypt and decrypt exchanged messages as well as to certify entities and the integrity of the messages. Finally, the modified library was compiled for the Android environment in order to run the server application as an Android app. The client program runs on a regular computer. The communication between these two entities is an example of the successful establishment of an Elliptic Curve Cryptography based, secure Modbus wireless communication session between a portable device acting as a supervisor station and a monitoring computer.
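The implementation described is in C on libmodbus and ecc-lib; since those modified APIs are not shown here, the sketch below illustrates the same session pattern (elliptic-curve key agreement, then authenticated encryption of a Modbus payload) in Python with the cryptography package, purely as an assumption-labeled analogy.

```python
# Illustrative analogy only: ECDH key agreement + AEAD protection of a
# Modbus/TCP payload. The authors' actual code uses libmodbus and ecc-lib in C.
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each endpoint generates an EC key pair and exchanges public keys.
server_key = ec.generate_private_key(ec.SECP256R1())
client_key = ec.generate_private_key(ec.SECP256R1())

# Shared secret via ECDH, then a session key via HKDF.
shared = client_key.exchange(ec.ECDH(), server_key.public_key())
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"modbus-session").derive(shared)

# Encrypt an example Modbus request (function 0x03: read holding registers).
aead = AESGCM(session_key)
nonce = os.urandom(12)
modbus_pdu = bytes([0x03, 0x00, 0x00, 0x00, 0x0A])  # read 10 registers at 0
ciphertext = aead.encrypt(nonce, modbus_pdu, b"unit-1")  # AAD binds identity

# The receiver derives the same key from its own ECDH run and decrypts;
# any tampering with the ciphertext or AAD makes decryption fail.
assert aead.decrypt(nonce, ciphertext, b"unit-1") == modbus_pdu
```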
Our first performance measurements are also very promising and demonstrate the feasibility of embedding Elliptic Curve Cryptography into SCADA systems, filling a gap in the relevant scientific literature.
Keywords: elliptic curve cryptography, ICT security, modbus protocol, SCADA, TCP/IP protocol
Procedia PDF Downloads 281
746 Microplastic Concentrations and Fluxes in Urban Compartments: A Systemic Approach at the Scale of the Paris Megacity
Authors: Rachid Dris, Robin Treilles, Max Beaurepaire, Minh Trang Nguyen, Sam Azimi, Vincent Rocher, Johnny Gasperi, Bruno Tassin
Abstract:
Microplastic sources and fluxes in urban catchments are only poorly studied. Most often, the approaches taken focus on a single source and only describe the contamination levels and types (shape, size, polymers). In order to gain improved knowledge of microplastic inputs at urban scales, estimating and comparing the various fluxes is necessary. The Laboratoire Eau, Environnement et Systèmes Urbains (LEESU), the Laboratoire Eau Environnement (LEE) and the SIAAP (Service public de l'assainissement francilien) initiated several projects to investigate different urban sources and flows of microplastics. A systemic approach is undertaken at the scale of the Paris megacity, and several compartments are considered, including atmospheric fallout, wastewater treatment plants, runoff and combined sewer overflows. These investigations are carried out within the Limnoplast and OPUR projects. Atmospheric fallout was sampled over consecutive periods of 2 to 3 weeks with a stainless-steel funnel. Both wet and dry periods were considered. Different treatment steps were sampled in two wastewater treatment plants of the SIAAP (Seine-Amont for activated sludge and Seine-Centre for biofiltration), including sludge samples. Microplastics were also investigated in combined sewer overflows as well as in stormwater at the outlet of a suburban catchment (Sucy-en-Brie, France) during four rain events. Samples are treated using hydrogen peroxide digestion (H₂O₂ 30 %) in order to reduce organic material. Microplastics are then extracted from the samples with a density separation step using NaI (d=1.6 g.cm⁻³). Samples are filtered on metallic filters with a porosity of 14 µm between steps to separate them from the solutions (H₂O₂ and NaI). The last filtration is carried out on alumina filters. Infrared mapping analysis (using a micro-FTIR with an MCT detector) is performed on each alumina filter. The resulting maps are analyzed using the microplastic analysis software siMPle, developed by Aalborg University, Denmark and the Alfred Wegener Institute, Germany. Blanks are systematically carried out to account for sample contamination. This presentation aims at synthesizing the data found in the various projects. In order to carry out a systemic approach and compare the various inputs, all the data were converted into annual microplastic fluxes (number of microplastics per year) and extrapolated to the Parisian agglomeration. PP, PE and alkyds are the most prevalent polymers found in stormwater samples. Rain intensity and microplastic concentrations did not show any clear correlation. Considering the runoff volumes and the impervious surface area of the studied catchment, a flux of 4×10⁷ to 9×10⁷ MPs·yr⁻¹·ha⁻¹ was estimated. Samples from wastewater treatment plants and atmospheric fallout are currently being analyzed in order to finalize this assessment. The representativeness of such samplings and the uncertainties related to the extrapolations will be discussed, and gaps in knowledge will be identified. The data provided by such an approach will help prioritize future research as well as policy efforts.
Keywords: microplastics, atmosphere, wastewater, urban runoff, Paris megacity, urban waters
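To make the flux conversion above concrete, this small sketch shows the standard concentration × runoff-volume ÷ drained-area arithmetic with invented numbers chosen only to land in the reported order of magnitude; the actual concentrations and volumes are not given in the abstract.

```python
# Invented illustrative inputs, chosen only to land in the reported
# 4e7-9e7 MPs/yr/ha range; the abstract does not give these values.
mp_concentration = 40.0        # microplastics per litre of runoff
annual_runoff_m3 = 150_000.0   # m^3 of runoff per year from the catchment
impervious_area_ha = 100.0     # drained impervious surface, hectares

litres = annual_runoff_m3 * 1000.0
annual_flux = mp_concentration * litres / impervious_area_ha
print(f"flux: {annual_flux:.1e} MPs/yr/ha")   # -> 6.0e+07
```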
Procedia PDF Downloads 183
745 Quantitative Polymerase Chain Reaction Analysis of Phytoplankton Composition and Abundance to Assess Eutrophication: A Multi-Year Study in Twelve Large Rivers across the United States
Authors: Chiqian Zhang, Kyle D. McIntosh, Nathan Sienkiewicz, Ian Struewing, Erin A. Stelzer, Jennifer L. Graham, Jingrang Lu
Abstract:
Phytoplankton plays an essential role in freshwater aquatic ecosystems as the primary group synthesizing organic carbon and providing food sources and energy to ecosystems. Therefore, the identification and quantification of phytoplankton are important for estimating and assessing ecosystem productivity (carbon fixation), water quality, and eutrophication. Microscopy is the current gold standard for identifying and quantifying phytoplankton composition and abundance. However, microscopic analysis of phytoplankton is time-consuming, has a low sample throughput, and requires deep knowledge of and rich experience in microbial morphology to implement. To improve this situation, quantitative polymerase chain reaction (qPCR) was considered for phytoplankton identification and quantification. Using qPCR to assess phytoplankton composition and abundance, however, has not been comprehensively evaluated. This study focused on: 1) conducting a comprehensive performance comparison of qPCR and microscopy techniques in identifying and quantifying phytoplankton and 2) examining the use of qPCR as a tool for assessing eutrophication. Twelve large rivers located throughout the United States were evaluated using data collected from 2017 to 2019 to understand the relation between qPCR-based phytoplankton abundance and eutrophication. This study revealed that temporal variation of phytoplankton abundance in the twelve rivers was limited within years (from late spring to late fall) and among years (2017, 2018, and 2019). Midcontinent rivers had moderately greater phytoplankton abundance than eastern and western rivers, presumably because midcontinent rivers were more eutrophic. The study also showed that qPCR- and microscope-determined phytoplankton abundance had a significant positive linear correlation (adjusted R² = 0.772, p < 0.001). In addition, phytoplankton abundance assessed via qPCR showed promise as an indicator of the eutrophication status of those rivers, with oligotrophic rivers having low phytoplankton abundance and eutrophic rivers having relatively high phytoplankton abundance. This study demonstrated that qPCR could serve as an alternative tool to traditional microscopy for phytoplankton quantification and eutrophication assessment in freshwater rivers.
Keywords: phytoplankton, eutrophication, river, qPCR, microscopy, spatiotemporal variation
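A minimal sketch of the kind of linear regression behind the reported correlation, run on synthetic stand-in data (the study's actual abundance measurements are not reproduced here, and the assumed log-scale units are illustrative):

```python
# Correlating qPCR-based and microscopy-based phytoplankton abundance
# with an ordinary least-squares fit and adjusted R^2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
qpcr = rng.uniform(3, 7, 40)                  # log10(gene copies / mL), assumed units
micro = 0.9 * qpcr + rng.normal(0, 0.4, 40)   # log10(cells / mL), synthetic

res = stats.linregress(qpcr, micro)
n, p = len(qpcr), 1                            # n samples, one predictor
r2 = res.rvalue ** 2
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)  # adjusted R^2
print(f"R2 = {r2:.3f}, adjusted R2 = {adj_r2:.3f}, p = {res.pvalue:.2e}")
```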
744 Utilization of Informatics to Transform Clinical Data into a Simplified Reporting System to Examine the Analgesic Prescribing Practices of a Single Urban Hospital’s Emergency Department
Authors: Rubaiat S. Ahmed, Jemer Garrido, Sergey M. Motov
Abstract:
Clinical informatics (CI) enables the transformation of data into a systematic organization that improves the quality of care and the generation of positive health outcomes. Innovative technology through informatics that compiles accurate data on analgesic utilization in the emergency department (ED) can enhance pain management in this important clinical setting. We aim to establish a simplified reporting system through CI to examine and assess the analgesic prescribing practices in the ED through executing a U.S. federal grant project on opioid reduction initiatives. Queried data points of interest from a level-one trauma ED’s electronic medical records were used to create data sets and develop informational/visual reporting dashboards (on Microsoft Excel and Google Sheets) concerning analgesic usage across several pre-defined parameters and performance metrics using CI. The data were then qualitatively analyzed by departmental clinicians and leadership to evaluate ED analgesic prescribing trends. During a 12-month reporting period (Dec. 1, 2020 – Nov. 30, 2021) for the ongoing project, about 41% of all ED patient visits (N = 91,747) were for pain conditions, of which 81.6% received analgesics in the ED and at discharge (D/C). Of those treated with analgesics, 24.3% received opioids compared to 75.7% receiving opioid alternatives in the ED and at D/C, including non-pharmacological modalities. Demographics showed that among patients receiving analgesics, 56.7% were aged between 18-64, 51.8% were male, 51.7% were white, and 66.2% had government-funded health insurance. Ninety-one percent of all opioids were prescribed in the ED, with intravenous (IV) morphine, IV fentanyl, and morphine sulfate immediate release (MSIR) tablets accounting for 88.0% of ED-dispensed opioids. Of the 9.3% of all opioids prescribed at D/C, MSIR was dispensed 72.1% of the time. Hydrocodone, oxycodone, and tramadol accounted for only 10-15% of usage, and hydromorphone for 0%. Of the opioid alternatives, non-steroidal anti-inflammatory drugs were utilized 60.3% of the time, local anesthetics and ultrasound-guided nerve blocks 23.5%, and acetaminophen 7.9%, as the primary non-opioid drug categories prescribed by ED providers. Non-pharmacological analgesia included virtual reality and other modalities. An average of 18.5 ED opioid orders and 1.9 opioid D/C prescriptions per 102.4 daily ED patient visits was observed for the period. Compared to other specialties within our institution, ED providers account for 2.0% of opioid D/C prescriptions, versus the national average of 4.8%. Opioid alternatives accounted for 69.7% of usage in the ED and 30.3% at D/C, versus 90.7% and 9.3% for opioids, respectively. There is a pressing need for concise, relevant, and reliable clinical data on analgesic utilization for ED providers and leadership to evaluate prescribing practices and make data-driven decisions. Basic computer software can be used to create effective visual reporting dashboards with indicators that convey relevant and timely information in an easy-to-digest manner. We accurately examined our ED's analgesic prescribing practices using CI through dashboard reporting. Such reporting tools can quickly identify key performance indicators and prioritize data to enhance pain management and promote safe prescribing practices in the emergency setting.
Keywords: clinical informatics, dashboards, emergency department, health informatics, healthcare informatics, medical informatics, opioids, pain management, technology
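A minimal sketch of the kind of aggregation such a dashboard performs, using pandas on a hypothetical visit table; the column names and values are illustrative placeholders, not the facility's EMR schema:

```python
import pandas as pd

# Hypothetical ED visit records (illustrative schema, not the actual EMR).
visits = pd.DataFrame({
    "visit_id": [1, 2, 3, 4, 5],
    "pain_visit": [True, True, False, True, True],
    "analgesic_given": [True, True, False, False, True],
    "opioid": [True, False, False, False, False],
})

pain = visits[visits["pain_visit"]]
treated = pain[pain["analgesic_given"]]
kpis = {
    "pct_visits_for_pain": 100 * len(pain) / len(visits),
    "pct_pain_visits_treated": 100 * len(treated) / len(pain),
    "pct_treated_with_opioid": 100 * treated["opioid"].mean(),
}
print(kpis)  # these percentages are the kind of indicator a dashboard would chart
```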
743 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data
Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone
Abstract:
The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore how well the mean signals extracted from ICA components corresponding to 15 well-known networks could distinguish between controls and patients. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald criteria as revised by Polman, and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR images. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. White matter (WM) and cerebrospinal fluid (CSF) signals, together with 6 motion parameters, were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT toolbox using the Infomax approach with 21 components. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in the network), with the R language. The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (RFE) for the SVM, to obtain a ranking of the most predictive variables. We then built two new classifiers using only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and RFE-SVM was performed, the most important variable was the sensori-motor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the network that best discriminated between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine
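The abstract's analysis was performed in R; the sketch below re-expresses the same pipeline shape (RF with Gini-based importances, and RFE feeding an RBF-SVM) in Python with scikit-learn, on synthetic stand-in data of the same dimensions (37 subjects × 15 network features). It illustrates the workflow only and will not reproduce the reported accuracies.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(37, 15))          # mean signal per network (synthetic)
y = np.array([0] * 19 + [1] * 18)      # 19 controls, 18 early-MS patients

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Random Forest: intrinsic (Gini-based) feature importances
rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("RF test accuracy:", rf.score(X_te, y_te))
print("Top RF features:", np.argsort(rf.feature_importances_)[::-1][:3])

# RFE with a linear SVM to rank features, then an RBF-SVM on the top feature
rfe = RFE(SVC(kernel="linear"), n_features_to_select=1).fit(X_tr, y_tr)
top = rfe.support_                      # boolean mask of selected feature(s)
svm = SVC(kernel="rbf").fit(X_tr[:, top], y_tr)
print("RBF-SVM test accuracy (top feature only):", svm.score(X_te[:, top], y_te))
```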
742 Room Temperature Sensitive Broadband Terahertz Photo Response Using Platinum Telluride Based Devices
Authors: Alka Jakhar, Harmanpreet Kaur Sandhu, Samaresh Das
Abstract:
Terahertz (THz) technology-based devices are advancing at a rapid rate on account of their wide range of applications in imaging, security, communication, and spectroscopy. The available room-temperature THz detectors, including Golay cells, pyroelectric detectors, field-effect transistors, and photoconductive antennas, have limitations such as narrow-band response, slow response speed, transit-time limits, and complex fabrication processes. There is an urgent demand to explore new materials and device structures to accomplish efficient THz detection systems. Recently, transition metal dichalcogenides (TMDs), including topological semimetals and topological insulators such as PtSe₂, MoTe₂, WSe₂, and PtTe₂, have provided novel possibilities for photonic and optical devices. The peculiar properties of these materials, such as the Dirac cone, the presence of fermions, nonlinear optical response, high conductivity, and ambient stability, make them promising for the development of THz devices. Here, platinum telluride (PtTe₂) based devices are demonstrated for THz detection in the frequency range of 0.1-1 THz. The PtTe₂ is synthesized by direct tellurization of a sputtered platinum film on a high-resistivity silicon substrate using the chemical vapor deposition (CVD) method. Raman spectra, XRD, and XPS spectra confirm the formation of the thin PtTe₂ film. The PtTe₂ channel length is 5 µm, and it is connected to a bow-tie antenna for strong THz electric field confinement in the channel. The devices were characterized over the wide frequency range of 0.1-1 THz. The induced THz photocurrent is measured using a lock-in amplifier after a preamplifier. A maximum responsivity of up to 1 A/W is achieved in self-biased mode, and it increases further when a bias voltage is applied. This photoresponse to low-energy THz photons is mainly due to the photogalvanic effect in PtTe₂. A DC current is induced along the PtTe₂ channel, which is directly proportional to the amplitude of the incident THz electric field. Thus, these new topological semimetal materials provide new pathways for sensitive detection and sensing applications in the THz domain.
Keywords: terahertz, detector, responsivity, topological-semimetals
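For orientation, responsivity is conventionally defined as photocurrent divided by incident power, R = I_ph / P_in. The numbers in the sketch below are assumptions chosen to give 1 A/W; they are not measured values from the study.

```python
# Responsivity as commonly defined for photodetectors: R = I_ph / P_in.
photocurrent_a = 0.5e-6    # induced THz photocurrent (A), assumed
incident_power_w = 0.5e-6  # THz power coupled into the channel (W), assumed

responsivity = photocurrent_a / incident_power_w
print(f"Responsivity = {responsivity:.1f} A/W")  # -> 1.0 A/W
```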
741 Extraction of Nutraceutical Bioactive Compounds from the Native Algae Using Solvents with a Deep Natural Eutectic Point and Ultrasonic-assisted Extraction
Authors: Seyedeh Bahar Hashemi, Alireza Rahimi, Mehdi Arjmand
Abstract:
Food is the source of energy and growth through the breakdown of its vital components and plays a vital role in human health and nutrition. Many natural compounds found in plant and animal materials play a special role in biological systems, and many such compounds originate, directly or indirectly, from algae. Algae are an enormous source of polysaccharides and have gained much interest for human health. In this study, algae biomass extraction is conducted using natural deep eutectic solvents (NADES) and ultrasound-assisted extraction (UAE). The aim of this research is to extract bioactive compounds, including total carotenoids, antioxidant activity, and polyphenolic contents. For this purpose, the influence of three important extraction parameters, namely biomass-to-solvent ratio, temperature, and time, is studied with respect to their impact on the recovery of carotenoids and phenolics and on the extracts’ antioxidant activity. Here we employ response surface methodology (RSM) for process optimization. The influence of the independent parameters on each dependent variable is determined through analysis of variance (ANOVA). Our results show that UAE for 50 min is the best extraction condition, and proline:lactic acid (1:1) and choline chloride:urea (1:2) extracts show the highest total phenolic contents (50.00 ± 0.70 mgGAE/gdw) and antioxidant activity [60.00 ± 1.70 mgTE/gdw and 70.00 ± 0.90 mgTE/gdw in the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) assays, respectively]. Our results confirm that the combination of UAE and NADES provides an excellent alternative to organic solvents for sustainable and green extraction and has huge potential for use in industrial applications involving the extraction of bioactive compounds from algae. This study is among the first attempts to optimize the effects of ultrasound-assisted extraction, ultrasonic devices, and natural deep eutectic solvents and to investigate their application in the extraction of bioactive compounds from algae. We also discuss the future perspective of ultrasound technology, which helps to understand the complex mechanism of ultrasound-assisted extraction and further guides its application to algae.
Keywords: natural deep eutectic solvents, ultrasound-assisted extraction, algae, antioxidant activity, phenolic compounds, carotenoids
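A minimal sketch of the RSM step, assuming a second-order (quadratic) model of extraction yield versus the three studied factors; the data and the response function are synthetic toys, not the study's measurements:

```python
# Fitting a quadratic response surface (second-order RSM model) to
# synthetic extraction data with scikit-learn.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)
# Factors: biomass-to-solvent ratio, temperature (deg C), time (min) - assumed ranges
X = rng.uniform([0.02, 30, 10], [0.10, 60, 60], size=(20, 3))
# Toy phenolics yield (mgGAE/gdw) with curvature in ratio and temperature
y = (50 - 200 * (X[:, 0] - 0.05) ** 2 - 0.01 * (X[:, 1] - 45) ** 2
     + 0.1 * X[:, 2] + rng.normal(0, 1, 20))

quad = PolynomialFeatures(degree=2, include_bias=False)  # linear + interaction + squared terms
model = LinearRegression().fit(quad.fit_transform(X), y)
print("R^2 of fitted quadratic surface:", model.score(quad.transform(X), y))
```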
740 High Cycle Fatigue Analysis of a Lower Hopper Knuckle Connection of a Large Bulk Carrier under Dynamic Loading
Authors: Vaso K. Kapnopoulou, Piero Caridis
Abstract:
The fatigue of ship structural details is of major concern in the maritime industry, as it can generate fracture issues that may compromise structural integrity. In the present study, a fatigue analysis of the lower hopper knuckle connection of a bulk carrier was conducted using the Finite Element Method by means of the ABAQUS/CAE software. The fatigue life was calculated using Miner’s rule, with the long-term distribution of stress range represented by a two-parameter Weibull distribution. The cumulative damage ratio was estimated from the fatigue damage caused by the stress ranges occurring at each loading condition. For this purpose, a cargo hold model was first generated, which extends over the length of two holds (the mid-hold and half of each of the adjacent holds) and transversely over the full breadth of the hull girder. Following that, a submodel of the area of interest was extracted in order to calculate the hot spot stress of the connection and to estimate the fatigue life of the structural detail. Two hot spot locations were identified: one at the top layer of the inner bottom plate and one at the top layer of the hopper plate. The IACS Common Structural Rules (CSR) require that specific dynamic load cases be assessed for each loading condition. The dynamic load case that causes the highest stress range at each loading condition is then used in the fatigue analysis for the calculation of the cumulative fatigue damage ratio. Each load case has a different effect on the ship hull response. Of main concern, when assessing the fatigue strength of the lower hopper knuckle connection, was the determination of the maximum, i.e. the critical, value of the stress range, which acts in a direction normal to the weld toe line. This acts in the transverse direction, that is, perpendicular to the ship's centerline axis. The load cases were explored both theoretically and numerically in order to establish the one that causes the highest damage to the location examined. The most severe one was identified to be the load case induced by the beam sea condition where the encountered wave comes from starboard. At the cargo hold model level, the model was assumed to be simply supported at its ends. A coarse mesh was generated to represent the overall stiffness of the structure. The elements employed were quadrilateral shell elements, each having four integration points. A linear elastic analysis was performed, because linear elastic material behavior can be presumed since only localized yielding is allowed by most design codes. At the submodel level, the displacements from the cargo hold model analysis were applied to the outer-region nodes of the submodel as boundary conditions and loading. In order to calculate the hot spot stress at the hot spot locations, a very fine mesh zone was generated and used. The fatigue life of the detail was found to be 16.4 years, which is lower than the design fatigue life of the structure (25 years), making this location vulnerable to fatigue fracture issues. Moreover, the loading conditions that induce the most damage to the location were found to be the various ballasting conditions.
Keywords: dynamic load cases, finite element method, high cycle fatigue, lower hopper knuckle
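For reference, when the long-term stress-range distribution is two-parameter Weibull and a one-slope S-N curve N = K·S⁻ᵐ is assumed (a common simplification; the CSR formulation applied in the study may differ in detail), Miner's cumulative damage ratio admits the standard closed form:

```latex
% Miner sum under a Weibull(q, h) long-term stress-range distribution
% and a one-slope S-N curve N = K S^{-m}:
D \;=\; \sum_i \frac{n_i}{N_i}
  \;=\; \frac{N_L}{K}\, q^{m}\, \Gamma\!\left(1 + \frac{m}{h}\right)
```

Here N_L is the number of stress cycles in the period considered, q and h are the Weibull scale and shape parameters, and Γ is the gamma function; the fatigue life then follows as the assessment period divided by D, which is how a figure such as 16.4 years is obtained against a 25-year design life.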
739 Composition Dependence of Ni 2p Core Level Shift in Fe1-xNix Alloys
Authors: Shakti S. Acharya, V. R. R. Medicherla, Rajeev Rawat, Komal Bapna, Deepnarayan Biswas, Khadija Ali, K. Maiti
Abstract:
The discovery of the invar effect in the Fe1-xNix alloy with 35% Ni concentration has stimulated enormous experimental and theoretical research. Elemental Fe and low-Ni-concentration Fe1-xNix alloys, which possess a body-centred cubic (bcc) crystal structure at ambient temperature and pressure, transform to a hexagonally close-packed (hcp) phase at around 13 GPa. Magnetic order was found to be absent at 11 K for the Fe92Ni8 alloy when subjected to a high pressure of 26 GPa. Density functional theory calculations predicted substantial hyperfine magnetic fields, but these were not observed in Mössbauer spectroscopy. The bulk modulus of fcc Fe1-xNix alloys with Ni concentrations above 35% is found to be independent of pressure. The magnetic moment of Fe is also found to be almost the same in these alloys from 4 to 10 GPa. Fe1-xNix alloys exhibit a complex microstructure which is formed by a series of complex phase transformations, such as martensitic transformation, spinodal decomposition, ordering, and monotectoid and eutectoid reactions, at temperatures below 400°C. Despite the existence of several theoretical models, the field is still in its infancy, lacking full knowledge about the anomalous properties exhibited by these alloys. Fe1-xNix alloys have been prepared by arc melting the high-purity constituent metals in an argon ambient. These alloys have been annealed at around 300°C in vacuum-sealed quartz tubes for two days to make the samples homogeneous. The alloys have been structurally characterized by x-ray diffraction and were found to exhibit a transition from bcc to fcc for x > 0.3. Ni 2p core levels of the alloys have been measured using high-resolution (0.45 eV) x-ray photoelectron spectroscopy. The Ni 2p core level shifts to lower binding energy with respect to that of pure Ni metal, giving rise to negative core level shifts (CLSs). The measured CLSs exhibit a linear dependence in the fcc region (x > 0.3) and were found to deviate slightly in the bcc region (x < 0.3). The ESCA potential model fails to correlate CLSs with site potentials or charges in metallic alloys. CLSs in these alloys occur mainly due to a shift in the valence bands with composition, caused by intra-atomic charge redistribution.
Keywords: arc melting, core level shift, ESCA potential model, valence band
738 The Incidence of Obesity among Adult Women in Pekanbaru City, Indonesia, Related to High Fat Consumption, Stress Level, and Physical Activity
Authors: Yudia Mailani Putri, Martalena Purba, B. J. Istiti Kandarina
Abstract:
Background: Obesity has been recognized as a global health problem, and the number of individuals classified as overweight or obese is increasing at an alarming rate. This condition is associated with psychological and physiological problems. As a person reaches adulthood, somatic growth ceases; at this stage, the human body has developed fully, to a stable state. As the capital of Riau Province in Indonesia, Pekanbaru is dominated by a Malay ethnic population habitually consuming cholesterol-rich fatty foods as a daily menu, a trigger for the onset of obesity that results in a high prevalence of degenerative diseases. Research objectives: The aim of this study is to elaborate the relationship between high-fat consumption patterns, stress level, physical activity, and the incidence of obesity in adult women in Pekanbaru city. Research methods: Among the combined research methods applied in this study, the first stage is a quantitative observational, analytical cross-sectional research design involving adult women aged 20-40 living in Pekanbaru city. The sample consists of 200 women with BMI ≥ 25. Sample data were processed with univariate, bivariate (correlation and simple linear regression) and multivariate (multiple linear regression) analyses. The second phase is a qualitative descriptive study with purposive sampling and in-depth interviews; six participants withdrew from the study. Results: According to the bivariate analysis, there are relationships between the incidence of obesity and the pattern of high-fat food consumption: energy intake (p < 0.001; r = 0.536), protein intake (p < 0.001; r = 0.307), fat intake (p < 0.001; r = 0.416), carbohydrate intake (p < 0.001; r = 0.430), frequency of fatty food consumption (p < 0.001; r = 0.506) and frequency of viscera food consumption (p < 0.001; r = 0.535). There is a relationship between physical activity and the incidence of obesity (p < 0.001; r = -0.631). However, there is no relationship between the level of stress (p = 0.741; r = -0.019) and the incidence of obesity. Physical activity is the predominant factor in the incidence of obesity in adult women in Pekanbaru city. Conclusion: There are relationships between high-fat food consumption patterns, physical activity, and the incidence of obesity in Pekanbaru city, with physical activity the predominant factor in the occurrence of obesity against a largely unchanged pattern of high-fat food consumption.
Keywords: obesity, adult, high in fat, stress, physical activity, consumption pattern
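A minimal sketch of the bivariate step (a Pearson correlation coefficient with its p-value) on synthetic stand-in data; the variable names, units and values are illustrative, not the study's measurements:

```python
# Pearson correlation of the kind reported in the bivariate analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
energy_intake = rng.normal(2000, 300, 200)            # kcal/day (toy values)
bmi = 0.004 * energy_intake + rng.normal(17, 1.5, 200)  # synthetic linear relation

r, p = stats.pearsonr(energy_intake, bmi)
print(f"r = {r:.3f}, p = {p:.3g}")
```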
737 Recovery of Physical Performance in Postpartum Women: An Effective Physical Education Program
Authors: Julia A. Ermakova
Abstract:
This study aimed to investigate the efficacy of a physical rehabilitation program for postpartum women. The program was developed with the purpose of restoring physical performance in women during the postpartum period. The research employed a variety of methods, including an analysis of scientific literature, pedagogical testing and experimentation, mathematical processing of study results, and physical performance assessment using a range of tests. The program recommends refraining from abdominal exercises during the first 6-8 months following a cesarean section and avoiding exercises with weights. Instead, a feasible training regimen that gradually increases in intensity several times a week is recommended, along with moderate cardio exercises such as walking, bodyweight training, and a separate workout component that targets posture improvement. Stretching after strength training is also encouraged. The necessary equipment includes comfortable sports attire with a chest support top, mat, push-ups, resistance band, timer, and clock. The motivational aspect of the program is paramount, and the mentee's positive experience with the workout regimen includes feelings of lightness in the body, increased energy, and positive emotions. The gradual reduction of body size and weight loss due to an improved metabolism also serves as positive reinforcement. The mentee's progress can be measured through various means, including an external assessment of her form, body measurements, weight, BMI, and the presence or absence of slouching in everyday life. The findings of this study reveal that the program is effective in restoring physical performance in postpartum women. The mentee achieved weight loss and almost regained her pre-pregnancy shape while her self-esteem improved. Her waist, shoulder, and hip measurements decreased, and she displayed less slouching in her daily life. In conclusion, the developed physical rehabilitation program for postpartum women is an effective means of restoring physical performance. It is crucial to follow the recommended training regimen and equipment to avoid limitations and ensure safety during the postpartum period. The motivational component of the program is also fundamental in encouraging positive reinforcement and improving self-esteem.
Keywords: physical rehabilitation, postpartum, methodology, postpartum recovery, rehabilitation
736 Establishment of Diagnostic Reference Levels for Computed Tomography Examination at the University of Ghana Medical Centre
Authors: Shirazu Issahaku, Isaac Kwesi Acquah, Simon Mensah Amoh, George Nunoo
Abstract:
Introduction: Diagnostic reference levels (DRLs) are important indicators for monitoring and optimizing protocols and procedures in medical imaging across facilities and equipment. They help to evaluate whether, under routine clinical conditions, the median value obtained for a representative group of patients undergoing a specified procedure is unusually high or low for that procedure. This study aimed to propose DRLs for the most common routine computed tomography (CT) examinations of the head, chest and abdominopelvic regions at the University of Ghana Medical Centre. Methods: The DRLs were determined from the most common routine examinations, including head CT with and without contrast, abdominopelvic CT with and without contrast, and chest CT without contrast. The study was based on two dose indicators: the volumetric computed tomography dose index (CTDIvol) and the dose-length product (DLP). Results: The estimated median CTDIvol and DLP for head CT with contrast were 38.33 mGy and 829.35 mGy·cm, while without contrast they were 38.90 mGy and 860.90 mGy·cm, respectively. For abdominopelvic CT with contrast, the estimated CTDIvol and DLP values were 40.19 mGy and 2096.60 mGy·cm; in the absence of contrast, the calculated values were 14.65 mGy and 800.40 mGy·cm, respectively. Additionally, for chest CT, the estimated CTDIvol and DLP were 12.75 mGy and 423.95 mGy·cm, respectively. These median values represent the proposed diagnostic reference values for the head, chest, and abdominopelvic regions. Conclusions: The proposed DRLs are comparable to those recommended by the International Atomic Energy Agency and International Commission on Radiological Protection Publication 135, and to other regional published data from the European Commission and regional national DRLs in Africa. These reference levels will serve as benchmarks to guide clinicians in optimizing radiation dose levels while ensuring adequate diagnostic image quality at the facility.
Keywords: diagnostic reference levels, computed tomography dose index, computed tomography, radiation exposure, dose-length product, radiation protection
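A minimal sketch of how a facility DRL of this kind is derived: collect the dose indicators for a representative patient sample per protocol and take the median. The sample values below are synthetic, centred near the reported head-CT medians for illustration only:

```python
# Deriving facility DRLs as medians of per-patient dose indicators.
import numpy as np

rng = np.random.default_rng(4)
ctdi_vol = rng.lognormal(mean=np.log(38), sigma=0.20, size=30)  # CTDIvol per exam (mGy), synthetic
dlp = rng.lognormal(mean=np.log(830), sigma=0.25, size=30)      # DLP per exam (mGy*cm), synthetic

print(f"Proposed head-CT DRLs: CTDIvol = {np.median(ctdi_vol):.2f} mGy, "
      f"DLP = {np.median(dlp):.2f} mGy*cm")
```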
735 Future Research on the Resilience of Tehran’s Urban Areas Against Pandemic Crises Horizon 2050
Authors: Farzaneh Sasanpour, Saeed Amini Varaki
Abstract:
Resilience is an important goal for cities, as urban areas face an increasing range of challenges in the 21st century; given the characteristics of these risks, urban resilience is an approach that responds to sensitive conditions within the risk management process. During the COVID-19 pandemic, Iran, and especially the metropolis of Tehran, was not immune to the crisis and its effects and consequences, and faced many challenges. One of the methods that can increase the resilience of the Tehran metropolis against possible future crises is futures studies. This research is applied in type. Its general pattern is descriptive-analytical; since it seeks to relate the components, provide urban resilience indicators for pandemic crises, and explain scenarios, its futures-studies method is exploratory. In order to extract and determine the key factors and driving forces affecting the resilience of Tehran's urban areas against pandemic crises (COVID-19), cross-impact structural analysis with the MICMAC software was used. The primary factors and variables affecting the resilience of Tehran's urban areas were grouped into 5 main factors, including physical-infrastructural (transportation, spatial and physical organization, streets and roads, multi-purpose development), with 39 variables in the cross-impact analysis. Finally, the key factors and variables were categorized into five main areas: managerial-institutional with 5 variables; technological (smartness) with 3 variables; economic with 2 variables; socio-cultural with 3 variables; and physical-infrastructural with 7 variables. These factors and variables were used as the key factors and driving forces affecting the resilience of Tehran's urban areas against pandemic crises (COVID-19) in explaining and developing the scenarios. To develop the scenarios, intuitive logic, scenario planning as one of the futures-research methods, and the Global Business Network (GBN) model were used. Finally, four scenarios were drawn up and selected with a creative method using the metaphor of weather conditions, each indicating the general outline of the Tehran metropolis's condition in that situation: 1) the solar scenario (optimal governance and management, leading in smart technology); 2) the cloudy scenario (optimal governance and management, following in smart technology); 3) the dark scenario (unfavorable governance and management, leading in smart technology); and 4) the stormy scenario (unfavorable governance and management, following in smart technology). The solar scenario represents the best situation and the stormy scenario the worst for the Tehran metropolis.
According to the findings of this research, city managers can, by using futures-research methods, form a coherent picture with a long-term horizon of 2050 across all the factors and components of urban resilience against pandemic crises, chart a path for urban resilience, and provide platforms for upgrading and increasing the capacity to deal with crises, so as to create the conditions for the realization, development and evolution of Tehran's urban areas in a way that guarantees long-term balance and stability across all dimensions and levels.
Keywords: future research, resilience, crisis, pandemic, COVID-19, Tehran
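A conceptual sketch of the MICMAC-style cross-impact step named above: direct influence and dependence of each variable are the row and column sums of the cross-impact matrix, and indirect influence is approximated by raising the matrix to successive powers. The 5×5 matrix below is a toy stand-in, not the study's 39-variable matrix:

```python
# MICMAC-style ranking of variables from a cross-impact matrix M,
# where M[i, j] scores the influence of variable i on variable j.
import numpy as np

M = np.array([
    [0, 2, 1, 0, 3],
    [1, 0, 2, 1, 0],
    [0, 1, 0, 2, 1],
    [2, 0, 1, 0, 2],
    [1, 1, 0, 1, 0],
])

direct_influence = M.sum(axis=1)    # how strongly each variable drives the others
direct_dependence = M.sum(axis=0)   # how strongly each variable is driven
indirect = np.linalg.matrix_power(M, 3)  # 3rd-power paths capture indirect effects

print("direct influence:  ", direct_influence)
print("direct dependence: ", direct_dependence)
print("indirect influence:", indirect.sum(axis=1))
```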