Search results for: force measurements
50 Predictive Analytics for Theory Building
Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim
Abstract:
Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) on a single person or unit. It applies empirical methods in statistics, operations research, and machine learning to predict future or otherwise unknown events or outcomes on a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate the hypothesized metabolic syndrome predictors (theory testing). A proposed theoretical model is formed from causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can perform vital roles in explanatory studies, i.e., scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how to use our predictive analytics to support theory building (i.e., hypothesis generation). For this purpose, the study utilized a big data predictive analytics platform based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where items in a basket are fully connected. A cluster is a collection of fully connected items, where the specific group of items has co-occurred in several rows of a data set. Clusters can be ranked using importance metrics such as node size (number of items), frequency, and surprise (observed vs. expected frequency), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge amounts of transactions can be represented and processed efficiently. For a demonstration, a total of 13,254 metabolic syndrome training observations were fed into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors associated, for example, with sociodemographics, habits, and activities. Some, such as cancer examination, house type, and vaccination, were intentionally included to gain predictive analytics insights on variable selection. The platform automatically generates plausible hypotheses (rules) without statistical modeling. The rules were then validated with an external testing dataset of 4,090 observations. The results, as a form of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation. In contrast, a set of rules (many estimated equations from a statistical perspective) in this study may imply heterogeneity in a population (i.e., different subpopulations with unique features are aggregated). The next step of theory development, i.e., theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of the phenomenon of interest. If the generated hypotheses are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods that utilize a subset of the observations, such as bootstrap resampling with an appropriate sample size.
Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building
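As an illustration of the cluster ranking described above, the following minimal Python sketch counts item co-occurrences and scores each pair with a simple surprise metric (observed vs. expected frequency). The item names and toy transactions are hypothetical placeholders; the actual platform and its importance metrics are not reproduced here.

```python
# Minimal sketch of co-occurrence counting and a "surprise" score
# (observed vs. expected frequency). Item names and transactions are
# illustrative placeholders, not data from the study.
from collections import Counter
from itertools import combinations

transactions = [
    {"high_waist", "high_glucose", "smoker"},
    {"high_waist", "high_glucose", "low_activity"},
    {"high_glucose", "low_activity"},
    {"high_waist", "smoker"},
]

n = len(transactions)
item_freq = Counter(item for t in transactions for item in t)
pair_freq = Counter(frozenset(p) for t in transactions
                    for p in combinations(sorted(t), 2))

def surprise(pair):
    """Observed co-occurrence probability divided by the probability
    expected if the two items occurred independently."""
    a, b = tuple(pair)
    observed = pair_freq[pair] / n
    expected = (item_freq[a] / n) * (item_freq[b] / n)
    return observed / expected

for pair in sorted(pair_freq, key=surprise, reverse=True):
    print(sorted(pair), pair_freq[pair], round(surprise(pair), 2))
```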
Procedia PDF Downloads 276
49 Partial Discharge Characteristics of Free-Moving Particles in HVDC-GIS
Authors: Philipp Wenger, Michael Beltle, Stefan Tenbohlen, Uwe Riechert
Abstract:
The integration of renewable energy introduces new challenges to the transmission grid, as the power generation is located far from load centers. The associated necessary long-range power transmission increases the demand for high voltage direct current (HVDC) transmission lines and DC distribution grids. HVDC gas-insulated switchgears (GIS) are considered a key technology, due to the combination of DC technology with the long operating experience of AC-GIS. To ensure long-term reliability of such systems, insulation defects must be detected in an early stage. Operational experience with AC systems has shown that most failures which can be attributed to breakdowns of the insulation system can be detected and identified beforehand via partial discharge (PD) measurements. In AC systems, the identification of defects relies on the phase resolved partial discharge pattern (PRPD). Since there is no phase information in DC systems, this method cannot be transferred to DC PD diagnostics. Furthermore, the behaviour of, e.g., free-moving particles differs significantly at DC: under the influence of a constant direct electric field, charge carriers can accumulate on particles’ surfaces. As a result, a particle can lift off, oscillate between the inner conductor and the enclosure, or rapidly bounce at just one electrode, which is known as firefly motion. Depending on the motion and the relative position of the particle to the electrodes, broadband electromagnetic PD pulses are emitted, which can be recorded by ultra-high frequency (UHF) measuring methods. PDs are often accompanied by light emissions at the particle’s tip, which enables optical detection. This contribution investigates PD characteristics of free-moving metallic particles in a commercially available 300 kV SF6-insulated HVDC-GIS. The influences of various defect parameters on the particle motion and the PD characteristic are evaluated experimentally. Several particle geometries, such as cylinders, lamellae, spirals and spheres of different lengths, diameters and weights, are examined. The applied DC voltage is increased stepwise from the inception voltage up to UDC = ± 400 kV. Different physical detection methods are used simultaneously in a time-synchronized setup. Firstly, the electromagnetic waves emitted by the particle are recorded by a UHF measuring system. Secondly, a photomultiplier tube (PMT) detects light emission with a wavelength in the range of λ = 185…870 nm. Thirdly, a high-speed camera (HSC) tracks the particle’s motion trajectory with high accuracy. Furthermore, an electrically insulated electrode is attached to the grounded enclosure and connected to a current shunt in order to detect low frequency ion currents: the shunt measuring system’s sensitivity is in the range of 10 nA at a measuring bandwidth of bw = DC…1 MHz. Currents of charge carriers, which are generated at the particle’s tip, migrate through the gas gap to the electrode and can be recorded by the current shunt. All recorded PD signals are analyzed in order to identify characteristic properties of different particles. This includes, e.g., repetition rates and amplitudes of successive pulses, characteristic frequency ranges and detected signal energy of single PD pulses. Concluding, an advanced understanding of the underlying physical phenomena of particle motion in a direct electric field can be derived.
Keywords: current shunt, free moving particles, high-speed imaging, HVDC-GIS, UHF
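As a rough illustration of the pulse analysis mentioned above (repetition rates, amplitudes and signal energy of single PD pulses), the sketch below extracts these quantities from a recorded trace by simple threshold detection. The sampling rate, threshold factor and synthetic signal are assumptions for demonstration only, not parameters of the described measuring setup.

```python
# Hypothetical sketch: pulse amplitudes, repetition rate and energy from a
# recorded PD trace via threshold detection. Sampling rate, threshold and
# the synthetic signal are illustrative assumptions.
import numpy as np

fs = 100e6                                   # sampling rate [Hz] (assumed)
t = np.arange(0, 0.01, 1 / fs)
rng = np.random.default_rng(1)
signal = 0.01 * rng.standard_normal(t.size)  # noise floor placeholder
for p in rng.integers(0, t.size - 200, size=25):
    signal[p:p + 50] += np.hanning(50)       # injected synthetic PD pulses

threshold = 5 * np.std(signal)
above = signal > threshold
starts = np.flatnonzero(above[1:] & ~above[:-1])   # rising edges = pulse starts

amplitudes = [signal[s:s + 100].max() for s in starts]
energies = [np.sum(signal[s:s + 100] ** 2) / fs for s in starts]
repetition_rate = len(starts) / t[-1]

print(f"{len(starts)} pulses, repetition rate ≈ {repetition_rate:.0f} 1/s, "
      f"mean amplitude ≈ {np.mean(amplitudes):.2f}")
```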
Procedia PDF Downloads 160
48 Wear Resistance in Dry and Lubricated Conditions of Hard-anodized EN AW-4006 Aluminum Alloy
Authors: C. Soffritti, A. Fortini, E. Baroni, M. Merlin, G. L. Garagnani
Abstract:
Aluminum alloys are widely used in many engineering applications due to their advantages such as high electrical and thermal conductivities, low density, high strength to weight ratio, and good corrosion resistance. However, their low hardness and poor tribological properties still limit their use in industrial fields requiring sliding contacts. Hard anodizing is one of the most common solutions for overcoming issues concerning the insufficient friction resistance of aluminum alloys. In this work, the tribological behavior of hard-anodized AW-4006 aluminum alloys in dry and lubricated conditions was evaluated. Three different hard-anodizing treatments were selected: a conventional one (HA) and two innovative golden hard-anodizing treatments (named G and GP, respectively), which involve the sealing of the porosity of anodic aluminum oxides (AAO) with silver ions at different temperatures. Before wear tests, all AAO layers were characterized by scanning electron microscopy (VPSEM/EDS), X-ray diffractometry, roughness (Ra and Rz), microhardness (HV0.01), nanoindentation, and scratch tests. Wear tests were carried out according to the ASTM G99-17 standard using a ball-on-disc tribometer. The tests were performed in triplicate under a 2 Hz constant frequency oscillatory motion, a maximum linear speed of 0.1 m/s, normal loads of 5, 10, and 15 N, and a sliding distance of 200 m. A 100Cr6 steel ball 10 mm in diameter was used as the counterpart material. All tests were conducted at room temperature, in dry and lubricated conditions. Considering the more recent regulations about environmental hazards, four bio-lubricants were considered after assessing their chemical composition (in terms of Unsaturation Number, UN) and viscosity: olive, peanut, sunflower, and soybean oils. The friction coefficient was provided by the equipment. The wear rate of anodized surfaces was evaluated by measuring the cross-section area of the wear track with a non-contact 3D profilometer. Each area value, obtained as an average of four measurements of cross-section areas along the track, was used to determine the wear volume. The worn surfaces were analyzed by VPSEM/EDS. Finally, in agreement with the DoE methodology, a statistical analysis was carried out to identify the most influencing factors on the friction coefficients and wear rates. In all conditions, results show that the friction coefficient increased with increasing normal load. Considering the wear tests in dry sliding conditions, irrespective of the type of anodizing treatment, metal transfer between the mating materials was observed over the anodic aluminum oxides. During sliding at higher loads, the detachment of the metallic film also caused the delamination of some regions of the wear track. For the wear tests in lubricated conditions, the natural oils with high percentages of oleic acid (i.e., olive and peanut oils) maintained high friction coefficients and low wear rates. Irrespective of the type of oil, small microcracks were visible over the AAO layers. Based on the statistical analysis, the type of anodizing treatment and the magnitude of the applied load were the main factors of influence on the friction coefficient and wear rate values. Nevertheless, an interaction between bio-lubricants and load magnitude could occur during the tests.
Keywords: hard anodizing treatment, silver ions, bio-lubricants, sliding wear, statistical analysis
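To make the wear-volume step explicit, the sketch below turns the averaged cross-section area of a circular wear track into a wear volume and a specific wear rate (volume per unit load and sliding distance). The cross-section values and track radius are hypothetical, and the specific wear-rate expression follows the common Archard-style convention rather than necessarily the authors' exact formula.

```python
# Illustrative computation of wear volume and specific wear rate from the
# averaged cross-section area of a circular wear track. All numerical
# values are assumed placeholders, not results from the study.
import math

cross_sections_mm2 = [0.012, 0.014, 0.013, 0.015]    # four measured areas (assumed)
mean_area_mm2 = sum(cross_sections_mm2) / len(cross_sections_mm2)

track_radius_mm = 8.0                                 # assumed ball-on-disc track radius
wear_volume_mm3 = mean_area_mm2 * 2 * math.pi * track_radius_mm

load_N = 10.0              # one of the applied normal loads
sliding_distance_m = 200   # sliding distance from the test protocol
specific_wear_rate = wear_volume_mm3 / (load_N * sliding_distance_m)  # mm^3/(N*m)

print(f"wear volume = {wear_volume_mm3:.3f} mm^3, "
      f"specific wear rate = {specific_wear_rate:.2e} mm^3/(N*m)")
```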
Procedia PDF Downloads 150
47 A Descriptive Study on Water Scarcity as a One Health Challenge among the Osiram Community, Kajiado County, Kenya
Authors: Damiano Omari, Topirian Kerempe, Dibo Sama, Walter Wafula, Sharon Chepkoech, Chrispine Juma, Gilbert Kirui, Simon Mburu, Susan Keino
Abstract:
The One Health concept was officially adopted by international organizations and scholarly bodies in 1984. It aims at combining human, animal and environmental components to address global health challenges. Through collaborative efforts, optimal health for people, animals, and the environment can be achieved. The One Health approach plays a significant role in the prevention and control of zoonotic diseases. It has also been noted that 75% of new emerging human infectious diseases are zoonotic. In Kenya, One Health has been embraced and strongly advocated for by One Health East and Central Africa (OHCEA). It was inaugurated on 17th of October 2010 at a historic meeting facilitated by USAID, with participants from seven public health schools and seven faculties of veterinary medicine in Eastern Africa and two American universities (Tufts and the University of Minnesota), in addition to RESPOND project staff. The study was conducted in Loitoktok Sub County, specifically in the Amboseli Ecosystem. The Amboseli ecosystem covers an area of 5,700 square kilometers and stretches between Mt. Kilimanjaro, Chyulu Hills, Tsavo West National Park and the Kenya/Tanzania border. The area is arid to semi-arid and is more suitable for pastoralism, with a high potential for conservation of wildlife and tourism enterprises. The ecosystem consists of the Amboseli National Park, which is surrounded by six group ranches, namely Kimana, Olgulului, Selengei, Mbirikani, Kuku and Rombo in Loitoktok District. The manyatta studied was the Osiram Cultural Manyatta in Mbirikani group ranch. Apart from visiting the manyatta, we also visited the sub-county hospital, slaughter slab, forest service, Kimana market, and the Amboseli National Park. The aim of the study was to identify the One Health issues facing the community. This was done by conducting a community needs assessment and prioritization. Different methods were used in data collection for the qualitative and numerical data. They included, among others, key informant interviews and focus group discussions. We also guided the community members in drawing their resource map, which helped identify the major resources on their land as well as some of the issues they were facing. Matrix piling, root cause analysis, and force field analysis tools were used to establish the One Health related priority issues facing community members. Skits were also used to present to the community interventions addressing the major One Health issues. Some of the prioritized needs among the community were water scarcity and inadequate markets for their beadwork. The group intervened on the various needs of the manyatta. For water scarcity, we educated the community on water harvesting methods using gutters, as well as proper storage by the use of tanks and earth dams. The community was also encouraged to recycle and conserve water. To improve markets, we educated the community on uploading their products online: a page was opened for them, and uploading photos was demonstrated to them. They were also encouraged to be innovative to attract more clients.
Keywords: Amboseli ecosystem, community interventions, community needs assessment and prioritization, one health issues
Procedia PDF Downloads 169
46 Salicornia bigelovii, a Promising Halophyte for Biosaline Agriculture: Lessons Learned from a 4-Year Field Study in United Arab Emirates
Authors: Dionyssia Lyra, Shoaib Ismail
Abstract:
Salinization of natural resources constitutes a significant component of the degradation force that leads to depletion of productive lands and fresh water reserves. The global extent of salt-affected soils is approximately 7% of the earth’s land surface and is expanding. The problems of excessive salt accumulation are most widespread in coastal, arid and semi-arid regions, where agricultural production is substantially hindered. The use of crops that can withstand high saline conditions is extremely interesting in such a context. Salt-loving plants, or ‘halophytes’, thrive when grown in hostile saline conditions where traditional crops cannot survive. Salicornia bigelovii, a halophytic crop with multiple uses (vegetable, forage, biofuel), has demonstrated remarkable adaptability to the harsh climatic conditions prevailing in dry areas, with great potential for its expansion. Since 2011, the International Center for Biosaline Agriculture (ICBA) has been working with the Masdar Institute (MI) and King Abdullah University of Science and Technology (KAUST) to look into the potential for growing S. bigelovii under hot and dry conditions. Through the projects undertaken, 50 different S. bigelovii genotypes were assessed under high saline conditions. The overall goal was to select the best performing S. bigelovii populations in terms of seed and biomass production for future breeding. Specific objectives included: 1) evaluation of selected S. bigelovii genotypes for various agronomic and growth parameters under field conditions, 2) seed multiplication of S. bigelovii using saline groundwater and 3) acquisition of inbred lines for further breeding. Field trials were conducted for four consecutive years at ICBA headquarters. During the first year, one Salicornia population was evaluated for seed and biomass production at different salinity levels, fertilizer treatments and planting methods. All growth parameters and biomass productivity for this Salicornia population showed good performance, with optimal biomass production in terms of both salinity level and fertilizer application. During the second year, 46 Salicornia populations (obtained from KAUST and the Masdar Institute) were evaluated for 24 growth parameters and treated with groundwater through drip irrigation. The plant material originated from wild collections. Six populations were also assessed for their growth performance under full-strength seawater. Salicornia populations were highly variable for all characteristics under study for both irrigation treatments, indicating that there is a large pool of genetic information available for breeding. Irrigation with the highest level of salinity had a negative impact on the agronomic performance. The maximum seed yield obtained was 2 t/ha at 20 dS/m (groundwater treatment) at 25 cm x 25 cm planting distance. The best performing Salicornia populations for fresh biomass and seed yield were selected for the following season. After continuous selection, the best performing Salicornia populations will be adopted for scaling-up options. Taking into account the results of the production field trials, Salicornia expansion will be targeted in coastal areas of the Arabian Peninsula. As a crop with high biofuel and forage potential, its cultivation can improve the livelihood of local farmers.
Keywords: biosaline agriculture, genotypes selection, halophytes, Salicornia bigelovii
Procedia PDF Downloads 407
45 Enhanced Dielectric and Ferroelectric Properties in Holmium Substituted Stoichiometric and Non-Stoichiometric SBT Ferroelectric Ceramics
Authors: Sugandha Gupta, Arun Kumar Jha
Abstract:
A large number of ferroelectric materials have been intensely investigated for applications in non-volatile ferroelectric random access memories (FeRAMs), piezoelectric transducers, actuators, pyroelectric sensors, high dielectric constant capacitors, etc. Bismuth layered ferroelectric materials such as Strontium Bismuth Tantalate (SBT) have attracted a lot of attention due to low leakage current, high remnant polarization and high fatigue endurance up to 10¹² switching cycles. However, pure SBT suffers from various major limitations such as high dielectric loss, low remnant polarization values, high processing temperature, bismuth volatilization, etc. Significant efforts have been made to improve the dielectric and ferroelectric properties of this compound. Firstly, it has been reported that electrical properties vary with the Sr/Bi content ratio in the SrBi2Ta2O9 composition, i.e., non-stoichiometric compositions with Sr-deficient/Bi-excess content have higher remnant polarization values than stoichiometric SBT compositions. With the objective of improving the structural, dielectric, ferroelectric and piezoelectric properties of the SBT compound, the rare earth holmium (Ho3+) was chosen as a donor cation for substitution onto the Bi2O2 layer. Moreover, hardly any report on holmium substitution in stoichiometric SrBi2Ta2O9 and non-stoichiometric Sr0.8Bi2.2Ta2O9 compositions was available in the literature. The holmium substituted SrBi2-xHoxTa2O9 (x = 0.00-2.0) and Sr0.8Bi2.2Ta2O9 (x = 0.0 and 0.01) compositions were synthesized by the solid state reaction method. The synthesized specimens were characterized for their structural and electrical properties. X-ray diffractograms reveal single phase layered perovskite structure formation for holmium content in stoichiometric SBT samples up to x ≤ 0.1. The granular morphology of the samples was investigated using a scanning electron microscope (Hitachi, S-3700 N). The dielectric measurements were carried out using a precision LCR meter (Agilent 4284A) operating at an oscillation amplitude of 1 V. The variation of dielectric constant with temperature shows that the Curie temperature (Tc) decreases with increasing holmium content. The specimen with x = 2.0, i.e., the bismuth-free specimen, has a very low dielectric constant and does not show any appreciable variation with temperature. The dielectric loss reduces significantly with holmium substitution. The polarization–electric field (P–E) hysteresis loops were recorded using a P–E loop tracer based on a Sawyer–Tower circuit. It is observed that the ferroelectric properties improve with Ho substitution. The holmium substituted specimen exhibits an enhanced remnant polarization (Pr = 9.22 μC/cm²) as compared to the holmium free specimen (Pr = 2.55 μC/cm²). The piezoelectric coefficient (d33) was measured using a piezo meter system (Piezo Test PM300). It is observed that holmium substitution enhances the piezoelectric coefficient. Further, the optimized holmium content (x = 0.01) in the stoichiometric SrBi2-xHoxTa2O9 composition has been substituted in the non-stoichiometric Sr0.8Bi2.2Ta2O9 composition to obtain further enhanced structural and electrical characteristics. It is expected that a new class of ferroelectric materials, i.e., Rare Earth Layered Structured Ferroelectrics (RLSF) derived from Bismuth Layered Structured Ferroelectrics (BLSF), will emerge, which can be used to replace static (SRAM) and dynamic (DRAM) random access memories with ferroelectric random access memories (FeRAMs).
Keywords: dielectrics, ferroelectrics, piezoelectrics, strontium bismuth tantalate
Procedia PDF Downloads 209
44 Transport Hubs as Loci of Multi-Layer Ecosystems of Innovation: Case Study of Airports
Authors: Carolyn Hatch, Laurent Simon
Abstract:
Urban mobility and the transportation industry are undergoing a transformation, shifting from an auto production-consumption model that has dominated since the early 20th century towards new forms of personal and shared multi-modality [1]. This is shaped by key forces such as climate change, which has induced a shift in production and consumption patterns and efforts to decarbonize and improve transport services through, for instance, the integration of vehicle automation, electrification and mobility sharing [2]. Advanced innovation practices and platforms for experimentation and validation of new mobility products and services that are increasingly complex and multi-stakeholder-oriented are shaping this new world of mobility. Transportation hubs – such as airports – are emblematic of these disruptive forces playing out in the mobility industry. Airports are emerging as the core of innovation ecosystems on and around contemporary mobility issues, and are increasingly recognized as complex public/private nodes operating in many societal dimensions [3,4]. These include urban development, sustainability transitions, digital experimentation, customer experience, infrastructure development and data exploitation (for instance, airports generate massive and often untapped data flows, with significant potential for use, commercialization and social benefit). Yet airport innovation practices have not been well documented in the innovation literature. This paper addresses this gap by proposing a model of airport innovation that aims to equip airport stakeholders to respond to these new and complex innovation needs in practice. The methodology involves: 1 – a literature review bringing together key research and theory on airport innovation management, open innovation and innovation ecosystems in order to evaluate airport practices through an innovation lens; 2 – an international benchmarking of leading airports and their innovation practices, including such examples as Aéroports de Paris, Schiphol in Amsterdam, Changi in Singapore, and others; and 3 – semi-structured interviews with airport managers on key aspects of organizational practice, facilitated through a close partnership with the Airports Council International (ACI), a major stakeholder in this research project. Preliminary results find that the most successful airports are those that have shifted to a multi-stakeholder, platform ecosystem model of innovation. The recent entrance of new actors into airports (Google, Amazon, Accor, Vinci, Airbnb and others) has forced the opening of organizational boundaries to share and exchange knowledge with a broader set of ecosystem players. This has also led to new forms of governance and intermediation by airport actors to connect complex, highly distributed knowledge, along with new kinds of inter-organizational collaboration, co-creation and collective ideation processes. Leading airports in the case study have demonstrated a unique capacity to force traditionally siloed activities to “think together”, “explore together” and “act together”, to share data, contribute expertise and pioneer new governance approaches and collaborative practices. In so doing, they have successfully integrated these many disruptive change pathways and forced their implementation and coordination towards innovative mobility outcomes, with positive societal, environmental and economic impacts.
This research has implications for: 1 – innovation theory, 2 – urban and transport policy, and 3 – organizational practice, within the mobility industry and across the economy.
Keywords: airport management, ecosystem, innovation, mobility, platform, transport hubs
Procedia PDF Downloads 181
43 An Autonomous Passive Acoustic System for Detection, Tracking and Classification of Motorboats in Portofino Sea
Authors: A. Casale, J. Alessi, C. N. Bianchi, G. Bozzini, M. Brunoldi, V. Cappanera, P. Corvisiero, G. Fanciulli, D. Grosso, N. Magnoli, A. Mandich, C. Melchiorre, C. Morri, P. Povero, N. Stasi, M. Taiuti, G. Viano, M. Wurtz
Abstract:
This work describes a real-time algorithm for detecting, tracking and classifying single motorboats, developed using the acoustic data recorded by a hydrophone array within the framework of the EU LIFE+ project ARION (LIFE09NAT/IT/000190). The project aims to improve the conservation status of bottlenose dolphins through real-time simultaneous monitoring of their population and surface ship traffic. A Passive Acoustic Monitoring (PAM) system is installed on two autonomous permanent marine buoys, located close to the boundaries of the Marine Protected Area (MPA) of Portofino (Ligurian Sea, Italy). Detecting surface ships is also a necessity in many other sensitive areas, such as wind farms, oil platforms, and harbours. A PAM system could be an effective alternative to the usual monitoring systems, such as radar or active sonar, for localizing unauthorized ship presence or illegal activities, with the advantage of not revealing its presence. Each ARION buoy consists of a particular type of structure, named meda elastica (elastic beacon), composed of a main pole about 30 meters in length, emerging for 7 meters, anchored to a 30-ton mooring at 90 m depth by an anti-twist steel wire. Each buoy is equipped with a floating element and a hydrophone tetrahedron array, whose raw data are sent via a Wi-Fi bridge to a ground station where real-time analysis is performed. The bottlenose dolphin detection algorithm and the ship monitoring algorithm operate in parallel and in real time. Three modules were developed and commissioned for ship monitoring. The first is the detection algorithm, based on Time Difference Of Arrival (TDOA) measurements, i.e., the evaluation of the angular direction of the target with respect to each buoy and triangulation for obtaining the target position. The second is the tracking algorithm, based on a Kalman filter, i.e., the estimate of the real course and speed of the target through a predictor filter. Lastly, the classification algorithm is based on the DEMON method, i.e., the extraction of the acoustic signature of single vessels. The following results were obtained: the detection algorithm succeeded in evaluating the bearing angle with respect to each buoy and the position of the target, with an uncertainty of 2 degrees and a maximum range of 2.5 km. The tracking algorithm succeeded in reconstructing the real vessel courses and estimating the speed with an accuracy of 20% with respect to the Automatic Identification System (AIS) signals. The classification algorithm succeeded in isolating the acoustic signature of single vessels, demonstrating its temporal stability and the consistency of both buoys’ results. As a reference, the results were compared with the Hilbert transform of single channel signals. The algorithm for tracking multiple targets is ready to be developed, thanks to the modularity of the single ship algorithm: the classification module will enumerate and identify all targets present in the study area; for each of them, the detection module and the tracking module will be applied to monitor their course.
Keywords: acoustic-noise, bottlenose-dolphin, hydrophone, motorboat
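The tracking module is based on a Kalman filter fed with the triangulated positions. The sketch below shows a minimal constant-velocity Kalman filter of that kind; the state layout, time step and noise covariances are illustrative assumptions, not the values used in the ARION system.

```python
# Minimal constant-velocity Kalman filter for vessel tracking from
# triangulated positions. Time step and noise covariances are assumed
# values for illustration, not those of the ARION implementation.
import numpy as np

dt = 1.0                       # time step between position fixes [s] (assumed)
F = np.array([[1, 0, dt, 0],   # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # only position is measured (TDOA triangulation)
              [0, 1, 0, 0]], dtype=float)
Q = 0.1 * np.eye(4)            # process noise (assumed)
R = 25.0 * np.eye(2)           # measurement noise, ~5 m std (assumed)

x = np.zeros(4)                # initial state
P = 100.0 * np.eye(4)          # initial uncertainty

def kalman_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the new triangulated position z = [x_meas, y_meas]
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [np.array([10.0, 5.0]), np.array([12.1, 5.9]), np.array([14.0, 7.1])]:
    x, P = kalman_step(x, P, z)
print("estimated position/velocity state:", np.round(x, 2))
```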
Procedia PDF Downloads 173
42 Effective Emergency Response and Disaster Prevention: A Decision Support System for Urban Critical Infrastructure Management
Authors: M. Shahab Uddin, Pennung Warnitchai
Abstract:
Currently more than half of the world’s population lives in cities, and the number and sizes of cities are growing faster than ever. Cities rely on the effective functioning of complex and interdependent critical infrastructure networks to provide public services, enhance the quality of life, and save the community from hazards and disasters. At the same time, complex connectivity and interdependency among the urban critical infrastructures bring management challenges and make the urban system prone to the domino effect. Unplanned rapid growth, increased connectivity and interdependency among the infrastructures, resource scarcity, and many other socio-political factors are affecting the typical state of an urban system and making it susceptible to numerous sorts of disruption. In addition to internal vulnerabilities, urban systems are consistently facing external threats from natural and manmade hazards. Cities are not just complex, interdependent systems, but also make up hubs of the economy, politics, culture, education, etc. For survival and sustainability, complex urban systems in the current world need to manage their vulnerabilities and hazardous incidents more wisely and more interactively. Coordinated management in such systems offers huge potential for absorbing negative effects in case some of their components function improperly. On the other hand, ineffective management in a similar situation of overall disorder caused by hazard devastation may make the system more fragile and push it to an ultimate collapse. Following this logic, the current research hypothesizes that a hazardous event starts its journey as an emergency, and the system’s internal vulnerability and response capacity determine its destination. Connectivity and interdependency among the urban critical infrastructures during this stage may transform its vulnerabilities into a dynamic damaging force. An emergency may turn into a disaster in the absence of effective management; similarly, mismanagement or lack of management may lead the situation towards a catastrophe. Situation awareness and factual decision-making are the keys to winning a battle. The current research proposed a contextual decision support system for an urban critical infrastructure system, integrating three different models: 1) a damage cascade model, which demonstrates damage propagation among the infrastructures through their connectivity and interdependency, 2) a restoration model, a dynamic restoration process of individual infrastructures based on the facility damage state and overall disruptions in the surrounding support environment, and 3) an optimization model that ensures optimized utilization and distribution of available resources in and among the facilities. All three models are tightly connected, mutually interdependent, and together can assess the situation and forecast the dynamic outputs of every input. Moreover, this integrated model will support disaster managers and decision makers in checking all the alternative decisions before any implementation, and help produce the maximum possible outputs from the available limited inputs. This proposed model will not only help reduce the extent of the damage cascade but will also ensure priority restoration and optimize resource utilization through adaptive and collaborative management. Complex systems predictably fail, but in unpredictable ways.
System understanding, situation awareness, and factual decisions may significantly help an urban system survive and sustain itself.
Keywords: disaster prevention, decision support system, emergency response, urban critical infrastructure system
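As a schematic illustration of the first of the three models above (damage cascade across interdependent infrastructures), the sketch below propagates failures over a small dependency graph. The facility names, dependency links and failure rule are hypothetical, not taken from the authors' model.

```python
# Hypothetical sketch of damage propagation over an infrastructure
# interdependency graph. Node names, dependency links and the failure rule
# are illustrative, not the authors' actual cascade model.
dependencies = {                    # "facility depends on these others"
    "water_supply": ["power_grid"],
    "hospital": ["power_grid", "water_supply"],
    "telecom": ["power_grid"],
    "traffic_control": ["telecom", "power_grid"],
}

def cascade(initially_damaged):
    """Return the set of facilities that end up failed, assuming a facility
    fails as soon as any of its critical dependencies has failed."""
    failed = set(initially_damaged)
    changed = True
    while changed:
        changed = False
        for facility, needs in dependencies.items():
            if facility not in failed and any(n in failed for n in needs):
                failed.add(facility)
                changed = True
    return failed

print(cascade({"power_grid"}))
# a power grid outage cascades to water supply, hospital, telecom and traffic control
```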
Procedia PDF Downloads 227
41 In Vitro Intestine Tissue Model to Study the Impact of Plastic Particles
Authors: Ashleigh Williams
Abstract:
Micro- and nanoplastics’ (MNLPs) omnipresence and ecological accumulation are evident when surveying recent environmental impact studies. For example, in 2014 it was estimated that at least 52.3 trillion plastic microparticles are floating at sea, and scientists have even found plastics present in remote Arctic ice and snow (5,6). Plastics have even found their way into precipitation, with more than 1000 tons of microplastic precipitating as rain onto the Western United States in 2020. Even more recent studies evaluating the chemical safety of reusable plastic bottles found that hundreds of chemicals leached into the control liquid in the bottle (ddH2O, pH = 7) during a 24-hour time period. A consequence of the increased abundance of plastic waste in the air, land, and water every year is the bioaccumulation of MNLPs in ecosystems and trophic niches of the animal food chain, which could potentially cause increased direct and indirect exposure of humans to MNLPs via inhalation, ingestion, and dermal contact. Though the detrimental, toxic effects of MNLPs have been established in marine biota, much less is known about the potentially hazardous health effects of chronic MNLP ingestion in humans. Recent data indicate that long-term exposure to MNLPs could cause possible inflammatory and dysbiotic effects. However, toxicity seems to be largely dose- as well as size-dependent. In addition, the transcytotic uptake of MNLPs through the intestinal epithelia in humans remains relatively unknown. To this end, the goal of the current study was to investigate the mechanisms of micro- and nanoplastic uptake and transcytosis of polystyrene (PS) in human stem-cell derived, physiologically relevant in vitro intestinal model systems, and to compare the relative effect of particle size (30 nm, 100 nm, 500 nm and 1 µm) and concentration (0 µg/mL, 250 µg/mL, 500 µg/mL, 1000 µg/mL) on polystyrene MNLP uptake, transcytosis and intestinal epithelial model integrity. Observational and quantitative data obtained from confocal microscopy, immunostaining, transepithelial electrical resistance (TEER) measurements, cryosectioning, and ELISA cytokine assays of the proinflammatory cytokines Interleukin-6 and Interleukin-8 were used to evaluate the localization and transcytosis of polystyrene MNPs and their impact on epithelial integrity in human-derived intestinal in vitro model systems. The effect of Microfold (M) cell induction on polystyrene micro- and nanoparticle (MNP) uptake, transcytosis, and potential inflammation was also assessed and compared to samples grown under standard conditions. Microfold (M) cells link the human intestinal system to the immune system and are the primary cells in the epithelium responsible for sampling and transporting foreign matter of interest from the lumen of the gut to underlying immune cells. Given the capability of Microfold cells to interact both specifically and nonspecifically with abiotic and biotic materials, it was expected that M-cell-induced in vitro samples would have increased binding, localization, and potentially transcytosis of polystyrene MNLPs across the epithelial barrier. The experimental results of this study would not only help in the evaluation of plastic toxicity, but would also allow for more detailed modeling of gut inflammation and the intestinal immune system.
Keywords: nanoplastics, enteroids, intestinal barrier, tissue engineering, microfold (M) cells
Procedia PDF Downloads 85
40 Probability Modeling and Genetic Algorithms in Small Wind Turbine Design Optimization: Mentored Interdisciplinary Undergraduate Research at LaGuardia Community College
Authors: Marina Nechayeva, Malgorzata Marciniak, Vladimir Przhebelskiy, A. Dragutan, S. Lamichhane, S. Oikawa
Abstract:
This presentation is a progress report on a faculty-student research collaboration at CUNY LaGuardia Community College (LaGCC) aimed at designing a small horizontal axis wind turbine optimized for the wind patterns on the roof of our campus. Our project combines statistical and engineering research. Our wind modeling protocol is based upon a recent wind study by a faculty-student research group at MIT, and some of our blade design methods are adopted from a senior engineering project at CUNY City College. Our use of genetic algorithms has been inspired by the work on small wind turbine design by David Wood. We combine these diverse approaches in our interdisciplinary project in a way that has not been done before and improve upon certain techniques used by our predecessors. We employ several estimation methods to determine the best-fitting parametric probability distribution model for the local wind speed data obtained through correlating short-term on-site measurements with a long-term time series at the nearby airport. The model serves as a foundation for engineering research that focuses on adapting and implementing genetic algorithms (GAs) to engineering optimization of the wind turbine design using Blade Element Momentum Theory. GAs are used to create new airfoils with desirable aerodynamic specifications. Small-scale models of the best performing designs are 3D printed and tested in the wind tunnel to verify the accuracy of relevant calculations. Genetic algorithms are applied to selected airfoils to determine the blade design (radial chord and pitch distribution) that would optimize the coefficient of power profile of the turbine. Our approach improves upon the traditional blade design methods in that it lets us dispense with assumptions necessary to simplify the system of Blade Element Momentum Theory equations, thus resulting in more accurate aerodynamic performance calculations. Furthermore, it enables us to design blades optimized for a whole range of wind speeds rather than a single value. Lastly, we improve upon known GA-based methods in that our algorithms are constructed to work with XFoil-generated airfoil data, which enables us to optimize blades using our own high glide ratio airfoil designs, without having to rely upon available empirical data from existing airfoils, such as the NACA series. Beyond its immediate goal, this ongoing project serves as a training and selection platform for the CUNY Research Scholars Program (CRSP) through its annual Aerodynamics and Wind Energy Research Seminar (AWERS), an undergraduate summer research boot camp designed to introduce prospective researchers to the relevant theoretical background and methodology, get them up to speed with the current state of our research, and test their abilities and commitment to the program. Furthermore, several aspects of the research (e.g., writing code for 3D printing of airfoils) are adapted in the form of classroom research activities to enhance Calculus sequence instruction at LaGCC.
Keywords: engineering design optimization, genetic algorithms, horizontal axis wind turbine, wind modeling
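The statistical half of the project fits a parametric probability distribution to the correlated wind-speed record. The abstract does not name the distribution family; the sketch below fits a two-parameter Weibull model, a common choice for wind speeds, to synthetic data purely as an illustration of that step.

```python
# Hedged sketch: fitting a parametric distribution to wind-speed data. The
# distribution family is not named in the abstract; a two-parameter Weibull
# model is a common choice for wind speeds and synthetic data stand in for
# the on-site measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wind_speeds = rng.weibull(2.0, size=1000) * 6.0      # synthetic stand-in record [m/s]

shape, loc, scale = stats.weibull_min.fit(wind_speeds, floc=0)  # location fixed at zero
print(f"Weibull shape k = {shape:.2f}, scale c = {scale:.2f} m/s")

# Candidate distributions can then be compared, e.g. with a Kolmogorov-Smirnov test.
ks = stats.kstest(wind_speeds, "weibull_min", args=(shape, loc, scale))
print(f"KS statistic = {ks.statistic:.3f}")
```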
Procedia PDF Downloads 231
39 Towards an Effective Approach for Modelling near Surface Air Temperature Combining Weather and Satellite Data
Authors: Nicola Colaninno, Eugenio Morello
Abstract:
The urban environment affects local-to-global climate and, in turn, suffers global warming phenomena, with worrying impacts on human well-being, health, and social and economic activities. Physical and morphological features of the built-up space affect urban air temperature locally, causing the urban environment to be warmer compared to the surrounding rural areas. This occurrence, typically known as the Urban Heat Island (UHI), is normally assessed by means of air temperature from fixed weather stations and/or traverse observations, or based on remotely sensed Land Surface Temperatures (LST). The information provided by ground weather stations is key for assessing local air temperature. However, the spatial coverage is normally limited due to the low density and uneven distribution of the stations. Although different interpolation techniques such as Inverse Distance Weighting (IDW), Ordinary Kriging (OK), or Multiple Linear Regression (MLR) are used to estimate air temperature from observed points, such an approach may not effectively reflect the real climatic conditions of an interpolated point. Quantifying local UHI for extensive areas based on weather stations’ observations only is not practicable. Alternatively, the use of thermal remote sensing has been widely investigated based on LST. Data from Landsat, ASTER, or MODIS have been extensively used. Indeed, LST has an indirect but significant influence on air temperatures. However, high-resolution near-surface air temperature (NSAT) is currently difficult to retrieve. Here we have experimented with Geographically Weighted Regression (GWR) as an effective approach to enable NSAT estimation by accounting for the spatial non-stationarity of the phenomenon. The model combines on-site measurements of air temperature from fixed weather stations and satellite-derived LST. The approach is structured upon two main steps. First, a GWR model has been set up to estimate NSAT at low resolution, by combining air temperature from discrete observations retrieved by weather stations (dependent variable) and the LST from satellite observations (predictor). At this step, MODIS data from the Terra satellite, at 1 kilometer of spatial resolution, have been employed. Two time periods are considered according to the satellite revisit period, i.e., 10:30 am and 9:30 pm. Afterward, the results have been downscaled to 30 meters of spatial resolution by setting up a GWR model between the previously retrieved near-surface air temperature (dependent variable), the multispectral information provided by the Landsat mission, in particular the albedo, and the Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 meters. Albedo and DEM are now the predictors. The area under investigation is the Metropolitan City of Milan, which covers an area of approximately 1,575 km² and encompasses a population of over 3 million inhabitants. Both models, low- (1 km) and high-resolution (30 meters), have been validated according to a cross-validation that relies on indicators such as R², Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). All the employed indicators give evidence of highly efficient models. In addition, an alternative network of weather stations, available for the City of Milano only, has been employed for testing the accuracy of the predicted temperatures, giving an RMSE of 0.6 and 0.7 for daytime and night-time, respectively.
Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing
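To make the first step concrete, the sketch below shows a minimal geographically weighted regression in plain numpy: air temperature at a handful of stations is regressed on LST with a Gaussian distance kernel, so the coefficients vary with location. The coordinates, values and bandwidth are synthetic placeholders, not the Milan dataset or the authors' calibration.

```python
# Minimal numpy sketch of geographically weighted regression (GWR): air
# temperature at weather stations regressed on LST with a Gaussian distance
# kernel, so coefficients vary in space. All values are synthetic placeholders.
import numpy as np

coords = np.array([[0, 0], [5, 2], [10, 8], [3, 9], [7, 4]], dtype=float)  # station x, y [km]
t_air = np.array([24.1, 25.0, 26.3, 23.8, 25.6])   # observed air temperature [degC]
lst = np.array([30.2, 31.5, 33.0, 29.8, 32.1])     # satellite LST at the stations [degC]
X = np.column_stack([np.ones_like(lst), lst])      # intercept + LST predictor

def local_coefficients(target_xy, bandwidth=4.0):
    """Weighted least squares fitted around one target location."""
    d = np.linalg.norm(coords - target_xy, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)        # Gaussian kernel weights
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ t_air)

beta = local_coefficients(np.array([6.0, 5.0]))
nsat_estimate = beta[0] + beta[1] * 31.0           # NSAT for a pixel with LST = 31 degC
print(f"local coefficients = {np.round(beta, 2)}, estimated NSAT = {nsat_estimate:.1f} degC")
```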
Procedia PDF Downloads 194
38 Effect of Thermal Treatment on Mechanical Properties of Reduced Activation Ferritic/Martensitic Eurofer Steel Grade
Authors: Athina Puype, Lorenzo Malerba, Nico De Wispelaere, Roumen Petrov, Jilt Sietsma
Abstract:
Reduced activation ferritic/martensitic (RAFM) steels like EUROFER97 are primary candidate structural materials for first wall application in the future demonstration (DEMO) fusion reactor. Existing steels of this type obtain their functional properties by a two-stage heat treatment, which consists of an annealing stage at 980°C for thirty minutes followed by quenching and an additional tempering stage at 750°C for two hours. This thermal quench and temper (Q&T) treatment creates a microstructure of tempered martensite with M23C6 carbides (M = Fe, Cr) as the main precipitates, together with carbonitrides of MX type, e.g., TaC and VN. The resulting microstructure determines the mechanical properties of the steel. The ductility is largely determined by the tempered martensite matrix, while the resistance to mechanical degradation, determined by the spatial and size distribution of precipitates and the martensite crystals, plays a key role in the high temperature properties of the steel. Unfortunately, the high temperature response of EUROFER97 is currently insufficient for long term use in fusion reactors, due to instability of the matrix phase and coarsening of the precipitates during prolonged high temperature exposure. The objective of this study is to induce grain refinement by appropriate modifications of the processing route in order to increase the high temperature strength of a lab-cast EUROFER RAFM steel grade. The goal of the work is to obtain improved mechanical behavior at elevated temperatures with respect to conventionally heat treated EUROFER97. A dilatometric study was conducted to study the effect of the annealing temperature on the mechanical properties after a Q&T treatment. The microstructural features were investigated with scanning electron microscopy (SEM), electron back-scattered diffraction (EBSD) and transmission electron microscopy (TEM). Additionally, hardness measurements, tensile tests at elevated temperatures and Charpy V-notch impact testing of KLST-type MCVN specimens were performed to study the mechanical properties of the furnace-heated lab-cast EUROFER RAFM steel grade. A significant prior austenite grain (PAG) refinement was obtained by lowering the annealing temperature of the conventionally used Q&T treatment for EUROFER97. The reduction of the PAG size results in finer martensitic constituents upon quenching, which offers more nucleation sites for carbide and carbonitride formation upon tempering. The ductile-to-brittle transition temperature (DBTT) was found to decrease with decreasing martensitic block size. Additionally, an increased resistance against high temperature degradation was accomplished in the fine grained martensitic materials with the smallest precipitates, obtained by tailoring the annealing temperature of the Q&T treatment. It is concluded that the microstructural refinement has a pronounced effect on the DBTT without significant loss of strength and ductility. Further investigation into the optimization of the processing route is recommended to improve the mechanical behavior of RAFM steels at elevated temperatures.
Keywords: ductile-to-brittle transition temperature (DBTT), EUROFER, reduced activation ferritic/martensitic (RAFM) steels, thermal treatments
Procedia PDF Downloads 299
37 Critiquing Israel as Child Abuse: How Colonial White Feminism Disrupts Critical Pedagogies of Culturally Responsive and Relevant Practices and Inclusion through Ongoing and Historical Maternalism and Neoliberal Settler Colonialism
Authors: Wafaa Hasan
Abstract:
In May of 2022, Palestinian parents in Toronto, Canada, became aware that educators and staff in the Toronto District School Board (TDSB), the largest school board in Canada, were attempting to include the International Holocaust Remembrance Alliance (IHRA) definition of antisemitism in the board's Child Abuse and Neglect Policy. The idea was that if students were to express any form of antisemitism, as defined by the IHRA, then an investigation could follow with Child Protective Services (CPS). That is, the student's parents could be reported to the state and investigated for custodial rights to their children. The TDSB has set apparent goals for “Decolonizing Pedagogy” (“TDSB Equity Leadership Competencies”), Culturally Responsive and Relevant Practices (CRRP) and inclusive education. These goals promote the centering of colonized, racialized and marginalized voices. CRRP cannot be effective without the application of anti-racist and settler colonial analyses. In order for CRRP to be effective, school boards need a comprehensive understanding of the ways in which the vilification of Palestinians operates through anti-indigenous and white supremacist systems and logic. Otherwise, their inclusion will always be in tension with the inclusion of settler colonial agendas and worldviews. Feminist maternalism frames racial mothering as degenerate (viewing the contributions of racialized students and their parents as products of primitive and violent cultures) and also indirectly inhibits the actualization of the tenets of CRRP and inclusive education through its extensions into the welfare state and public education. The contradiction between the tenets of CRRP and settler colonial systems of erasure and repression is resolved by the continuation of tactics to 1) force assimilation, 2) punish those who push back on that assimilation and 3) literally fragment familial and community structures of racialized students, educators and parents. This paper draws on interdisciplinary (history, philosophy, anthropology) critiques of white feminist “maternalism” from the 19th century onwards in North America and Europe (Jacobs, Weber), as well as “anti-racist education” theory (Dei) and, more specifically, “culturally responsive learning” (Muhammad) and “bandwidth” pedagogy theory (Verschelden), to make its claims. This research contributes to vibrant debates about anti-racist and decolonial pedagogies in public education systems globally. This paper also documents first-hand interviews and experiences of diasporic Palestinian mothers and motherhoods and situates their experiences within longstanding histories of white feminist maternalist (and eugenicist) politics. This informal qualitative data from “participatory conversations” (Swain) is situated within a set of formal interview data collected with Palestinian women in the West Bank (approved by the McMaster University Humanities Research Ethics Board) relating to white feminist maternalism in the peace and dialogue industry.
Keywords: decolonial feminism, maternal feminism, anti-racist pedagogies, settler colonial studies, motherhood studies, pedagogy theory, cultural theory
Procedia PDF Downloads 73
36 Detection of Mustard Traces in Food by an Official Food Safety Laboratory
Authors: Clara Tramuta, Lucia Decastelli, Elisa Barcucci, Sandra Fragassi, Samantha Lupi, Enrico Arletti, Melissa Bizzarri, Daniela Manila Bianchi
Abstract:
Introduction: Food allergies affect, in the Western world, 2% of adults and up to 8% of children. The protection of allergic consumers is guaranteed, in Europe, by Regulation (EU) No 1169/2011 of the European Parliament, which governs the consumer's right to information and identifies 14 food allergens that must be indicated on the label. Among these, mustard is a popular spice added to enhance the flavour and taste of foods. It is frequently present as an ingredient in spice blends, marinades, salad dressings, sausages, and other products. Hypersensitivity to mustard is a public health problem since the ingestion of even low amounts can trigger severe allergic reactions. In order to protect the allergic consumer, high performance methods are required for the detection of allergenic ingredients. Food safety laboratories rely on validated methods that detect hidden allergens in food to ensure the safety and health of allergic consumers. Here we present the test results for the validation and accreditation of a Real Time PCR assay (RT-PCR: SPECIALfinder MC Mustard, Generon) for the detection of mustard traces in food. Materials and Methods. The method was tested on five classes of food matrices: bakery and pastry products (chocolate cookies), meats (ragù), ready-to-eat (mixed salad), dairy products (yogurt), and grains and milling products (rice and barley flour). Blank samples were spiked starting with the mustard samples (Sinapis alba), lyophilized and stored at -18 °C, at a concentration of 1000 ppm. Serial dilutions were then prepared to a final concentration of 0.5 ppm, using the DNA extracted by ION Force FAST (Generon) from the blank samples. The Real Time PCR reaction was performed with RT-PCR SPECIALfinder MC Mustard (Generon), using the CFX96 System (BioRad). Results. Real Time PCR showed a limit of detection (LOD) of 0.5 ppm in grains and milling products, ready-to-eat products, meats, bakery and pastry products, and dairy products (Ct range 25-34). To determine the exclusivity parameter of the method, the ragù matrix was contaminated with Prunus dulcis (almonds), peanut (Arachis hypogaea), Glycine max (soy), Apium graveolens (celery), Allium cepa (onion), Pisum sativum (peas), Daucus carota (carrots), and Theobroma cacao (cocoa), and no cross-reactions were observed. Discussion. In terms of sensitivity, the Real Time PCR confirmed, even in complex matrices, a LOD of 0.5 ppm in the five classes of food matrices tested; these values are compatible with the current regulatory situation, which does not consider, at the international level, establishing a quantitative criterion for the allergen considered in this study. The Real Time PCR SPECIALfinder kit for the detection of mustard proved to be easy to use and was particularly appreciated for its rapid response times, considering that the amplification and detection phase lasts less than 50 minutes. Method accuracy was rated satisfactory for sensitivity (100%) and specificity (100%), and the method was fully validated and accredited. It was found adequate for the needs of the laboratory as it met the purpose for which it was applied. This study was funded in part within a project of the Italian Ministry of Health (IZS PLV 02/19 RC).
Keywords: allergens, food, mustard, real time PCR
Procedia PDF Downloads 166
35 High Pressure Thermophysical Properties of Complex Mixtures Relevant to Liquefied Natural Gas (LNG) Processing
Authors: Saif Al Ghafri, Thomas Hughes, Armand Karimi, Kumarini Seneviratne, Jordan Oakley, Michael Johns, Eric F. May
Abstract:
Knowledge of the thermophysical properties of complex mixtures at extreme conditions of pressure and temperature has always been essential to the Liquefied Natural Gas (LNG) industry’s evolution because of the tremendous technical challenges present at all stages in the supply chain from production to liquefaction to transport. Each stage is designed using predictions of the mixture’s properties, such as density, viscosity, surface tension, heat capacity and phase behaviour as a function of temperature, pressure, and composition. Unfortunately, currently available models lead to equipment over-designs of 15% or more. To achieve better designs that work more effectively and/or over a wider range of conditions, new fundamental property data are essential, both to resolve discrepancies in our current predictive capabilities and to extend them to the higher-pressure conditions characteristic of many new gas fields. Furthermore, innovative experimental techniques are required to measure different thermophysical properties at high pressures and over a wide range of temperatures, including near the mixture’s critical points, where gas and liquid become indistinguishable and most existing predictive fluid property models break down. In this work, we present a wide range of experimental measurements made for different binary and ternary mixtures relevant to LNG processing, with a particular focus on viscosity, surface tension, heat capacity, bubble-points and density. For this purpose, customized and specialized apparatus were designed and validated over the temperature range (200 to 423) K at pressures to 35 MPa. The mixtures studied were (CH4 + C3H8), (CH4 + C3H8 + CO2) and (CH4 + C3H8 + C7H16); in the last of these the heptane content was up to 10 mol %. Viscosity was measured using a vibrating wire apparatus, while mixture densities were obtained by means of a high-pressure magnetic-suspension densimeter and an isochoric cell apparatus; the latter was also used to determine bubble-points. Surface tensions were measured using the capillary rise method in a visual cell, which also enabled the location of the mixture critical point to be determined from observations of critical opalescence. Mixture heat capacities were measured using a customised high-pressure differential scanning calorimeter (DSC). The combined standard relative uncertainties were less than 0.3% for density, 2% for viscosity, 3% for heat capacity and 3% for surface tension. The extensive experimental data gathered in this work were compared with a variety of different advanced engineering models frequently used for predicting thermophysical properties of mixtures relevant to LNG processing. In many cases the discrepancies between the predictions of different engineering models for these mixtures were large, and the high quality data allowed erroneous but often widely-used models to be identified. The data enable the development of new or improved models, to be implemented in process simulation software, so that the fluid properties needed for equipment and process design can be predicted reliably. This in turn will enable reduced capital and operational expenditure by the LNG industry. The current work also aided the community of scientists working to advance theoretical descriptions of fluid properties by making it possible to identify deficiencies in theoretical descriptions and calculations.
Keywords: LNG, thermophysical, viscosity, density, surface tension, heat capacity, bubble points, models
Procedia PDF Downloads 274
34 Integrating the Modbus SCADA Communication Protocol with Elliptic Curve Cryptography
Authors: Despoina Chochtoula, Aristidis Ilias, Yannis Stamatiou
Abstract:
Modbus is a protocol that enables communication among devices connected to the same network. This protocol is often deployed to connect sensor and monitoring units to central supervisory servers in Supervisory Control and Data Acquisition, or SCADA, systems. These systems monitor critical infrastructures, such as factories, power generation stations, and nuclear power reactors, in order to detect malfunctions and trigger alerts and corrective actions. However, due to their criticality, SCADA systems are vulnerable to attacks that range from simple eavesdropping on operation parameters, exchanged messages, and valuable infrastructure information to malicious modification of vital infrastructure data towards infliction of damage. Thus, the SCADA research community has been active in strengthening SCADA systems with suitable data protection mechanisms based, to a large extent, on cryptographic methods for data encryption, device authentication, and message integrity protection. However, due to the limited computation power of many SCADA sensor and embedded devices, the usual public key cryptographic methods are not appropriate because of their high computational requirements. As an alternative, Elliptic Curve Cryptography has been proposed, which requires smaller key sizes and, thus, less demanding cryptographic operations. Until now, however, no such implementation has been proposed in the SCADA literature, to the best of our knowledge. In order to fill this gap, our methodology focused on integrating Modbus, a frequently used SCADA communication protocol, with Elliptic Curve based cryptography and on developing a server/client application as a proof of concept. For the implementation we deployed two C language libraries, which were suitably modified in order to be successfully integrated: libmodbus (https://github.com/stephane/libmodbus) and ecc-lib (https://www.ceid.upatras.gr/webpages/faculty/zaro/software/ecc-lib/). The first library provides a C implementation of the Modbus/TCP protocol, while the second one offers the functionality to develop cryptographic protocols based on Elliptic Curve Cryptography. These two libraries were combined, after suitable modifications and enhancements, to give a modified version of the Modbus/TCP protocol focusing on the security of the data exchanged among the devices and the supervisory servers. The mechanisms we implemented include key generation, key exchange/sharing, message authentication, data integrity checking, and encryption/decryption of data. The key generation and key exchange protocols were implemented with the use of Elliptic Curve Cryptography primitives. The keys established by each device are saved in its local memory, retained for the whole communication session, and used to encrypt and decrypt exchanged messages as well as to authenticate entities and verify the integrity of the messages. Finally, the modified library was compiled for the Android environment in order to run the server application as an Android app. The client program runs on a regular computer. The communication between these two entities is an example of the successful establishment of an Elliptic Curve Cryptography based, secure Modbus wireless communication session between a portable device acting as a supervisor station and a monitoring computer.
Our first performance measurements are also very promising and demonstrate the feasibility of embedding Elliptic Curve Cryptography into SCADA systems, filling in a gap in the relevant scientific literature.Keywords: elliptic curve cryptography, ICT security, modbus protocol, SCADA, TCP/IP protocol
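The following sketch illustrates the general idea of an Elliptic Curve Diffie-Hellman key agreement followed by authenticated encryption of a Modbus-style payload. It uses the Python 'cryptography' package rather than the authors' modified libmodbus/ecc-lib C code, and the curve, key-derivation parameters and example request are illustrative assumptions.

```python
# Conceptual sketch (not the authors' libmodbus/ecc-lib implementation): ECDH key
# agreement between a supervisory server and a client, then AES-GCM protection of a
# Modbus-style request for confidentiality and integrity.
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each side generates an elliptic-curve key pair and exchanges public keys.
server_key = ec.generate_private_key(ec.SECP256R1())
client_key = ec.generate_private_key(ec.SECP256R1())

def session_key(own_private, peer_public):
    """Derive a shared 256-bit session key from the ECDH shared secret."""
    shared = own_private.exchange(ec.ECDH(), peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"modbus-session").derive(shared)

k_server = session_key(server_key, client_key.public_key())
k_client = session_key(client_key, server_key.public_key())
assert k_server == k_client          # both ends derive the same session key

# Encrypt a hypothetical Modbus request (unit 1, function 0x03, read two registers).
nonce = os.urandom(12)
request = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x02])
ciphertext = AESGCM(k_client).encrypt(nonce, request, b"unit-1")   # confidentiality + integrity
plaintext = AESGCM(k_server).decrypt(nonce, ciphertext, b"unit-1")
assert plaintext == request
```

In this sketch the derived session key plays the role of the keys kept in each device's local memory for the duration of the communication session.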
Procedia PDF Downloads 270
33 Will My Home Remain My Castle? Tenants’ Interview Topics regarding an Eco-Friendly Refurbishment Strategy in a Neighborhood in Germany
Authors: Karin Schakib-Ekbatan, Annette Roser
Abstract:
According to the Federal Government’s plans, the German building stock should be virtually climate neutral by 2050. Thus, the “EnEff.Gebäude.2050” funding initiative was launched, complementing the projects of the Energy Transition Construction research initiative. Beyond the construction and renovation of individual buildings, solutions must be found at the neighborhood level. The subject of the presented pilot project is a building ensemble from the Wilhelminian period in Munich, which is planned to be refurbished based on a socially compatible, energy-saving, innovative-technical modernization concept. The building ensemble, with about 200 apartments, is part of a building cooperative. To create an optimized network and possible synergies between researchers and projects of the funding initiative, a Scientific Accompanying Research was established for cross-project analyses of findings and results in order to identify further research needs and trends. Thus, the project is characterized by an interdisciplinary approach that combines constructional, technical, and socio-scientific expertise based on a participatory understanding of research, involving the tenants at an early stage. The research focus is on gaining insights into the tenants’ comfort requirements, attitudes, and energy-related behaviour. Both qualitative and quantitative methods are applied based on the Technology Acceptance Model (TAM). The core of the refurbishment strategy is a wall heating system intended to replace conventional radiators. A wall heating system provides comfortable and consistent radiant heat instead of convection heat, which often causes drafts and dust turbulence. Besides comfort and health, the advantage of wall heating systems is energy-saving operation. All apartments would be supplied by a uniform basic temperature control system (a perceived room temperature of around 18 °C, i.e. 64.4 °F), which could be adapted to individual preferences via individual heating options (e.g., infrared heating). The new heating system would affect the furnishing of the walls, since the wall surface could not be covered too extensively with cupboards or pictures. Measurements and simulations of the energy consumption of an installed wall heating system are currently being carried out in a show apartment in this neighborhood to investigate energy-related and economic aspects as well as thermal comfort. In March, interviews were conducted with a total of 12 people in 10 households. The interviews were analyzed with MAXQDA. The main issue raised in the interviews was the fear of reduced self-efficacy within their own walls (not having sufficient individual control over the room temperature or being very limited in furnishing). Other issues concerned the impact that the construction works might have on daily life, such as noise or dirt. Despite their basically positive attitude towards a climate-friendly refurbishment concept, tenants were very concerned about the further development of the project and expressed a great need for information events. The results of the interviews will be used for project-internal discussions on technical and psychological aspects of the refurbishment strategy in order to design accompanying workshops with the tenants as well as to prepare a written survey involving all households of the neighbourhood.Keywords: energy efficiency, interviews, participation, refurbishment, residential buildings
Procedia PDF Downloads 126
32 Numerical Modeling of Phase Change Materials Walls under Reunion Island's Tropical Weather
Authors: Lionel Trovalet, Lisa Liu, Dimitri Bigot, Nadia Hammami, Jean-Pierre Habas, Bruno Malet-Damour
Abstract:
The MCP-iBAT¹ project is carried out to study the behavior of Phase Change Materials (PCM) integrated in building envelopes in a tropical environment. Through the phase transitions (melting and freezing) of the material, thermal energy can be absorbed or released. This process enables the regulation of indoor temperatures and the improvement of thermal comfort for the occupants. Most of the commercially available PCMs are more suitable to temperate climates than to tropical climates. The case of Reunion Island is noteworthy as there are multiple micro-climates. This leads to our key question: how to develop one or multiple bio-based PCMs that cover the thermal needs of the different locations of the island. The present paper focuses on the numerical approach to select the PCM properties relevant to tropical areas. Numerical simulations have been carried out with two software tools: EnergyPlus™ and Isolab. The latter has been developed in the laboratory, with the implicit Finite Difference Method, in order to evaluate different physical models. Both are Thermal Dynamic Simulation (TDS) tools that predict the building’s thermal behavior with one-dimensional heat transfers. The parameters used in this study are the construction’s characteristics (dimensions and materials) and the environment’s description (meteorological data and building surroundings). The building is modeled in accordance with the experimental setup. It is divided into two rooms, cells A and B, with the same dimensions. Cell A is the reference, while in cell B, a layer of commercial PCM (Thermo Confort of MCI Technologies) has been applied to the inner surface of the North wall. Sensors are installed in each room to retrieve temperatures, heat flows, and humidity rates. The collected data are used for the comparison with the numerical results. Our strategy is to set up two similar buildings at different altitudes (Saint-Pierre: 70 m and Le Tampon: 520 m) to measure different temperature ranges. Therefore, we are able to collect data for various seasons during a condensed time period. The following methodology is used to validate the numerical models: calibration of the thermal and PCM models in EnergyPlus™ and Isolab based on experimental measures, then numerical testing with a sensitivity analysis of the parameters to reach the targeted indoor temperatures. The calibration relies on the past ten months’ measurements (from September 2020 to June 2021), with a focus on a one-week study in November (beginning of summer), when the effect of PCM on inner surface temperatures is more visible. A first simulation with the PCM model of EnergyPlus gave results approaching the measurements with a mean error of 5%. The property studied in this paper is the melting temperature of the PCM. By determining the representative temperatures of winter, summer and inter-seasons with past annual weather data, it is possible to build a numerical model of multi-layered PCM. Hence, the combined properties of the materials will provide an optimal scenario for the application of PCM in tropical areas. Future works will focus on the development of bio-based PCMs with the selected properties, followed by experimental and numerical validation of the materials. ¹Matériaux à Changement de Phase, une innovation pour le Bâti Tropical.Keywords: energyplus, multi-layer of PCM, phase changing materials, tropical area
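A hedged sketch of the kind of one-dimensional finite-difference wall model referred to above is given here, using an apparent-heat-capacity treatment of the PCM layer; the material properties, melting temperature and layer geometry are illustrative assumptions rather than the MCP-iBAT parameters.

```python
# Hedged sketch of a 1-D finite-difference wall with a PCM layer (apparent heat
# capacity method). Properties, melting point and layer sizes are assumed values.
import numpy as np

nx, L = 50, 0.20                    # nodes, wall thickness [m]
dx, dt = L / (nx - 1), 1.0          # grid spacing [m], time step [s]
k = np.full(nx, 0.8)                # thermal conductivity [W/m K]
rho = np.full(nx, 1600.0)           # density [kg/m3]
cp_base, latent, t_melt, dt_melt = 1000.0, 180e3, 26.0, 2.0   # assumed PCM data
pcm = slice(30, 40)                 # PCM layer occupies part of the wall

def apparent_cp(temp):
    """Base heat capacity plus a Gaussian peak spreading the latent heat over dt_melt."""
    cp = np.full_like(temp, cp_base)
    peak = latent / (dt_melt * np.sqrt(np.pi)) * np.exp(-((temp[pcm] - t_melt) / dt_melt) ** 2)
    cp[pcm] += peak
    return cp

T = np.full(nx, 24.0)               # initial temperature [deg C]
T_out, T_in = 32.0, 25.0            # outdoor / indoor boundary temperatures
for step in range(int(6 * 3600 / dt)):                 # simulate six hours
    T[0], T[-1] = T_out, T_in
    alpha = k / (rho * apparent_cp(T))
    T[1:-1] += alpha[1:-1] * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
print("wall temperature profile [deg C]:", np.round(T, 2))
```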
Procedia PDF Downloads 95
31 Understanding the Impact of Resilience Training on Cognitive Performance in Military Personnel
Authors: Haji Mohammad Zulfan Farhi Bin Haji Sulaini, Mohammad Azeezudde’en Bin Mohd Ismaon
Abstract:
The demands placed on military athletes extend beyond physical prowess to encompass cognitive resilience in high-stress environments. This study investigates the effects of resilience training on the cognitive performance of military athletes, shedding light on the potential benefits and implications for optimizing their overall readiness. In a rapidly evolving global landscape, armed forces worldwide are recognizing the importance of cognitive resilience alongside physical fitness. The study employs a mixed-methods approach, incorporating quantitative cognitive assessments and qualitative data from military athletes undergoing resilience training programs. Cognitive performance is evaluated through a battery of tests, including measures of memory, attention, decision-making, and reaction time. The participants, drawn from various branches of the military, are divided into experimental and control groups. The experimental group undergoes a comprehensive resilience training program, while the control group receives traditional physical training without a specific focus on resilience. The initial findings indicate a substantial improvement in cognitive performance among military athletes who have undergone resilience training. These improvements are particularly evident in domains such as attention and decision-making. The experimental group demonstrated enhanced situational awareness, quicker problem-solving abilities, and increased adaptability in high-stress scenarios. These results suggest that resilience training not only bolsters mental toughness but also positively impacts cognitive skills critical to military operations. In addition to quantitative assessments, qualitative data is collected through interviews and surveys to gain insights into the subjective experiences of military athletes. Preliminary analysis of these narratives reveals that participants in the resilience training program report higher levels of self-confidence, emotional regulation, and an improved ability to manage stress. These psychological attributes contribute to their enhanced cognitive performance and overall readiness. Moreover, this study explores the potential long-term benefits of resilience training. By tracking participants over an extended period, we aim to assess the durability of cognitive improvements and their effects on overall mission success. Early results suggest that resilience training may serve as a protective factor against the detrimental effects of prolonged exposure to stressors, potentially reducing the risk of burnout and psychological trauma among military athletes. This research has significant implications for military organizations seeking to optimize the performance and well-being of their personnel. The findings suggest that integrating resilience training into the training regimen of military athletes can lead to a more resilient and cognitively capable force. This, in turn, may enhance mission success, reduce the risk of injuries, and improve the overall effectiveness of military operations. In conclusion, this study provides compelling evidence that resilience training positively impacts the cognitive performance of military athletes. The preliminary results indicate improvements in attention, decision-making, and adaptability, as well as increased psychological resilience. 
As the study progresses and incorporates long-term follow-ups, it is expected to provide valuable insights into the enduring effects of resilience training on the cognitive readiness of military athletes, contributing to the ongoing efforts to optimize military personnel's physical and mental capabilities in the face of ever-evolving challenges.Keywords: military athletes, cognitive performance, resilience training, cognitive enhancement program
Procedia PDF Downloads 80
30 Effect of Selenium Source on Meat Quality of Bonsmara Bull Calves
Authors: J. van Soest, B. Bruneel, J. Smit, N. Williams, P. Swiegers
Abstract:
Selenium (Se) is an essential trace mineral involved in reducing oxidative stress, enhancing immune status, improving reproduction, and regulating growth. During the finishing period, selenium supplementation can be applied to improve meat quality. Dietary selenium can be provided in inorganic or organic forms. Specifically, L-selenomethionine (organic selenium) allows for selenium storage in animal protein, which supports the animal during periods of high oxidative stress. The objective of this study was to investigate the effects of synthetically produced, single amino acid L-selenomethionine (Excential Selenium 4000, Orffa Additives BV) on production parameters, health status, and meat quality of Bonsmara bull calves. 24 calves, 7 months of age, completed a 60-day initial growing period at a commercial feedlot, after which they were transported to the research station Rumen-8 (Bethlehem, South Africa). After a ten-day adaptation period, the bulls were allocated to a control (n=12) or treatment (n=12) group. Each group was divided over 3 pens based on weight. Both groups received a Total Mixed Ration supplemented with 5.25 mg Se/head per day. The control group was supplemented with sodium selenite as Se source, whilst the treatment group was supplemented with L-selenomethionine (Excential Selenium 4000, Orffa Additives BV). Animals were limited to 10 kg feed intake per head per day to ensure similar Se intake. The treatment period lasted 1.5 months. A beta-adrenergic agonist was included in the feed for the last 30 days. During the treatment period, average daily gain, average daily feed intake, and feed conversion ratio were recorded. Blood parameters were measured at day 1, day 25, and before slaughter (day 47). After slaughter, carcass weight, dressing percentage, grading, and meat quality (pH, tenderness, colour, odour, purge, proximate analyses, acid detergent fibre, and neutral detergent fibre) were determined. No differences between groups were found in performance. A higher number of animals with cortisol levels below the detection limit (27.6 nmol/l) was recorded for the treatment group. Other blood parameters showed no differences. No differences were found regarding carcass weight and dressing percentage. Important parameters of meat quality were significantly improved in the treatment group: instrumental tenderness at 14 days of ageing was 2.8 and 3.4 for treatment and control respectively (P=0.010), and a 0.5% decrease in purge (of fresh samples) was shown, 1.5% and 2.0% for the treatment group and control respectively (P=0.029). Besides, pH was shown to be numerically reduced in the treatment group. In summary, supplementation with L-selenomethionine as selenium source improved meat quality compared to sodium selenite. Lower instrumental tenderness (Warner-Bratzler Shear Force, WBSF) was recorded for the treatment group. This indicates less tough meat and higher consumer satisfaction. Regarding purge, the control was just below 2.0%, an important threshold for consumer acceptance. The treatment group scored 0.5% lower for purge than the control, indicating higher consumer satisfaction. The lower pH in the treatment group could be an indication of higher glycogen reserves in muscle, which could contribute to a reduced risk of Dark, Firm, Dry (DFD) carcasses. More animals showed cortisol levels below the detection limit in the treatment group, indicating lower levels of stress when animals receive L-selenomethionine.Keywords: calves, meat quality, nutrition, selenium
Procedia PDF Downloads 181
29 Criminal Attitude vs Transparency in the Arab World
Authors: Keroles Akram Saed Ghatas
Abstract:
The political violence that characterized 1992 continued into 1993, creating a major security crisis for President Hosni Mubarak's government as the death toll and human rights abuses soared. Increasingly sensitive to criticism of its human rights record, the government established human rights departments in key ministries, beginning with the Foreign Office in February. Similar offices have been set up in the Justice and Agriculture Ministries, and plans to set up an office in the Home Office have been announced. It turned out that the main task of the law unit was to rebut the conclusions of international human rights organizations. President Mubarak was elected in a national referendum on October 4 for a third six-year term after being nominated on July 21 by the People's Assembly, an elected parliament overwhelmingly dominated by the ruling National Democratic Party; Mr. Mubarak ran unopposed. The Interior Ministry announced that nearly 16 million people cast their votes (84% of eligible voters), of which 96.28% voted for the president's re-election. In 1993, armed Islamic extremists escalated their attacks on Christian citizens, government officials, police officers and senior security officials, resulting in casualties among the intended victims and bystanders. Sporadic attacks on buses, boats and tourist attractions also occurred throughout the year. From March 1992 to October 28, 1993, a total of 222 people lost their lives in the violence: 36 Coptic Christians and 38 other citizens, one of them a foreigner; sixty-six members of the security forces; and seventy-six known or suspected activists, killed while resisting arrest. The latter were killed in raids and firefights with security forces and at the sites of planned attacks. On March 9-10, a series of raids in Cairo, Giza, Qalyubiya province north of the capital and Aswan killed fifteen suspected militants and five members of the security forces. One of the raids in Giza, part of Greater Cairo, killed the wife and son of Khalifa Mahmoud Ramadan, a suspected militant who was himself killed. The government-run Middle East News Agency reported on March 10 that the raids were part of a "broad confrontational plan aimed at terrorist elements". The state of emergency declared in October 1981 after the assassination of President Anwar el-Sadat was still in force in Egypt. The law, previously in effect continuously from June 1967 to May 1980, continued to grant the executive branch broad legal powers that effectively overrode the human rights guarantees of the Egyptian constitution. These provisions included wide discretionary powers in arresting and detaining individuals, as well as the ability to try civilians in military courts. The Cairo-based Independent Organization for Human Rights said, in a document sent to the United Nations Human Rights Committee in July 1993, that the continued imposition of the state of emergency had resulted in "another constitution for the country" and had "led to widespread misconduct by the security apparatus".Keywords: constitution, human rights, legal power, president, anwar, el-sadat, assassination, state of emergency, middle east, news, agency, confrontational, arresting, fugitive, leaders, terrorist, elements, armed islamic extremists.
Procedia PDF Downloads 43
28 Black-Box-Optimization Approach for High Precision Multi-Axes Forward-Feed Design
Authors: Sebastian Kehne, Alexander Epple, Werner Herfs
Abstract:
A new method for the optimal selection of components for multi-axes forward-feed drive systems is proposed, in which the choice of motors, gear boxes and ball screw drives is optimized. Essential here is the synchronization of the electrical and mechanical frequency behavior of all axes, because even advanced controls (like H∞-controls) can only control a small part of the mechanical modes – namely only those of observable and controllable states whose values can be derived from the positions of external linear length measurement systems and/or rotary encoders on the motor or gear box shafts. Further problems are the unknown process forces, such as cutting forces in machine tools during normal operation, which make estimation and control via an observer even more difficult. To start with, the open source Modelica Feed Drive Library, which was developed at the Laboratory for Machine Tools and Production Engineering (WZL), is extended from a one-axis design to a multi-axes design. It is capable of simulating the mechanical, electrical and thermal behavior of permanent magnet synchronous machines with inverters, different gear boxes and ball screw drives in a mechanical system. To keep the calculation time down, analytical equations are used for the field- and torque-producing equivalent circuit, heat dissipation and mechanical torque at the shaft. As a first step, a small machine tool with a working area of 635 x 315 x 420 mm is taken apart, and the mechanical transfer behavior is measured with an impulse hammer and acceleration sensors. With the frequency transfer functions, a mechanical finite element model is built up, which is reduced via substructure coupling to a mass-damper system that models the most important modes of the axes. The model is implemented with the Modelica Feed Drive Library and validated by further relative measurements between machine table and spindle holder with a piezo actuator and acceleration sensors. In a next step, the choice of possible components from motor catalogues is limited by derived analytical formulas, which are based on well-known metrics for the effective power and torque of the components. The simulation in Modelica is run with different permanent magnet synchronous motors, gear boxes and ball screw drives from different suppliers. To speed up the optimization, different black-box optimization methods (surrogate-based, gradient-based and evolutionary) are tested on the case. The objective chosen is to minimize the integral of the deviations when a step is applied to the position controls of the different axes; small values are good measures for highly dynamic axes. In each iteration (evaluation of one set of components) the control variables are adjusted automatically to keep the overshoot below 1%. It is found that the order of the components in the optimization problem has a deep impact on the speed of the black-box optimization. An approach for efficient black-box optimization of multi-axes designs is presented in the last part. The authors would like to thank the German Research Foundation DFG for financial support of the project “Optimierung des mechatronischen Entwurfs von mehrachsigen Antriebssystemen (HE 5386/14-1 | 6954/4-1)” (English: Optimization of the Mechatronic Design of Multi-Axes Drive Systems).Keywords: ball screw drive design, discrete optimization, forward feed drives, gear box design, linear drives, machine tools, motor design, multi-axes design
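A minimal sketch of the discrete black-box selection problem is given below; the component catalogue, the second-order axis model and the objective (integral of the absolute step-response deviation with an overshoot penalty) are illustrative assumptions, not the Modelica Feed Drive Library model.

```python
# Hedged sketch: discrete selection of one catalogue entry per axis via black-box
# optimization. Catalogue values and the mass-damper-spring axis model are assumed.
import numpy as np
from scipy import signal
from scipy.optimize import differential_evolution

# Hypothetical component sets: equivalent inertia J, damping d, stiffness k per axis
catalogue = [
    {"J": 2.0e-4, "d": 0.8, "k": 1.2e4},
    {"J": 3.5e-4, "d": 1.1, "k": 2.0e4},
    {"J": 5.0e-4, "d": 1.6, "k": 3.5e4},
    {"J": 8.0e-4, "d": 2.2, "k": 5.0e4},
]

def axis_cost(entry):
    """Integral of the absolute deviation of a unit position step, plus overshoot penalty."""
    sys = signal.TransferFunction([entry["k"]], [entry["J"], entry["d"], entry["k"]])
    t, y = signal.step(sys, T=np.linspace(0.0, 1.0, 2000))
    iae = np.trapz(np.abs(1.0 - y), t)
    overshoot = max(0.0, float(y.max()) - 1.0)
    return iae + 1e3 * max(0.0, overshoot - 0.01)      # enforce overshoot < 1 %

def objective(x):
    # x holds one (continuous) catalogue index per axis; round to the nearest entry
    return sum(axis_cost(catalogue[int(round(xi))]) for xi in x)

n_axes = 3
result = differential_evolution(objective, bounds=[(0, len(catalogue) - 1)] * n_axes, seed=1)
print("selected entries per axis:", [int(round(xi)) for xi in result.x],
      "objective:", round(result.fun, 4))
```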
Procedia PDF Downloads 286
27 A Compact Standing-Wave Thermoacoustic Refrigerator Driven by a Rotary Drive Mechanism
Authors: Kareem Abdelwahed, Ahmed Salama, Ahmed Rabie, Ahmed Hamdy, Waleed Abdelfattah, Ahmed Abd El-Rahman
Abstract:
Conventional vapor-compression refrigeration systems rely on typical refrigerants, such as CFCs, HCFCs and ammonia. Despite their suitable thermodynamic properties and their stability in the atmosphere, their corresponding global warming potential and ozone depletion potential raise concerns about their usage. Thus, the need for new refrigeration systems, which are environment-friendly, inexpensive and simple in construction, has strongly motivated the development of thermoacoustic energy conversion systems. A thermoacoustic refrigerator (TAR) is a device mainly consisting of a resonator, a stack and two heat exchangers. Typically, the resonator is a long circular tube, made of copper or steel and filled with helium as the working gas, while the stack consists of short, relatively low-thermal-conductivity ceramic parallel plates aligned with the direction of the prevailing resonant wave. Typically, the resonator of a standing-wave refrigerator has one end closed and is bounded by the acoustic driver at the other end, enabling the propagation of a half-wavelength acoustic excitation. The hot and cold heat exchangers are made of copper to allow for efficient heat transfer between the working gas and the external heat source and sink, respectively. TARs are interesting because they have no moving parts, unlike conventional refrigerators, and have almost no environmental impact as they rely on the conversion between acoustic and heat energy. Their fabrication process is rather simple and their sizes span a wide variety of length scales. The viscous and thermal interactions between the stack plates, the heat exchangers’ plates and the working gas significantly affect the flow field within the plates’ channels and the energy flux density at the plates’ surfaces, respectively. Here, the design, manufacture and testing of a compact refrigeration system based on thermoacoustic energy-conversion technology are reported. A 1-D linear acoustic model is carefully and specifically developed, followed by the hardware build and testing procedures. The system consists of two harmonically-oscillating pistons driven by a simple 1-HP rotary drive mechanism operating at a frequency of 42 Hz – thereby replacing typically expensive linear motors and loudspeakers – and a thermoacoustic stack within which the energy conversion of sound into heat takes place. Air at ambient conditions is used as the working gas, while the amplitude of the driver's displacement reaches 19 mm. The 30-cm-long stack is a simple porous ceramic material having 100 square channels per square inch. During operation, both the oscillating gas pressure and the solid-stack temperature are recorded for further analysis. Measurements show a maximum temperature difference of about 27 degrees between the stack hot and cold ends, with a Carnot coefficient of performance of 11 and an estimated cooling capacity of five watts when operating at ambient conditions. A dynamic pressure of 7 kPa amplitude is recorded, yielding a drive ratio of approximately 7%, found to be in good agreement with the theoretical prediction. The system behavior is clearly non-linear and significant non-linear loss mechanisms are evident. This work helps in understanding the operating principles of thermoacoustic refrigerators and presents a keystone towards developing commercial thermoacoustic refrigerator units.Keywords: refrigeration system, rotary drive mechanism, standing-wave, thermoacoustic refrigerator
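Two of the reported figures can be checked with simple arithmetic, as sketched below; the ambient mean pressure and the absolute hot-end temperature are assumed values.

```python
# Back-of-envelope check of the reported drive ratio and Carnot COP; the ambient
# pressure and absolute temperatures below are assumptions for illustration.
p_amplitude = 7.0e3          # measured dynamic pressure amplitude [Pa]
p_mean = 101.325e3           # ambient mean pressure [Pa] (assumed)
drive_ratio = p_amplitude / p_mean
print(f"drive ratio = {drive_ratio:.1%}")          # ~6.9 %, close to the reported 7 %

t_hot = 303.0                # assumed hot-end temperature [K]
t_cold = t_hot - 27.0        # measured temperature difference of ~27 K
cop_carnot = t_cold / (t_hot - t_cold)
print(f"Carnot COP = {cop_carnot:.1f}")            # ~10, same order as the reported 11
```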
Procedia PDF Downloads 368
26 Elevated Systemic Oxidative-Nitrosative Stress and Cerebrovascular Function in Professional Rugby Union Players: The Link to Impaired Cognition
Authors: Tom S. Owens, Tom A. Calverley, Benjamin S. Stacey, Christopher J. Marley, George Rose, Lewis Fall, Gareth L. Jones, Priscilla Williams, John P. R. Williams, Martin Steggall, Damian M. Bailey
Abstract:
Introduction and aims: Sports-related concussion (SRC) represents a significant and growing public health concern in rugby union, yet remains one of the least understood injuries facing the health community today. Alongside increasing SRC incidence rates, there is concern that prior recurrent concussion may contribute to long-term neurologic sequelae in later life. This may be due to an accelerated decline in cerebral perfusion, a major risk factor for neurocognitive decline and neurodegeneration, though the underlying mechanisms remain to be established. The present study hypothesised that recurrent concussion in current professional rugby union players would result in elevated systemic oxidative-nitrosative stress, reflected by a free radical-mediated reduction in nitric oxide (NO) bioavailability and impaired cerebrovascular and cognitive function. Methodology: A longitudinal study design was adopted across the 2017-2018 rugby union season. Ethical approval was obtained from the University of South Wales Ethics Committee. Data collection is ongoing, and therefore the current report documents results from the pre-season and first half of the in-season data collection. Participants were initially divided into two subgroups: 23 professional rugby union players (aged 26 ± 5 years) and 22 non-concussed controls (27 ± 8 years). Pre-season measurements were performed for cerebrovascular function (Doppler ultrasound of middle cerebral artery velocity (MCAv) in response to hypocapnia/normocapnia/hypercapnia), cephalic venous concentrations of the ascorbate radical (A•-, electron paramagnetic resonance spectroscopy), NO (ozone-based chemiluminescence) and cognition (neuropsychometric tests). Notational analysis was performed to assess contact in the rugby group throughout each competitive game. Results: 1001 tackles and 62 injuries, including three concussions, were observed across the first half of the season. However, no associations were apparent between the number of tackles and any injury type (P > 0.05). The rugby group expressed greater oxidative stress, as indicated by increased A•- (P < 0.05 vs. control) and a subsequent decrease in NO bioavailability (P < 0.05 vs. control). The rugby group performed worse in the Rey Auditory Verbal Learning Test B (RAVLT-B; learning and memory) and the Grooved Pegboard test using both the dominant and non-dominant hands (visuomotor coordination, P < 0.05 vs. control). There were no between-group differences in cerebral perfusion at baseline (MCAv: 54 ± 13 vs. 59 ± 12, P > 0.05). Likewise, no between-group differences in CVRCO2Hypo (2.58 ± 1.01 vs. 2.58 ± 0.75, P > 0.05) or CVRCO2Hyper (2.69 ± 1.07 vs. 3.35 ± 1.28, P > 0.05) were observed. Conclusion: The present study identified that the rugby union players are characterized by impaired cognitive function subsequent to elevated systemic oxidative-nitrosative stress. However, this appears to be independent of any functional impairment in cerebrovascular function. Given the potential long-term trajectory towards accelerated cognitive decline in populations exposed to SRC, prophylaxis to increase NO bioavailability warrants consideration.Keywords: cognition, concussion, mild traumatic brain injury, rugby
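For reference, cerebrovascular reactivity (CVR) to CO2 is conventionally expressed as the percentage change in MCAv per mmHg change in end-tidal CO2; the sketch below illustrates that calculation with hypothetical numbers, not the study's raw data.

```python
# Hedged sketch of a typical CVR calculation: percentage change in MCAv divided by
# the change in end-tidal CO2 (PETCO2). All values below are illustrative.
def cvr(mcav_baseline, mcav_challenge, petco2_baseline, petco2_challenge):
    delta_mcav_pct = 100.0 * (mcav_challenge - mcav_baseline) / mcav_baseline
    delta_petco2 = petco2_challenge - petco2_baseline      # in mmHg
    return delta_mcav_pct / delta_petco2

# Example: a hypercapnic challenge raising PETCO2 by 8 mmHg and MCAv from 54 to 68 cm/s
print(round(cvr(54.0, 68.0, 40.0, 48.0), 2), "%/mmHg")      # ~3.24 %/mmHg
```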
Procedia PDF Downloads 176
25 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation
Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong
Abstract:
Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. It is known that CT images are inherently more prone to artefacts due to the image formation process, in which a large number of independent detectors are involved and assumed to yield consistent measurements. There are a number of different artefact types, including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. Thus, it is desirable to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that can provide better interpretation of the anatomical and pathological characteristics. However, this is considered a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on a deep neural network framework in which denoising auto-encoders are stacked, building multiple layers. The denoising auto-encoder is a variant of the classical auto-encoder that takes input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction whose size is the same as that of the input data. The reconstruction error can be measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme is applied, using residual-driven dropout determined based on the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with the back-propagation algorithm. In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical Total Variation problem, which can be efficiently optimized by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders together with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of the PSNR, and the qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to image interpretation tasks in a variety of medical imaging applications. This work was supported by the MSIP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation
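A minimal sketch of a single denoising auto-encoder layer of the kind stacked in the proposed network is shown below, written in PyTorch; the layer sizes, noise level and synthetic training data are illustrative assumptions, and the residual-driven dropout and Total Variation decomposition steps are omitted.

```python
# Minimal sketch of one denoising auto-encoder layer (the proposed network stacks
# several); sizes, noise level and the synthetic data are assumptions.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, n_in=256, n_hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_in)    # reconstruction has the input size

    def forward(self, x):
        return self.decoder(self.encoder(x))

x_clean = torch.rand(512, 256)                      # stand-in for intrinsic image patches
model, loss_fn = DenoisingAE(), nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)  # classical SGD, as in the abstract

for epoch in range(50):
    x_noisy = x_clean + 0.1 * torch.randn_like(x_clean)   # corrupt the input
    opt.zero_grad()
    loss = loss_fn(model(x_noisy), x_clean)                # squared reconstruction error
    loss.backward()                                        # back-propagation
    opt.step()
print("final reconstruction loss:", loss.item())
```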
Procedia PDF Downloads 190
24 Female Masochism, Jouissance, and (Re)workings of Trauma: An Ethnographic Study of the Bondage, Discipline, Dominance, Submission, Sadism, and Masochism Scene in Post-WWII Japan
Authors: Maari Sugawara
Abstract:
This ethnographic research interrogates female masochism within contemporary Japan, focusing on fifteen female BDSM (Bondage, Discipline, Dominance, Submission, Sadism, and Masochism) practitioners who identify as masochists, bottoms, and/or submissives. The study employs semi-structured interviews with these practitioners, representing diverse backgrounds and ages, to explore the intersection of sexuality and individual and/or collective trauma. The study focuses on a specific group of sadomasochists who, as survivors of gender and sexual violence, reenact their trauma through BDSM practices. This exploration draws on feminist performance studies, postcolonial studies, psychoanalysis, and affect analysis to highlight the complexities of female masochism. In a cultural milieu that often reduces female masochism to mere compliance with heteropatriarchy, this study argues that specific masochistic practices transcend submission, serving as vital strategies for confronting trauma and dismantling entrenched cultural narratives. Engaging with Lacan’s concept of feminine jouissance and the notion of "creative masochism" in the context of Japan's proximity to the imperial US, the study facilitates a nuanced exploration of female masochistic enjoyment. The study shows that these practices can act as both a means of survival and a mode of resilience, challenging dominant narratives that portray masochism solely as a form of subjugation, drawing on feminist performance studies, postcolonial studies, psychoanalysis, and affect analysis. It interprets masochism as a complex terrain of affective engagement, where shared suffering and consensual pain foster transformative possibilities. By analyzing BDSM as a cultural site, this research reframes masochism not only as a personal negotiation of pain but also as a broader allegory for Japan’s ongoing geopolitical self-positioning. Central to this analysis is the concept of "creative masochism," which positions masochism as both a metaphor and a practice through which Japan addresses its historical subordination to the United States. This framework allows for a deeper understanding of how participants' lived desires intersect with national narratives, illuminating the relationship between personal experiences and larger socio-political dynamics. It incorporates sadomasochistic metaphors into Japan-U.S. interactions, reflecting underlying patterns of submission, resistance, and cultural negotiation. Additionally, this research examines the effects, affects, and limitations of masochism within the post-WWII Japanese context, providing insights into how masochism can reshape one's relationship with their surroundings. This study challenges the notion that female masochism is entirely subsumed by hegemonic structures, revealing instead that subjects can assert their autonomy within their experiences of pleasure and pain. The consensual enactment of violence within these encounters emerges as a complex and ambivalent process, wherein pain transforms into a generative force for reimagining alternative forms of sociality and belonging. Additionally, the research identifies contradictions and connections between the personal and political, examining how kink practices shape participants' daily lives and identities, and vice versa, highlighting the profound impact of these practices on their sense of self and community. 
Ultimately, it reaffirms agency in the face of pervasive heteronormative power dynamics, suggesting that masochism can serve as a site of both resistance and redefinition.Keywords: female masochism, BDSM, Japan, masochism, trauma, sexual violence
Procedia PDF Downloads 20
23 Evaluation of the Incorporation of Modified Starch in Puff Pastry Dough by Mixolab Rheological Analysis
Authors: Alejandra Castillo-Arias, Carlos A. Fuenmayor, Carlos M. Zuluaga-Domínguez
Abstract:
The connection between health and nutrition has driven the food industry to explore healthier and more sustainable alternatives. Key strategies to enhance nutritional quality and extend shelf life include reducing saturated fats and incorporating natural ingredients. One area of focus is the use of modified starch in baked goods, which has attracted significant interest in food science and industry due to its functional benefits. Modified starches are commonly used for their gelling, thickening, and water-retention properties. Derived from sources like waxy corn, potatoes, tapioca, or rice, these polysaccharides improve the thermal stability and resistance of the dough. The use of modified starch enhances the texture and structure of baked goods, which is crucial for consumer acceptance. In this study, the effects of modified starch inclusion on dough used for puff pastry production were evaluated by Mixolab analysis. This technique assesses flour quality by examining its behavior under varying conditions, providing a comprehensive profile of its baking properties. The analysis included measurements of water absorption capacity, dough development time, dough stability, softening, final consistency, and starch gelatinization. Each of these parameters offers insights into how the flour will perform during baking and the quality of the final product. The performance of wheat flour with varying levels of modified starch inclusion (10%, 20%, 30%, and 40%) was evaluated through Mixolab analysis, with a control sample consisting of 100% wheat flour. Water absorption, gluten content, and retrogradation indices were analyzed to understand how modified starch affects dough properties. The results showed that the inclusion of modified starch increased the absorption index, especially at levels above 30%, indicating a dough with better handling qualities and potentially improved texture in the final baked product. However, the reduction in wheat flour resulted in a lower kneading index, affecting dough strength. Conversely, incorporating more than 20% modified starch reduced the retrogradation index, indicating improved stability and resistance to crystallization after cooling. Additionally, the modified starch improved the gluten index, contributing to better dough elasticity and stability, providing good structural support and resistance to deformation during mixing and baking. As expected, the control sample exhibited a higher amylase index, due to the presence of enzymes in wheat flour. However, this is of little concern in puff pastry dough, as amylase activity is more relevant in fermented doughs, which is not the case here. Overall, the use of modified starch in puff pastry enhanced product quality by improving texture, structure, and shelf life, particularly when used at levels between 30% and 40%. This research underscores the potential of modified starches to address health concerns associated with traditional starches and to contribute to the development of higher-quality, consumer-friendly baked products. Furthermore, the findings suggest that modified starches could play a pivotal role in future innovations within the baking industry, particularly in products aiming to balance healthfulness with sensory appeal.
By incorporating modified starch into their formulations, bakeries can meet the growing demand for healthier, more sustainable products while maintaining the indulgent qualities that consumers expect from baked goods.Keywords: baking quality, dough properties, modified starch, puff pastry
Procedia PDF Downloads 22
22 Linking the Genetic Signature of Free-Living Soil Diazotrophs with Process Rates under Land Use Conversion in the Amazon Rainforest
Authors: Rachel Danielson, Brendan Bohannan, S.M. Tsai, Kyle Meyer, Jorge L.M. Rodrigues
Abstract:
The Amazon Rainforest is a global diversity hotspot and crucial carbon sink, but approximately 20% of its total extent has been deforested, primarily for the establishment of cattle pasture. Understanding the impact of this large-scale disturbance on soil microbial community composition and activity is crucial in understanding potentially consequential shifts in nutrient or greenhouse gas cycling, as well as adding to the body of knowledge concerning how these complex communities respond to human disturbance. In this study, surface soils (0-10 cm) were collected from three forests and three 45-year-old pastures in Rondonia, Brazil (the Amazon state with the greatest rate of forest destruction) in order to determine the impact of forest conversion on microbial communities involved in nitrogen fixation. Soil chemical and physical parameters were paired with measurements of microbial activity and genetic profiles to determine how community composition and process rates relate to environmental conditions. Measuring both the natural abundance of 15N in total soil N and the incorporation of enriched 15N2 under incubation has revealed that conversion of primary forest to cattle pasture results in a significant increase in the rate of nitrogen fixation by free-living diazotrophs. Quantification of nifH gene copy numbers (a gene encoding an essential subunit of the nitrogenase enzyme) correspondingly reveals a significant increase of genes in pasture compared to forest soils. Additionally, genetic sequencing of both nifH genes and transcripts shows a significant increase in the diversity of the present and metabolically active diazotrophs within the soil community. Levels of both organic and inorganic nitrogen tend to be lower in pastures compared to forests, with ammonium rather than nitrate as the dominant inorganic form. However, no significant or consistent differences in total, extractable, permanganate-oxidizable, or loss-on-ignition carbon are present between the two land-use types. Forest conversion is associated with a 0.5-1.0 unit pH increase, but concentrations of many biologically relevant nutrients such as phosphorus do not increase consistently. Increases in free-living diazotrophic community abundance and activity appear to be related to shifts in carbon-to-nitrogen pool ratios. Furthermore, there may be an important impact of transient, low-molecular-weight plant-root-derived organic carbon on free-living diazotroph communities not captured in this study. Preliminary analysis of nitrogenase gene variant composition using NovaSeq metagenomic sequencing indicates that conversion of forest to pasture may significantly enrich vanadium-based nitrogenases. This indication is complemented by a significant decrease in available soil molybdenum. Very little is known about the ecology of diazotrophs utilizing vanadium-based nitrogenases, so further analysis may reveal important environmental conditions favoring their abundance and diversity in soil systems. Taken together, the results of this study indicate a significant change in nitrogen cycling and diazotroph community composition with the conversion of the Amazon Rainforest. This may have important implications for the sustainability of cattle pastures once established, since nitrogen is a crucial nutrient for forage grass productivity.Keywords: free-living diazotrophs, land use change, metagenomic sequencing, nitrogen fixation
Procedia PDF Downloads 194
21 Application of Large Eddy Simulation-Immersed Boundary Volume Penalization Method for Heat and Mass Transfer in Granular Layers
Authors: Artur Tyliszczak, Ewa Szymanek, Maciej Marek
Abstract:
Flow through granular materials is important to a vast array of industries, for instance in the construction industry, where granular layers are used for bulkheads and isolators, in chemical engineering and catalytic reactors, where the large surfaces of packed granular beds intensify chemical reactions, or in energy production systems, where granulates are promising materials for heat storage and heat transfer media. Despite the common usage of granulates and the extensive research performed in this field, phenomena occurring between granular solid elements or between solids and fluid are still not fully understood. In the present work we analyze the heat exchange process between the flowing medium (gas, liquid) and the solid material inside granular layers. We consider them as a composite of isolated solid elements and inter-granular spaces in which a gas or liquid can flow. The structure of the layer is controlled by the shapes of particular granular elements (e.g., spheres, cylinders, cubes, Raschig rings), their spatial distribution or effective characteristic dimension (total volume or surface area). We will analyze to what extent alteration of these parameters influences the flow characteristics (turbulence intensity, mixing efficiency, heat transfer) inside the layer and behind it. Analysis of flow inside granular layers is very complicated because the use of classical experimental techniques (LDA, PIV, fibre probes) inside the layers is practically impossible, whereas the use of probes (e.g. thermocouples, Pitot tubes) requires drilling holes into the solid material. Hence, measurements of the flow inside granular layers are usually performed using, for instance, advanced X-ray tomography. In this respect, theoretical or numerical analyses of flow inside granulates seem crucial. Application of discrete element methods in combination with classical finite volume/finite difference approaches is problematic, as the mesh generation process for complex granular material can be very arduous. A good alternative for simulation of flow in complex domains is the immersed boundary-volume penalization (IB-VP) approach, in which the computational meshes have a simple Cartesian structure and the impact of solid objects on the fluid is mimicked by source terms added to the Navier-Stokes and energy equations. The present paper focuses on the application of the IB-VP method combined with large eddy simulation (LES). The flow solver used in this work is a high-order code (SAILOR), which was used previously in various studies, including laminar/turbulent transition in free flows and also flows in wavy channels, wavy pipes and over obstacles of various shapes. In these cases the formal order of approximation turned out to be between 1 and 2, depending on the test case. The current research concentrates on analyses of the flows in dense granular layers with elements distributed in a deterministic regular manner, and on validation of the results obtained using the LES-IB method against a body-fitted approach. The comparisons are very promising and show very good agreement. It is found that the size, the number of elements and their distribution have a huge impact on the obtained results. Ordering of the granular elements (or lack of it) affects both the pressure drop and the efficiency of the heat transfer, as it significantly changes the mixing process.Keywords: granular layers, heat transfer, immersed boundary method, numerical simulations
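The volume-penalization idea can be illustrated with a one-dimensional unsteady heat equation in which a mask marks the solid grains and a source term forces the temperature toward the solid value, as in the sketch below; the grid, properties and grain layout are illustrative assumptions rather than the SAILOR configuration.

```python
# Minimal sketch of volume penalization in a 1-D unsteady heat equation: inside the
# solid mask, the source term -(chi/eta)*(T - T_solid) drives the field toward the
# solid temperature. Grid, properties and grain layout are assumed values.
import numpy as np

nx, lx = 200, 1.0
dx, dt = lx / nx, 1e-4
alpha, eta = 1e-3, 1e-4          # fluid thermal diffusivity, penalization parameter
x = np.linspace(0.0, lx, nx)

chi = np.zeros(nx)               # characteristic function of the solid grains
for centre in (0.3, 0.5, 0.7):   # three "grains" of width 0.05
    chi[np.abs(x - centre) < 0.025] = 1.0
T_solid = 1.0                    # imposed grain temperature

T = np.zeros(nx)                 # fluid initially cold
for step in range(20000):
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T += dt * (alpha * lap - chi / eta * (T - T_solid))   # diffusion + penalization
    T[0], T[-1] = 0.0, 0.0                                 # cold boundaries
print("max fluid temperature between grains:", round(float(T[chi == 0].max()), 3))
```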
Procedia PDF Downloads 136