Search results for: memory recovery
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2831

371 AI-Enabled Smart Contracts for Reliable Traceability in the Industry 4.0

Authors: Harris Niavis, Dimitra Politaki

Abstract:

The manufacturing industry has been collecting vast amounts of data for monitoring product quality, thanks to advances in the ICT sector, and dedicated IoT infrastructure is deployed to track and trace the production line. However, industries have not yet managed to unleash the full potential of these data due to defective data collection methods and untrusted data storage and sharing. Blockchain is gaining increasing ground as a key technology enabler for Industry 4.0 and the smart manufacturing domain, as it enables the secure storage and exchange of data between stakeholders. On the other hand, AI techniques are increasingly used to detect anomalies in batch and time-series data, enabling the identification of unusual behaviors. The proposed scheme is based on smart contracts, to enable automation and transparency in the data exchange, coupled with anomaly detection algorithms, to enable reliable data ingestion in the system. Before sensor measurements are fed to the blockchain component and the smart contracts, the anomaly detection mechanism combines artificial intelligence models to effectively detect unusual values, such as outliers and extreme deviations, in the incoming sensor data. Specifically, autoregressive integrated moving average (ARIMA), long short-term memory (LSTM) and dense-based autoencoders, as well as generative adversarial network (GAN) models, are used to detect both point and collective anomalies. Towards the goal of preserving the privacy of industries' information, the smart contracts employ techniques to ensure that only anonymized pointers to the actual data are stored on the ledger, while sensitive information remains off-chain. In the same spirit, blockchain technology guarantees the security of the data storage through strong cryptography, as well as the integrity of the data through the decentralization of the network and the execution of the smart contracts by the majority of the blockchain network actors.
The blockchain component of the Data Traceability Software is based on the Hyperledger Fabric framework, which lays the groundwork for the deployment of smart contracts and APIs that expose the functionality to the end-users. The results of this work demonstrate that such a system can increase the quality of the end-products and the trustworthiness of the monitoring process in the smart manufacturing domain. The proposed AI-enabled data traceability software can be employed by industries to accurately trace and verify records about quality through the entire production chain and take advantage of the multitude of monitoring records in their databases.
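A minimal sketch of the gating step described above, in which sensor readings are screened for point anomalies before they reach the blockchain component. It substitutes a simple rolling z-score test for the paper's ARIMA/LSTM/GAN models; the function name, window size, and readings are illustrative assumptions, not the authors' implementation.

```python
from statistics import mean, stdev

def flag_point_anomalies(series, window=20, threshold=3.0):
    """Flag readings that deviate more than `threshold` standard
    deviations from the mean of the trailing `window` of readings."""
    flags = []
    for i, x in enumerate(series):
        history = series[max(0, i - window):i]
        if len(history) < 3:          # not enough history to judge
            flags.append(False)
            continue
        mu, sigma = mean(history), stdev(history)
        flags.append(sigma > 0 and abs(x - mu) > threshold * sigma)
    return flags

# only readings that pass the gate would be handed to the smart contract
readings = [20.1, 20.3, 19.9, 20.2, 20.0, 20.1, 95.0, 20.2]
accepted = [x for x, bad in zip(readings, flag_point_anomalies(readings)) if not bad]
```

In the proposed scheme, collective anomalies would additionally require sequence models; this sketch only covers the point-anomaly case.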

Keywords: blockchain, data quality, industry 4.0, product quality

Procedia PDF Downloads 166
370 Inhibition Theory: The Development of Subjective Happiness and Life Satisfaction after Experiencing Severe, Traumatic Life Events (Paraplegia)

Authors: Tanja Ecken, Laura Fricke, Anika Steger, Maren M. Michaelsen, Tobias Esch

Abstract:

Studies and applied experience show that severe and traumatic accidents not only require physical rehabilitation and recovery but also necessitate psychological adaptation and reorganization in response to the changed living conditions. Neurobiological models underpinning the experience of happiness and satisfaction postulate that life shocks can enhance the experience of happiness and life satisfaction, i.e., posttraumatic growth (PTG). The present study aims to provide an in-depth understanding of the underlying psychological processes of PTG and to outline its consequences for subjective happiness and life satisfaction. To explore these questions, Esch's (2022) ABC Model was used as guidance for the development of a questionnaire assessing changes in happiness and life satisfaction and for a schematic model postulating the development of PTG in the context of paraplegia. A two-stage qualitative interview procedure explored participants' experiences of paraplegia. Specifically, narrative, semi-structured interviews (N=28) focused on the time before and after the accident, the availability of supportive resources, and potential changes in the perception of happiness and life satisfaction. Qualitative analysis (Grounded Theory) indicated that an initial phase of reorganization was followed by a gradual psychological adaptation to novel, albeit reduced, opportunities in life. Participants reportedly experienced a 'compelled' slowing down and elements of mindfulness, subsequently instilling a sense of gratitude and joy in relation to life's presumed trivialities. Despite physical limitations and difficulties, participants reported an enhanced ability to relate to themselves and others and a reduction in perceived everyday nuisances. In conclusion, PTG can be experienced in response to severe, traumatic life events and has the potential to enrich the lives of affected persons in numerous, unexpected and yet challenging ways.
PTG appears to be a spectrum comprising an interplay of internal and external resources underpinned by neurobiological processes. Participants experienced PTG irrespective of age, gender, marital status, income or level of education.

Keywords: inhibition theory, posttraumatic growth, trauma, stress, life satisfaction, subjective happiness, traumatic life events, paraplegia

Procedia PDF Downloads 63
369 The Negative Implications of Childhood Obesity and Malnutrition on Cognitive Development

Authors: Stephanie Remedios, Linda Veronica Rios

Abstract:

Background. Pediatric obesity is a serious health problem linked to multiple physical diseases and ailments, including diabetes, heart disease, and joint issues. While research has shown that pediatric obesity can bring about an array of physical illnesses, less is known about how the condition affects children's cognitive development. With childhood overweight and obesity prevalence rates on the rise, it is essential to understand the scope of their cognitive consequences. The present review of the literature tested the hypothesis that poor physical health, such as childhood obesity or malnutrition, negatively impacts a child's cognitive development. Methodology. A systematic review was conducted to determine the relationship between poor physical health and lower cognitive functioning in children ages 4-16. Electronic databases were searched for studies from the past ten years. The following databases were used: Science Direct, FIU Libraries, and Google Scholar. Inclusion criteria consisted of peer-reviewed academic articles written in English from 2012 to 2022 that analyzed the relationship of childhood malnutrition and obesity to cognitive development. A total of 17,000 articles were obtained, of which 16,987 were excluded for not addressing the cognitive implications exclusively. Of the acquired articles, 13 were retained. Results. Research suggested a significant connection between diet and cognitive development. Both diet and physical activity are strongly correlated with higher cognitive functioning. Cognitive domains explored in this work included learning, memory, attention, inhibition, and impulsivity. IQ scores were also considered objective representations of overall cognitive performance. Studies showed that physical activity benefits cognitive development, primarily executive functioning and language development.
Additionally, children suffering from pediatric obesity or malnutrition were found to score 3-10 points lower on IQ tests when compared to healthy, same-aged children. Conclusion. This review provides evidence that physical activity and overall physical health, including appropriate diet and nutritional intake, have beneficial effects on cognitive outcomes. The primary conclusion from this research is that childhood obesity and malnutrition have detrimental effects on cognitive development in children, primarily on learning outcomes. Assuming childhood obesity and malnutrition rates continue their current trend, it is essential to understand the complete physical and psychological implications of obesity and malnutrition in pediatric populations. Given the limitations encountered in our research, further studies are needed to evaluate the areas of cognition affected during childhood.

Keywords: childhood malnutrition, childhood obesity, cognitive development, cognitive functioning

Procedia PDF Downloads 104
368 Comparing Deep Architectures for Selecting Optimal Machine Translation

Authors: Despoina Mouratidis, Katia Lida Kermanidis

Abstract:

Machine translation (MT) is a very important task in Natural Language Processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system and also helps improve its performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular approaches in automatic MT evaluation are score-based, such as the BLEU score, while others are based on lexical or syntactic similarity between the MT outputs and the reference, involving higher-level information such as part-of-speech (POS) tagging. This paper presents a language-independent machine learning framework for classifying pairwise translations. The framework uses vector representations of two machine-produced translations, one from a statistical machine translation (SMT) model and one from a neural machine translation (NMT) model. The vector representations consist of automatically extracted word embeddings and string-like language-independent features. These vector representations are used as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a "ground-truth" annotation are used. The parallel corpora used are English-Greek (EN-GR) and English-Italian (EN-IT), in the educational domain and of informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures were tested in this schema: (i) fully-connected dense, (ii) Convolutional Neural Network (CNN), and (iii) Long Short-Term Memory (LSTM). Experiments show that all tested architectures achieved better results than some well-known baseline approaches, such as Random Forest (RF) and Support Vector Machine (SVM).
The best accuracy results are obtained when LSTM layers are used in the schema. In terms of balance between the classes, better results are obtained when dense layers are used, because the model correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis was carried out. In this context, problems were identified with some figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms. It is quite interesting to find out why all the classifiers led to worse accuracy results in Italian than in Greek, given that the linguistic features employed are language-independent.
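The pairwise setup above can be sketched with much simpler stand-ins for the paper's features: bag-of-words vectors and cosine similarities between each MT output and the reference, and between the two outputs. The helper names and toy sentences below are hypothetical; the actual framework feeds word embeddings and string-based features to a neural network.

```python
import math

def bow_vector(sentence, vocab):
    # simple bag-of-words count vector over a fixed vocabulary
    tokens = sentence.lower().split()
    return [tokens.count(w) for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def pairwise_features(smt_out, nmt_out, reference):
    vocab = sorted(set((smt_out + " " + nmt_out + " " + reference).lower().split()))
    s, n, r = (bow_vector(x, vocab) for x in (smt_out, nmt_out, reference))
    # similarity of each MT output to the reference, and to each other
    return {"smt_ref": cosine(s, r), "nmt_ref": cosine(n, r), "smt_nmt": cosine(s, n)}

feats = pairwise_features("the cat sat on mat", "the cat sat on the mat",
                          "the cat sat on the mat")
best = "NMT" if feats["nmt_ref"] >= feats["smt_ref"] else "SMT"
```

In the paper, such feature vectors are the input to the dense/CNN/LSTM classifiers rather than being compared directly.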

Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification

Procedia PDF Downloads 113
367 Transgenerational Impact of Intrauterine Hyperglycaemia to F2 Offspring without Pre-Diabetic Exposure on F1 Male Offspring

Authors: Jun Ren, Zhen-Hua Ming, He-Feng Huang, Jian-Zhong Sheng

Abstract:

An adverse intrauterine stimulus during critical or sensitive periods in early life may lead to health risks not only in the later life span but also in further generations. Intrauterine hyperglycaemia, a major feature of gestational diabetes mellitus (GDM), is a typical adverse environment for the development of both the F1 fetus and F1 gamete cells. However, there is scarce information on the phenotypic differences in metabolic memory between somatic cells and germ cells exposed to intrauterine hyperglycaemia. The direct transmission effect of intrauterine hyperglycaemia per se has not been assessed either. In this study, we built a GDM mouse model and selected male GDM offspring without a pre-diabetic phenotype as our founders, to exclude postnatal diabetic influence on the gametes and thereby investigate the direct transmission effect of intrauterine hyperglycaemia exposure on F2 offspring; we further compared the metabolic differences between affected F1-GDM male offspring and F2 offspring. A GDM mouse model of intrauterine hyperglycemia was established by intraperitoneal injection of streptozotocin after pregnancy. Pups of GDM mothers were fostered by normal control mothers. All the mice were fed standard food. Male GDM offspring without a metabolic dysfunction phenotype were crossed with normal female mice to obtain F2 offspring. Body weight, glucose tolerance test, insulin tolerance test and homeostasis model assessment of insulin resistance (HOMA-IR) index were measured in both generations at 8 weeks of age. Some of the F1-GDM male mice showed impaired glucose tolerance (p < 0.001), but none showed impaired insulin sensitivity. Body weight of F1-GDM mice showed no significant difference from control mice. Some of the F2-GDM offspring exhibited impaired glucose tolerance (p < 0.001), and all the F2-GDM offspring exhibited a higher HOMA-IR index (p < 0.01 for normal glucose tolerance individuals vs. control, p < 0.05 for glucose intolerance individuals vs. control).
All the F2-GDM offspring exhibited a higher ITT curve than control (p < 0.001 for normal glucose tolerance individuals, p < 0.05 for glucose intolerance individuals, vs. control). F2-GDM offspring had higher body weight than control mice (p < 0.001 for normal glucose tolerance individuals, p < 0.001 for glucose intolerance individuals, vs. control). While glucose intolerance is the only phenotype that F1-GDM male mice may exhibit, the F2 male generation of healthy F1-GDM fathers showed insulin resistance, increased body weight and/or impaired glucose tolerance. These findings imply that intrauterine hyperglycaemia exposure affects germ cells and somatic cells differently; thus, F1 and F2 offspring demonstrated distinct metabolic dysfunction phenotypes. Moreover, intrauterine hyperglycaemia exposure per se has a strong influence on the F2 generation, independent of postnatal metabolic dysfunction exposure.
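The HOMA-IR index used above follows the standard homeostasis-model formula, HOMA-IR = fasting glucose (mmol/L) x fasting insulin (uU/mL) / 22.5. The sketch below only illustrates the arithmetic; the fasting values are hypothetical, not the study's measurements.

```python
def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uu_ml):
    """Homeostasis model assessment of insulin resistance:
    HOMA-IR = glucose (mmol/L) * insulin (uU/mL) / 22.5."""
    return fasting_glucose_mmol_l * fasting_insulin_uu_ml / 22.5

# hypothetical fasting values for a control and an insulin-resistant animal
control = homa_ir(5.0, 10.0)
exposed = homa_ir(7.5, 18.0)   # higher index indicates insulin resistance
```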

Keywords: inheritance, insulin resistance, intrauterine hyperglycaemia, offspring

Procedia PDF Downloads 228
366 Cleaning of Polycyclic Aromatic Hydrocarbons (PAH) Obtained from Ferroalloys Plant

Authors: Stefan Andersson, Balram Panjwani, Bernd Wittgens, Jan Erik Olsen

Abstract:

Polycyclic aromatic hydrocarbons (PAH) are organic compounds consisting only of hydrogen and carbon arranged in aromatic rings. PAH are neutral, non-polar molecules produced by the incomplete combustion of organic matter. These compounds are carcinogenic and interact with biological nucleophiles to inhibit the normal metabolic functions of cells. In Norway, the most important sources of PAH pollution are considered to be aluminum plants, the metallurgical industry, offshore oil activity, transport, and wood burning. Stricter governmental regulations regarding emissions to the outer and internal environment, combined with increased awareness of the potential health effects, have motivated Norwegian metal industries to increase their efforts to reduce emissions considerably. One of the objectives of the ongoing industry- and Norwegian Research Council-supported "SCORE" project is to reduce potential PAH emissions from the off-gas stream of a ferroalloy furnace through controlled combustion in a dedicated combustion chamber. The sizing and configuration of the combustion chamber depend on the combined properties of the bulk gas stream and the properties of the PAH itself. In order to achieve efficient and complete combustion, the residence time and minimum temperature need to be optimized. For this design approach, reliable kinetic data for the individual PAH species and/or groups thereof are necessary. However, kinetic data on the combustion of PAH are difficult to obtain, and there is only a limited number of studies. The paper presents an evaluation of kinetic data for some of the PAH obtained from the literature. In the present study, the oxidation is modelled both for pure PAH and for PAH mixed with process gas. Using a perfectly stirred reactor modelling approach, the oxidation is modelled with advanced reaction kinetics to study the influence of residence time and temperature on the conversion of PAH to CO2 and water.
A Chemical Reactor Network (CRN) approach is developed to understand the oxidation of PAH inside the combustion chamber. Chemical reactor network modeling has been found to be a valuable tool in evaluating the oxidation behavior of PAH under various conditions.
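For a first-order reaction, the perfectly stirred reactor approach mentioned above reduces to a closed-form conversion, X = k*tau / (1 + k*tau), with the rate constant k given by the Arrhenius law. The sketch below illustrates how residence time and temperature drive conversion; the pre-exponential factor and activation energy are hypothetical placeholders, not fitted PAH kinetics.

```python
import math

def arrhenius_k(a, ea, t, r=8.314):
    """First-order rate constant k = A * exp(-Ea / (R*T)), in 1/s."""
    return a * math.exp(-ea / (r * t))

def psr_conversion(k, tau):
    """Steady-state conversion of a first-order reactant in a perfectly
    stirred reactor: X = k*tau / (1 + k*tau)."""
    return k * tau / (1.0 + k * tau)

# hypothetical kinetics; vary temperature at a fixed residence time of 0.5 s
x_low = psr_conversion(arrhenius_k(a=1.0e10, ea=2.0e5, t=1100.0), tau=0.5)
x_high = psr_conversion(arrhenius_k(a=1.0e10, ea=2.0e5, t=1300.0), tau=0.5)
```

Real PAH oxidation involves multi-step mechanisms, so a detailed CRN replaces this single global reaction in practice.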

Keywords: PAH, PSR, energy recovery, ferro alloy furnace

Procedia PDF Downloads 257
365 Model Organic Rankine Cycle Power Plant for Waste Heat Recovery in Olkaria-I Geothermal Power Plant

Authors: Haile Araya Nigusse, Hiram M. Ndiritu, Robert Kiplimo

Abstract:

Energy consumption is an indispensable component of the continued development of the human population. The global energy demand increases with development and population growth. The increase in energy demand, the high cost of fossil fuels and the link between energy utilization and environmental impacts have resulted in the need for a sustainable approach to the utilization of low-grade energy resources. The Organic Rankine Cycle (ORC) power plant is an advantageous technology that can be applied to generate power from the low-temperature brine of geothermal reservoirs. The power plant utilizes a low-boiling organic working fluid such as a refrigerant or a hydrocarbon. Research has indicated that the performance of an ORC power plant is highly dependent upon factors such as proper organic working fluid selection and the types of heat exchangers (condenser and evaporator) and turbine used. Despite a high pressure drop, shell-and-tube heat exchangers have satisfactory performance for ORC power plants. This study involved the design, fabrication and performance assessment of the components of a model Organic Rankine Cycle power plant intended to utilize low-grade geothermal brine. Two shell-and-tube heat exchangers (evaporator and condenser) and a single-stage impulse turbine were designed and fabricated, and the performance of each component was assessed. Pentane was used as the working fluid, with hot water simulating the geothermal brine. The results of the experiment indicated that an increase in the mass flow rate of hot water by 0.08 kg/s raised the overall heat transfer coefficient of the evaporator by 17.33% and increased the heat transferred by 6.74%. In the condenser, increasing the cooling water flow rate from 0.15 kg/s to 0.35 kg/s increased the overall heat transfer coefficient by 1.21% and the heat transferred by 4.26%.
The shaft speed varied from 1585 to 4590 rpm as the inlet pressure was varied from 0.5 to 5.0 bar, and the power generated varied from 4.34 to 14.46 W. The results of the experiments indicated that the performance of each component of the model Organic Rankine Cycle power plant operating on low-temperature heat resources was satisfactory.
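Overall heat transfer coefficients like those reported above are commonly backed out from the measured heat duty and the log-mean temperature difference, U = Q / (A * LMTD) with Q = m_dot * cp * dT. The sketch below shows that calculation; the flow rate, terminal temperature differences, and area are hypothetical values, not the experiment's data.

```python
import math

def lmtd(dt_in, dt_out):
    """Log-mean temperature difference from the two terminal differences."""
    if math.isclose(dt_in, dt_out):
        return dt_in
    return (dt_in - dt_out) / math.log(dt_in / dt_out)

def heat_duty_and_u(m_dot, cp, t_in, t_out, area, dt_lm):
    """Heat duty Q = m_dot * cp * dT [W]; overall coefficient U = Q / (A * LMTD)."""
    q = m_dot * cp * abs(t_in - t_out)
    return q, q / (area * dt_lm)

# hypothetical case: 0.3 kg/s of hot water cooled from 95 to 80 degC,
# terminal temperature differences of 40 and 25 K, 1.2 m^2 transfer area
q, u = heat_duty_and_u(0.3, 4186.0, 95.0, 80.0, area=1.2, dt_lm=lmtd(40.0, 25.0))
```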

Keywords: brine, heat exchanger, ORC, turbine

Procedia PDF Downloads 633
364 Identifying Critical Links of a Transport Network When Affected by a Climatological Hazard

Authors: Beatriz Martinez-Pastor, Maria Nogal, Alan O'Connor

Abstract:

In recent years, the number of extreme weather events has increased. A variety of extreme weather events, including river floods, rain-induced landslides, droughts, winter storms, wildfires, and hurricanes, have threatened and damaged many different regions worldwide. These events have a devastating impact on critical infrastructure systems, resulting in high social, economic and environmental costs. Their impact on transport systems is particularly severe, since transport networks are fully exposed to every kind of climatological perturbation and their performance is closely tied to these events. When a traffic network is affected by a climatological hazard, the quality of its service is threatened and the level of the traffic conditions usually decreases. With the aim of understanding this process, the concept of resilience has become increasingly popular in the area of transport. Transport resilience analyses the behavior of a traffic network when a perturbation takes place. This holistic concept studies the complete process, from the beginning of the perturbation until the total recovery of the system once the perturbation has finished. Many concepts are included in the definition of resilience, such as vulnerability, redundancy, adaptability, and safety. Once the resilience of a transport network can be evaluated (here, the methodology used is a dynamic equilibrium-restricted assignment model that allows the quantification of the concept), the next step is its improvement. Through such improvement, it will be possible to create transport networks that are able to withstand climatological hazards and perform better in their presence. Analyzing the impact of a perturbation on a traffic network, it is observed that the responses of the different links that make up the network can differ completely from one another.
Consequently, many questions arise: what makes a link more critical in the face of an extreme weather event, and how can these critical links be identified? With this aim, and knowing that the owners or managers of transport systems usually have limited resources, identifying the critical links of a transport network before extreme weather events becomes a crucial objective. Deploying the available resources in the areas that generate the greatest improvement in resilience will contribute to the overall development of the network. Therefore, this paper analyzes what characteristics make a link critical when an extreme weather event damages a transport network, and finally identifies those links.
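A brute-force way to frame the critical-link question above is to remove each link in turn and measure the degradation of network performance, here approximated by shortest travel time rather than the paper's dynamic equilibrium-restricted assignment model. The toy network and function names are illustrative assumptions.

```python
import heapq

def shortest_time(graph, src, dst, skip=None):
    """Dijkstra shortest travel time; `skip` is an (u, v) link treated as failed."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            if (u, v) == skip:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def rank_critical_links(graph, src, dst):
    base = shortest_time(graph, src, dst)
    links = [(u, v) for u in graph for v, _ in graph[u]]
    # criticality = extra travel time caused by losing the link
    return sorted(((shortest_time(graph, src, dst, skip=l) - base, l) for l in links),
                  reverse=True)

# toy network: fast route A-B-D (2.0) with a slower detour A-C-D (5.0)
graph = {"A": [("B", 1.0), ("C", 4.0)], "B": [("D", 1.0)], "C": [("D", 1.0)], "D": []}
ranking = rank_critical_links(graph, "A", "D")
```

Links on the fast route rank highest, matching the intuition that criticality depends on the availability of alternatives.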

Keywords: critical links, extreme weather events, hazard, resilience, transport network

Procedia PDF Downloads 270
363 Selection and Identification of Some Spontaneous Plant Species Having the Ability to Grow Naturally on Crude Oil Contaminated Soil for a Possible Approach to Decontaminate and Rehabilitate an Industrial Area

Authors: Salima Agoun-Bahar, Ouzna Abrous-Belbachir, Souad Amelal

Abstract:

Industrial areas generally contain heavy metals; thus, negative consequences can appear in the medium and long term for the fauna and flora, but also for the food chain, of which man constitutes the final link. The SONATRACH Company has become aware of the importance of environmental protection by setting up a rehabilitation program for polluted sites, in order to avoid major ecological disasters and to find both curative and preventive solutions. The aim of this work is to study the industrial pollution located around a crude oil storage tank in the Algiers refinery of Sidi R'cine and to select the plants which accumulate the most heavy metals for possible use in phytotechnology. Sampling of whole plants with their soil clod was carried out around the pollution source at a depth of twenty centimeters; the samples were then transported to the laboratory for identification. The quantification of the heavy metals lead, zinc, copper, and nickel was carried out by flame atomic absorption spectrophotometry in the soil and in the aerial and underground parts of the plants. Ten plant species were recorded in the polluted site, three of them belonging to the grass family with a dominance percentage higher than 50%, followed by three other species belonging to the Composite family, represented by 12%, and one species for each of the families Linaceae, Plantaginaceae, Papilionaceae, and Boraginaceae. Koeleria phleoïdes L. and Avena sterilis L. of the grass family seem to be the dominant plants, although they are quite far from the pollution source. Lead pollution of the soils is the most pronounced for all stations, with values varying from 237.5 to 2682.5 µg.g⁻¹. Other peaks are observed for zinc (1177 µg.g⁻¹) and copper (635 µg.g⁻¹) at station 8, and for nickel (1800 µg.g⁻¹) at station 10. Among the inventoried plants, some species accumulate a significant amount of metals: Trifolium sp and K. phleoides for lead and zinc, P. lanceolata and G. tomentosa for nickel, and A. clavatus for zinc.
K. phleoides is a particularly interesting species because it accumulates a substantial quantity of heavy metals, especially in its aerial part. This makes it a good candidate for phytoextraction, which would facilitate the recovery of the pollutants by the simple removal of the shoots.
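Accumulation patterns like those reported above are often summarized with the translocation factor (shoot-to-root concentration ratio) and the bioconcentration factor (plant-to-soil ratio); a TF above 1 indicates the shoot accumulation that suits phytoextraction. The concentrations in the sketch below are hypothetical, not the study's measurements.

```python
def bioconcentration_factor(c_plant, c_soil):
    """BCF: metal concentration in plant tissue relative to soil (same units)."""
    return c_plant / c_soil

def translocation_factor(c_shoot, c_root):
    """TF > 1 indicates efficient root-to-shoot transfer, desirable
    for phytoextraction by shoot harvesting."""
    return c_shoot / c_root

# hypothetical lead concentrations (ug/g) for a shoot-accumulating grass
tf = translocation_factor(c_shoot=820.0, c_root=410.0)
bcf = bioconcentration_factor(c_plant=1230.0, c_soil=2682.5)
```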

Keywords: heavy metals, industrial pollution, phytotechnology, rehabilitation

Procedia PDF Downloads 48
362 Revisiting Hospital Ward Design Basics for Sustainable Family Integration

Authors: Ibrahim Abubakar Alkali, Abubakar Sarkile Kawuwa, Ibrahim Sani Khalil

Abstract:

The concept of space and function forms the bedrock of spatial configuration in architectural design; thus, the effectiveness and functionality of an architectural product depend on the cordial relationship between the two. This applies to all buildings, but especially to a hospital ward setting designed to accommodate various complex and diverse functions. Health care facility design, especially for an inpatient setting, is governed by many regulations and technical requirements. It is also affected by many less well-defined needs, particularly the response to culture and the need to provide for the presence and participation of patients' families. The spatial configuration of the hospital ward setting in developing countries gives no consideration to patients' families, despite the significant role they play in promoting recovery. Attempts to integrate facilities for patients' families have always been challenging, especially in developing countries like Nigeria, where accommodation for inpatients is predominantly in an open ward system. In addition, the situation is compounded by culture, which significantly dictates healthcare practices in Africa. Therefore, achieving a hospital ward setting that is patient- and family-centered requires careful assessment of family care actions and transaction spaces so as to arrive at an evidence-based solution. The aim of this study is therefore to identify how hospital ward spaces can be reconfigured to provide for sustainable family integration. In achieving this aim, a qualitative approach using the principles of behavioral mapping was employed in the male and female medical wards of the Federal Teaching Hospital (FTH) Gombe, Nigeria. The data obtained were analysed using classical and comparative content analysis. Patients' families were found to be a critical component of hospital ward design that cannot be undermined.
Accordingly, bedsides, open yards, corridors and foyers have been identified as patient families' transaction spaces that require design attention. Sustainable family integration can be achieved by revisiting the design requirements of these family transaction spaces on the basis of the findings, in order to avoid the rowdiness of the wards and uncoordinated sprawl.

Keywords: caregiving, design basics, family integration, hospital ward, sustainability

Procedia PDF Downloads 286
361 Development of Academic Software for Medial Axis Determination of Porous Media from High-Resolution X-Ray Microtomography Data

Authors: S. Jurado, E. Pazmino

Abstract:

Determination of the medial axis of a porous media sample is a non-trivial problem of interest to several disciplines, e.g., hydrology, fluid dynamics, contaminant transport, filtration, oil extraction, etc. However, the computational tools available to researchers are limited and restricted. The primary aim of this work was to develop a series of algorithms to extract porosity, medial axis structure, and pore-throat size distributions from porous media domains. A complementary objective was to provide the algorithms as free computational software available to the academic community of researchers and students interested in 3D data processing. The burn algorithm was tested on porous media data obtained from High-Resolution X-Ray Microtomography (HRXMT) and on idealized computer-generated domains. The real data and idealized domains were discretized into voxel domains of 550³ elements and binarized to denote solid and void regions in order to determine porosity. Subsequently, the algorithm identifies the layer of void voxels next to the solid boundaries. An iterative process then removes, or 'burns', void voxels layer by layer until all the void space has been characterized. Multiple strategies were tested to optimize the execution time and the use of computer memory, i.e., segmentation of the overall domain into subdomains, vectorization of operations, and extraction of single burn-layer data during the iterative process. The medial axis was determined by identifying the regions where burnt layers collide. The final medial axis structure was refined to avoid concave-grain effects and used to determine the pore-throat size distribution. A graphical user interface was developed to encompass all these algorithms, including the generation of idealized porous media domains. The software accepts HRXMT data as input, calculates porosity, medial axis, and pore-throat size distribution, and provides output in tabular and graphical formats.
Preliminary tests of the software developed during this study achieved medial axis, pore-throat size distribution and porosity determination for 100³, 320³ and 550³ voxel porous media domains in 2, 22, and 45 minutes, respectively, on a personal computer (Intel i7 processor, 16 GB RAM). These results indicate that the software is a practical and accessible tool for postprocessing HRXMT data in the academic community.
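The burn algorithm described above can be sketched in a few lines: peel off the layer of void voxels adjacent to solid, record the iteration number, and repeat until the void space is exhausted; the last voxels to burn are the medial-axis candidates. For brevity the sketch works on a 2D analogue of the 3D voxel domain, and the names and toy pore geometry are illustrative assumptions.

```python
def burn_numbers(void_cells, neighbours):
    """Assign each void cell the iteration ('burn layer') at which it is
    removed: cells adjacent to solid burn first, interior cells last."""
    void = set(void_cells)
    burn, layer = {}, 1
    while void:
        # boundary layer: void cells with at least one non-void (solid) neighbour
        boundary = {v for v in void
                    if any(tuple(a + b for a, b in zip(v, n)) not in void
                           for n in neighbours)}
        for v in boundary:
            burn[v] = layer
        void -= boundary
        layer += 1
    return burn

# 2D analogue: a 5x5 square pore surrounded by solid, 4-connected neighbours
n4 = [(1, 0), (-1, 0), (0, 1), (0, -1)]
pore = [(x, y) for x in range(5) for y in range(5)]
burn = burn_numbers(pore, n4)
# the outer ring burns at layer 1, the centre cell (2, 2) burns last
```

In 3D the same loop runs with 6-connected voxel neighbours, and the collision regions between burnt layers give the medial axis.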

Keywords: medial axis, pore-throat distribution, porosity, porous media

Procedia PDF Downloads 100
360 Structural Analysis and Modelling in an Evolving Iron Ore Operation

Authors: Sameh Shahin, Nannang Arrys

Abstract:

Optimizing pit slope stability and reducing the strip ratio of a mining operation are two key tasks in geotechnical engineering. With a growing demand for minerals and an increasing cost associated with extraction, companies are constantly re-evaluating the viability of mineral deposits and challenging their geological understanding. Within Rio Tinto Iron Ore, the Structural Geology (SG) team investigates and collects critical data, such as point-based orientations, mapping, and geological inferences from adjacent pits, to re-model deposits where previous interpretations have failed to account for structurally controlled slope failures. Using innovative data collection methods and data-driven investigation, SG aims to address the root causes of slope instability. Committing to a resource grid drill campaign as the primary source of data collection will often bias data collection towards a specific orientation and significantly reduce the capability to identify and qualify complexity. Consequently, these limitations make it difficult to construct a realistic and coherent structural model that identifies adverse structural domains. Without the consideration of complexity and the capability of capturing these structural domains, mining operations run the risk of inadequately designed slopes that may fail and potentially harm people. Regional structural trends have been considered in conjunction with surface and in-pit mapping data to model multi-batter fold structures that were absent from previous iterations of the structural model. The risk is evident in the newly identified dip-slope and rock-mass-controlled sectors of the geotechnical design, rather than a ubiquitous dip-slope sector across the pit. The reward is two-fold: 1) providing sectors of rock-mass-controlled design in previously interpreted structurally controlled domains, and 2) the opportunity to optimize the slope angle for mineral recovery and a reduced strip ratio.
Furthermore, the work has produced a high-confidence model whose structures and geometries can account for historic slope instabilities in structurally controlled domains where previous design assumptions failed.

Keywords: structural geology, geotechnical design, optimization, slope stability, risk mitigation

Procedia PDF Downloads 22
359 Clostridium thermocellum DBT-IOC-C19, A Potential CBP Isolate for Ethanol Production

Authors: Nisha Singh, Munish Puri, Collin Barrow, Deepak Tuli, Anshu S. Mathur

Abstract:

The biological conversion of lignocellulosic biomass to ethanol is a promising strategy to address the present global crisis of dwindling fossil fuels. Existing bioethanol production technologies face cost constraints due to the mandatory pretreatment and extensive enzyme production steps involved. A unique process configuration known as consolidated bioprocessing (CBP) is believed to be a potentially cost-effective process due to its efficient integration of enzyme production, saccharification, and fermentation into one step. Owing to several favorable features, such as single-step conversion, no need to add exogenous enzymes, and facilitated product recovery, CBP has gained the attention of researchers worldwide. However, several technical and economic barriers need to be overcome to make consolidated bioprocessing a commercially viable process. Finding a natural candidate CBP organism is critically important, and thermophilic anaerobes are the preferred microorganisms. The thermophilic anaerobes suited to CBP mainly belong to the genera Clostridium, Caldicellulosiruptor, Thermoanaerobacter, Thermoanaerobacterium, and Geobacillus. Amongst them, Clostridium thermocellum has received increased attention as a high-utility CBP candidate due to its high growth rate on crystalline cellulose, its highly efficient cellulosome system, and its ability to produce ethanol directly from cellulose. Recently, the availability of genetic and molecular tools for the metabolic engineering of Clostridium thermocellum has further improved the prospects of a commercial CBP process. With this view, we have specifically screened for cellulolytic and xylanolytic thermophilic anaerobic ethanol-producing bacteria from unexplored hot springs in India. One of the isolates, identified as a new strain of Clostridium thermocellum, is a potential CBP organism.
This strain has shown superior avicel and xylan degradation under unoptimized conditions compared to reported wild-type strains of Clostridium thermocellum and produced more than 50 mM ethanol in 72 hours from 1% avicel at 60°C. In addition, this strain shows good ethanol tolerance and growth on both hexose and pentose sugars. Hence, with further optimization, this new strain could be developed into a potential CBP microbe.
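As a rough sanity check on the reported titre, the 50 mM figure can be converted to a mass concentration and compared against the stoichiometric maximum. The sketch below is illustrative only; the 0.511 g/g Gay-Lussac yield, the 1.111 cellulose-to-glucose hydration factor, and the reading of 1% (w/v) avicel as 10 g/L are standard textbook assumptions, not figures from the abstract.

```python
# Illustrative back-of-envelope check (assumed textbook constants, not from the study)
ETHANOL_MW = 46.07                              # g/mol
ethanol_g_per_L = 50.0 / 1000 * ETHANOL_MW      # 50 mM -> ~2.3 g/L

cellulose_g_per_L = 10.0                        # 1 % (w/v) avicel
glucose_equiv = cellulose_g_per_L * 1.111       # water added on hydrolysis
theoretical_max = glucose_equiv * 0.511         # Gay-Lussac yield, g ethanol / g glucose
pct_of_theoretical = 100.0 * ethanol_g_per_L / theoretical_max   # roughly 40 %
```

On these assumptions the reported titre corresponds to roughly 40% of the theoretical maximum, a plausible value for an unoptimized wild-type fermentation.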

Keywords: Clostridium thermocellum, consolidated bioprocessing, ethanol, thermophilic anaerobes

Procedia PDF Downloads 388
358 Literature Review on the Controversies and Changes in the Insanity Defense from the Wild Beast Standard of 1723 to the Federal Insanity Defense Reform Act of 1984

Authors: Jane E. Hill

Abstract:

Many variables led to changes in the insanity defense between the Wild Beast Standard of 1723 and the Federal Insanity Defense Reform Act of 1984. The insanity defense is used in criminal trials to argue that the defendant is ‘not guilty by reason of insanity’ because the individual was unable to distinguish right from wrong at the time they were breaking the law. Whether the insanity defense can be used in criminal court depends on the mental state of the defendant at the time the criminal act was committed. This leads to the question: did the defendant know right from wrong when they broke the law? In 1723, the Wild Beast Test stated that to be exempted from punishment, the individual must be totally deprived of their understanding and memory and ‘doth not know what he is doing.’ The Wild Beast Test remained the standard in England for over seventy-five years. In 1800, James Hadfield attempted to assassinate King George III, acting on delusional beliefs. The jury and the judge returned a verdict of not guilty. However, to legally confine him, the Criminal Lunatics Act was enacted: individuals deemed ‘criminal lunatics’ who were given a verdict of not guilty would be taken into custody rather than freed into society. In 1843, the M'Naghten test required that the individual did not know the quality or the wrongfulness of the offense at the time they committed the criminal act(s). Daniel M'Naghten was acquitted on grounds of insanity. The M'Naghten Test remains a cornerstone of the insanity defense used in many courts today. The Irresistible Impulse Test was adopted in the United States in 1887. It held that offenders who could not control their behavior while committing a criminal act were not deterrable by the criminal sanctions in place; therefore, no purpose would be served by convicting the offender.
Due to criticisms of the latter two tests, the federal District of Columbia Court of Appeals ruled in 1954 to adopt the ‘product test’ for insanity proposed by Isaac Ray. The Durham Rule, also known as the ‘product test’, stated that an individual is not criminally responsible if the unlawful act was the product of mental disease or defect. Two questions therefore need to be asked and answered: (1) did the individual have a mental disease or defect at the time they broke the law? and (2) was the criminal act the product of that disease or defect? The Durham courts failed to clearly define ‘mental disease’ or ‘product.’ Trial courts consequently had difficulty applying these terms, and the controversy continued until 1972, when the Durham rule was overturned in most jurisdictions. The American Law Institute subsequently combined the M'Naghten test with the Irresistible Impulse Test, and the United States Congress adopted an insanity test for the federal courts in 1984.

Keywords: insanity defense, psychology law, The Federal Insanity Defense Reform Act of 1984, The Wild Beast Standard in 1723

Procedia PDF Downloads 126
357 Product Separation of Green Processes and Catalyst Recycling of a Homogeneous Polyoxometalate Catalyst Using Nanofiltration Membranes

Authors: Dorothea Voß, Tobias Esser, Michael Huber, Jakob Albert

Abstract:

The growing world population and the associated increase in demand for energy and consumer goods, as well as increasing waste production, requires the development of sustainable processes. In addition, the increasing environmental awareness of our society is a driving force for the requirement that processes must be as resource and energy efficient as possible. In this context, the use of polyoxometalate catalysts (POMs) has emerged as a promising approach for the development of green processes. POMs are bifunctional polynuclear metal-oxo-anion cluster characterized by a strong Brønsted acidity, a high proton mobility combined with fast multi-electron transfer and tunable redox potential. In addition, POMs are soluble in many commonly known solvents and exhibit resistance to hydrolytic and oxidative degradation. Due to their structure and excellent physicochemical properties, POMs are efficient acid and oxidation catalysts that have attracted much attention in recent years. Oxidation processes with molecular oxygen are worth mentioning here. However, the fact that the POM catalysts are homogeneous poses a challenge for downstream processing of product solutions and recycling of the catalysts. In this regard, nanofiltration membranes have gained increasing interest in recent years, particularly due to their relative sustainability advantage over other technologies and their unique properties such as increased selectivity towards multivalent ions. In order to establish an efficient downstream process for the highly selective separation of homogeneous POM catalysts from aqueous solutions using nanofiltration membranes, a laboratory-scale membrane system was designed and constructed. By varying various process parameters, a sensitivity analysis was performed on a model system to develop an optimized method for the recovery of POM catalysts. From this, process-relevant key figures such as the rejection of various system components were derived. 
These results form the basis for further experiments on other systems to test the transferability to several separation tasks with different POMs and products, as well as for laboratory-scale recycling experiments with the catalysts.
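One of the process-relevant key figures mentioned, the rejection of a system component, is conventionally computed as the observed rejection from feed and permeate concentrations. A minimal sketch; the function name and the concentration values are illustrative assumptions, not data from the study:

```python
def observed_rejection(c_permeate, c_feed):
    """Observed rejection R = 1 - C_permeate / C_feed (fraction of solute retained)."""
    if c_feed <= 0:
        raise ValueError("feed concentration must be positive")
    return 1.0 - c_permeate / c_feed

# Hypothetical POM catalyst concentrations, in mmol/L:
rejection = observed_rejection(c_permeate=0.05, c_feed=2.0)   # 0.975, i.e. 97.5 %
```

A rejection near 1 for the multivalent POM anion but near 0 for the small product molecules is what would make the nanofiltration separation selective.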

Keywords: downstream processing, nanofiltration, polyoxometalates, homogeneous catalysis, green chemistry

Procedia PDF Downloads 75
356 Data Refinement Enhances the Accuracy of Short-Term Traffic Latency Prediction

Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong

Abstract:

Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost that yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance score in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. 
It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than of the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than one-step prediction of the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
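Finding (1), that the median latency is a more robust prediction target than the average, can be illustrated with synthetic data. The numbers below are invented for illustration and are not taken from the Taiwan Freeway dataset:

```python
import random
import statistics

random.seed(0)
# A day of 5-minute latency samples (seconds) for a segment, with a few
# incident-driven outliers injected every 60 slots.
base = [random.gauss(600, 20) for _ in range(288)]
latencies = [x + (3000 if i % 60 == 0 else 0) for i, x in enumerate(base)]

mean_latency = statistics.mean(latencies)      # dragged upward by the incidents
median_latency = statistics.median(latencies)  # barely moves
```

Because the median is insensitive to these rare spikes, a model trained on median latencies targets a far more stable quantity, which is consistent with the reported 35% improvement in mean square error.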

Keywords: data refinement, machine learning, mutual information, short-term latency prediction

Procedia PDF Downloads 156
355 Investigation of the IL23R Psoriasis/PsA Susceptibility Locus

Authors: Shraddha Rane, Richard Warren, Stephen Eyre

Abstract:

IL-23 is a pro-inflammatory molecule that signals T cells to release cytokines such as IL-17A and IL-22. Psoriasis is driven by a dysregulated immune response, within which IL-23 is now thought to play a key role. Genome-wide association studies (GWAS) have identified a number of genetic risk loci that support the involvement of IL-23 signalling in psoriasis; in particular, a robust susceptibility locus at the gene encoding a subunit of the IL-23 receptor (IL23R) (Stuart et al., 2015; Tsoi et al., 2012). The lead psoriasis-associated SNP rs9988642 is located approximately 500 bp downstream of IL23R but is in tight linkage disequilibrium (LD) with a missense SNP rs11209026 (R381Q) within IL23R (r2 = 0.85). The minor (G) allele of rs11209026 is present in approximately 7% of the population and is protective for psoriasis and several other autoimmune diseases, including IBD, ankylosing spondylitis, RA, and asthma. The psoriasis-associated missense SNP R381Q causes an arginine-to-glutamine substitution in a region of the IL23R protein between the transmembrane domain and the putative JAK2 binding site in the cytoplasmic portion. This substitution is expected to affect the receptor’s surface localisation or signalling ability, rather than IL23R expression. Recent studies have also identified a psoriatic arthritis (PsA)-specific signal at IL23R, thought to be independent of the psoriasis association (Bowes et al., 2015; Budu-Aggrey et al., 2016). The lead PsA-associated SNP rs12044149 is intronic to IL23R and is in LD with likely causal SNPs intersecting promoter and enhancer marks in memory CD8+ T cells (Budu-Aggrey et al., 2016). It is therefore likely that the PsA-specific SNPs affect IL23R function via a different mechanism from the psoriasis-specific SNPs. It can be hypothesised that the risk allele for PsA located within the IL23R promoter causes an increase in IL23R expression relative to the protective allele.
An increased expression of IL23R might then lead to an exaggerated immune response. The independent genetic signals identified for psoriasis and PsA at this locus indicate that different mechanisms underlie the two conditions, although both likely affect the function of IL23R. It is very important to further characterise these mechanisms in order to better understand how the IL-23 receptor and its downstream signalling are affected in both diseases. This will help to determine how psoriasis and PsA patients might respond differently to therapies, particularly IL-23 biologics. To investigate this further, we have developed an in vitro model using CD4 T cells that express either wild-type IL23R and IL12Rβ1 or mutant IL23R (R381Q) and IL12Rβ1. A model expressing different isoforms of IL23R is also under development to investigate the effects on IL23R expression. We propose to further investigate the variants for Ps and PsA and to characterise key intracellular processes related to these variants.

Keywords: IL23R, psoriasis, psoriatic arthritis, SNP

Procedia PDF Downloads 151
354 Thermal Analysis of Adsorption Refrigeration System Using Silicagel–Methanol Pair

Authors: Palash Soni, Vivek Kumar Gaba, Shubhankar Bhowmick, Bidyut Mazumdar

Abstract:

Refrigeration technology is a fast-developing field with very wide application in both domestic and industrial areas. It has progressed from simple ice coolers used to store foodstuffs to today's sophisticated cold storages and air conditioning systems. A variety of techniques are used to bring the temperature below ambient. Adsorption refrigeration is a novel, advanced, and promising technique developed over the past few decades. It has gained attention for its ability to exploit virtually unlimited natural sources such as solar energy and geothermal energy, or even waste heat recovered from plants or locomotive exhaust, to fulfill its energy need. This reduces the exploitation of non-renewable resources and hence reduces pollution. This work aims to develop a model for a solar adsorption refrigeration system and to simulate it under different operating conditions. In this system, the mechanical compressor is replaced by a thermal compressor. The thermal compressor uses renewable energy such as solar or geothermal energy, which makes it useful in areas where electricity is not available. Refrigerants normally in use, such as chlorofluorocarbons and perfluorocarbons, have harmful effects including ozone depletion and greenhouse warming. Another advantage of adsorption systems is that they can replace these refrigerants with less harmful natural refrigerants such as water, methanol, or ammonia. Thus the double benefit of reduced energy consumption and reduced pollution can be achieved. A thermodynamic model was developed for the proposed adsorber, and a universal MATLAB code was used to simulate the model. Simulations were carried out for different operating conditions for the silicagel-methanol working pair.
Various graphs are plotted relating regeneration temperature, adsorption capacity, coefficient of performance, desorption rate, specific cooling power, adsorption/desorption times, and mass. The results show that an adsorption system can be installed successfully for refrigeration purposes, as it saves power and reduces carbon emissions, even though its efficiency is lower than that of conventional systems. The model was tested for compliance in a cold storage refrigeration application with a cooling load of 12 TR.
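The coefficient of performance computed by such models is, in simplified form, the ratio of evaporator cooling to heat supplied to the thermal compressor. A minimal energy-balance sketch (in Python rather than the MATLAB of the study, and with hypothetical property values for the silicagel-methanol pair):

```python
def adsorption_cop(m_adsorbent, delta_uptake, h_evap, h_desorption, q_sensible):
    """Idealized thermal COP of a single-bed adsorption chiller.
    m_adsorbent  : silica gel mass per cycle, kg
    delta_uptake : cycled methanol uptake, kg refrigerant / kg adsorbent
    h_evap       : latent heat of methanol at the evaporator, kJ/kg
    h_desorption : heat of desorption, kJ/kg refrigerant
    q_sensible   : sensible heat to warm the bed each cycle, kJ
    """
    m_refrigerant = m_adsorbent * delta_uptake
    q_cooling = m_refrigerant * h_evap          # useful cooling at the evaporator
    q_input = m_refrigerant * h_desorption + q_sensible   # heat to the thermal compressor
    return q_cooling / q_input

# Hypothetical values: 20 kg silica gel cycling 0.10 kg/kg of methanol
cop = adsorption_cop(20.0, 0.10, 1100.0, 1800.0, 1000.0)
```

Thermal COPs well below 1 are typical for adsorption cycles, consistent with the abstract's remark that the efficiency is lower than that of conventional vapor-compression systems.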

Keywords: adsorption, refrigeration, renewable energy, silicagel-methanol

Procedia PDF Downloads 192
353 Developing Improvements to Multi-Hazard Risk Assessments

Authors: A. Fathianpour, M. B. Jelodar, S. Wilkinson

Abstract:

This paper outlines the approaches available for multi-hazard risk assessment. There is currently confusion about how to assess multi-hazard impacts, so this study aims to determine which of the available options are the most useful. The paper uses an international literature search, analysis of current multi-hazard assessments, and a case study to illustrate the effectiveness of the chosen method. Findings from this study will help those wanting to assess multi-hazards to adopt a straightforward approach. The paper is significant in that it helps to interpret the various approaches and concludes with a preferred method. Many people in the world live in hazardous environments and are susceptible to disasters. Unfortunately, a disaster is often compounded by additional cascading hazards, so people may confront more than one hazard simultaneously. Hazards include natural hazards (earthquakes, floods, etc.) and cascading human-made hazards (for example, Natural Hazard Triggering Technological disasters (Natech) such as fire, explosion, and toxic release). Multi-hazards have a more destructive impact on urban areas than any one hazard alone. In addition, climate change is creating links between different disasters, such as landslide dams and debris flows, leading to more destructive incidents. Much of the prevailing literature deals with only one hazard at a time; however, sophisticated multi-hazard assessments have recently started to appear. Given that multi-hazards occur, it is essential to take multi-hazard risk assessment into consideration. This paper reviews the multi-hazard assessment methods published to date and categorizes the strengths and weaknesses of using these methods in risk assessment. Napier City is selected as a case study to demonstrate the necessity of multi-hazard risk assessment.
In order to assess multi-hazard risk assessments, the current methods were first described; next, their drawbacks were outlined; finally, the improvements made to date were summarised. Generally, the main problem of multi-hazard risk assessment is making valid assumptions about the risk arising from the interactions of different hazards. Risk assessment studies have started to address multi-hazard situations, but drawbacks such as uncertainty and lack of data show the need for more precise assessment. It should be noted that ignoring, or only partially considering, multi-hazards in risk assessment will lead to overestimates or oversights in resilience and recovery management.

Keywords: cascading hazards, disaster assessment, multi-hazards, risk assessment

Procedia PDF Downloads 94
352 Desulphurization of Waste Tire Pyrolytic Oil (TPO) Using Photodegradation and Adsorption Techniques

Authors: Moshe Mello, Hilary Rutto, Tumisang Seodigeng

Abstract:

The nature of tires makes them extremely challenging to recycle: their chemically cross-linked polymers are neither fusible nor soluble and, consequently, cannot be remolded into other shapes without serious degradation. Open dumping of tires pollutes the soil, contaminates underground water, and provides ideal breeding grounds for disease-carrying vermin. The thermal decomposition of tires by pyrolysis produces char, gases, and oil. Oils derived from waste tires share many properties with commercial diesel fuel. The problem with the light oil derived from pyrolysis of waste tires is its high sulfur content (> 1.0 wt.%), which leads to the emission of harmful sulfur oxide (SOx) gases when the oil is combusted in diesel engines. Desulphurization of TPO is necessary due to increasingly stringent environmental regulations worldwide. Hydrodesulphurization (HDS) is the commonly practiced technique for the removal of sulfur species from liquid hydrocarbons. However, the HDS technique fails in the presence of complex sulfur species such as dibenzothiophene (DBT), which are present in TPO. This study aims to investigate the viability of photodegradation (photocatalytic oxidative desulphurization) and adsorptive desulphurization technologies for the efficient removal of complex and non-complex sulfur species from TPO. The study focuses on optimizing the cleaning process (removal of impurities and asphaltenes) by varying the process parameters: temperature, stirring speed, acid/oil ratio, and time. The treated TPO will then be sent for vacuum distillation to attain the desired diesel-like fuel. The effect of temperature, pressure, and time will be determined for vacuum distillation of both raw TPO and the acid-treated oil for comparison purposes.
Polycyclic sulfides present in the distilled (diesel-like) light oil will be oxidized, dominantly to the corresponding sulfoxides and sulfones, via a photocatalyzed system using TiO2 as a catalyst and hydrogen peroxide as an oxidizing agent; finally, acetonitrile will be used as an extraction solvent. Adsorptive desulphurization will then be used to adsorb traces of sulfurous compounds remaining after the photocatalytic step. This combined desulphurization scheme is expected to give high desulphurization efficiency with reasonable oil recovery.
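Photocatalytic oxidative desulphurization is commonly modelled with pseudo-first-order kinetics, the low-concentration limit of the Langmuir-Hinshelwood model. The rate constant and irradiation time below are hypothetical, chosen only to illustrate how removal efficiency follows from such a model:

```python
import math

def sulfur_remaining(c0, k, t):
    """Pseudo-first-order photocatalytic decay: C(t) = C0 * exp(-k * t).
    c0 in wt.%, k in 1/min, t in min."""
    return c0 * math.exp(-k * t)

def removal_pct(c0, k, t):
    """Percentage of sulfur removed after time t."""
    return 100.0 * (1.0 - sulfur_remaining(c0, k, t) / c0)

# Hypothetical run: 1.0 wt.% sulfur, k = 0.02 /min, 2 h of irradiation
removal = removal_pct(1.0, 0.02, 120.0)
```

Fitting k from concentration-time data at each condition (catalyst loading, oxidant dose, etc.) is the usual way such optimization studies compare operating parameters.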

Keywords: adsorption, asphaltenes, photocatalytic oxidation, pyrolysis

Procedia PDF Downloads 255
351 Effects of the COVID-19 Pandemic in Japan on Japanese People’s and Expatriates’ Lifestyles

Authors: Noriyuki Suyama

Abstract:

This paper examines consumer behavioral changes by analyzing data collected by ASMARKS Co., a research company in Japan. The purpose of the paper is to understand two comparisons: before vs. after the COVID-19 pandemic, and Japanese residents vs. expatriates living in Japan. Examining the analysis results yielded useful insights into new business models for businesses in Japan as a micro-level perspective. The paper also explores the future of globalization by taking into consideration national political and economic changes as a macro-level perspective. COVID-19 has continued its spread across the world, with more than 60 million confirmed cases in 190 countries. The pandemic, with its behavioral restrictions and mandates, has disrupted consumers' lifestyle habits. Consumers tend to learn new ways when routine actions become difficult; for example, when the government urges people to refrain from going out, they try to telecommute from home. Even if the situation returns to normal, people may retain the changed lifestyles that fit them best. Some of the data show typical effects of COVID-19 in the before vs. after comparison: forced exposure to digitalized work-life styles; more flexible time at home; the importance of gathering trustworthy and useful information to distinguish good from bad; etc. In addition, Japanese respondents have changed their lifestyles less than expatriates living in Japan. For example, while 94% of expatriates reduced their outings because of self-quarantine, only 55% of Japanese respondents did so. There are further differences in both comparisons in the analysis results. The economic downtrend resulting from COVID-19 is expected to be at least as devastating as that of the financial crisis, if not more so.
With unemployment levels in the US taking two weeks to reach what took six months in the 2008 crisis, there is little doubt of a global recession, which some predict could reach 10% or more of GDP. As a result, globalization of the supply chain of goods and services will suffer a negative impact. Many governmental financial and economic policies are expected to focus on domestic profits and interests, excluding other countries' interests, as was the case with the Recovery Act just after the global financial crisis of 2007-2008. Both the micro- and macro-level analyses reveal important implications for businesses in Japan serving Japanese consumers, as well as for post-COVID-19 global business.

Keywords: COVID-19, lifestyle in Japan, expatriates, consumer behavior

Procedia PDF Downloads 126
350 Surface Display of Lipase on Yarrowia lipolytica Cells

Authors: Evgeniya Y. Yuzbasheva, Tigran V. Yuzbashev, Natalia I. Perkovskaya, Elizaveta B. Mostova

Abstract:

Cell-surface display of lipase is of great interest as it has many applications in the field of biotechnology owing to its unique advantages: simplified product purification and cost-effective downstream processing. One promising area of application for whole-cell biocatalysts with surface-displayed lipase is biodiesel synthesis. Biodiesel is a biodegradable, renewable, and nontoxic alternative fuel for diesel engines. Although the alkaline catalysis method has been widely used for biodiesel production, it has a number of limitations, such as rigorous feedstock specifications and complicated downstream processes, including removal of inorganic salts from the product, recovery of the salt-containing by-product glycerol, and treatment of alkaline wastewater. Enzymatic synthesis of biodiesel can overcome these drawbacks. In this study, Lip2p lipase was displayed on Yarrowia lipolytica cells via C- and N-terminal fusion variants. The active site of the lipase is located near the C-terminus; therefore, to prevent loss of activity, a glycine-serine linker was inserted between Lip2p and the C-domain. The hydrolytic activity of the displayed lipase reached 12,000-18,000 U/g of dry weight. However, leakage of enzyme from the cell wall was observed. In the C-terminal fusion variant, leakage occurred due to proteolytic cleavage within the linker peptide. In the N-terminal fusion variant, the leaked enzyme appeared as three proteins, one of which corresponded to the whole hybrid protein. The calculated number of recombinant enzyme molecules displayed on the cell surface is approximately 6-9 × 10^5 per cell, which is close to the theoretical maximum (2 × 10^6 molecules/cell). Thus, we attribute the enzyme leakage to the limited space available on the cell surface. Nevertheless, cell-bound lipase exhibited greater stability to short-term and long-term temperature treatment than the native enzyme.
It retained 74% of its original activity after 5 min of incubation at 60°C, and 83% of its original activity after incubation at 50°C for 5 h. Cell-bound lipase also had higher stability in organic solvents and detergents. The developed whole-cell biocatalyst was reused for biodiesel synthesis: two repeated cycles of methanolysis yielded 84.1% and 71.0% methyl esters after 33 h and 45 h of reaction, respectively.

Keywords: biodiesel, cell-surface display, lipase, whole-cell biocatalyst

Procedia PDF Downloads 471
349 High Performance Computing Enhancement of Agent-Based Economic Models

Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna

Abstract:

This research presents the details of the implementation of high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to study the economy as a dynamic system of interacting heterogeneous agents, and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, like major disasters, changes in policies, exogenous shocks, etc., on the economy of the country or the region, it is pertinent to study how the disruptions cascade through every single economic entity affecting its decisions and interactions, and eventually affect the economic macro parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using message passing interface (MPI). A balanced distribution of computational load among MPI-processes (i.e. CPU cores) of computer clusters while taking all the interactions among agents into account is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g. credit networks, etc.) whereas others are dense with random links (e.g. consumption markets, etc.). The agents are partitioned into mutually-exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions like the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI-process, are adopted. 
Efficient communication among MPI-processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e. about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is further being enhanced to simulate 1:1 model of Euro-zone (i.e. 322 million agents).
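The partitioning idea described above, keeping each employer together with all of its employees on one process so those interactions never cross process boundaries, can be sketched as a greedy bin-packing over the employer-employee graph. The following is an illustrative Python sketch of the partitioning logic only; the function name and data layout are assumptions, and the actual implementation distributes agents across MPI processes:

```python
def partition_by_employer(employer_staff, n_procs):
    """Greedily assign each employer (plus all its employees) to the
    currently least-loaded process, so employer-employee interactions
    stay local to one process.
    employer_staff: dict mapping employer id -> list of employee ids."""
    loads = [0] * n_procs          # number of agents per process
    assignment = {}                # employer id -> process rank
    # Placing the largest employers first improves the greedy balance.
    for employer, staff in sorted(employer_staff.items(),
                                  key=lambda kv: -len(kv[1])):
        rank = loads.index(min(loads))
        assignment[employer] = rank
        loads[rank] += 1 + len(staff)   # the employer agent plus its employees
    return assignment, loads

# Toy example: firm i has i employees, spread over 4 processes
firms = {i: list(range(i)) for i in range(1, 9)}
assignment, loads = partition_by_employer(firms, 4)
```

The remaining interaction graphs (consumption markets, credit networks) then only need communication for the edges that cross these partitions, which is what the recruitment-agency and sales-outlet proxies are designed to minimize.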

Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process

Procedia PDF Downloads 113
348 Postoperative Wound Infections Following Caesarean Section in Obese Patients

Authors: S. Yeo, M. Mathur

Abstract:

Introduction: Obesity, defined as a Body Mass Index (BMI) greater than or equal to 30 kg/m², is associated with an increased risk of complications during pregnancy and delivery. During labour, obese mothers often require greater intervention and have higher rates of caesarean section. Despite a low overall rate of serious complications following caesarean section, a high BMI predisposes to a higher risk of postoperative complications. Our study, therefore, aimed to investigate the impact of antenatal obesity on adverse outcomes following caesarean section, particularly wound-related infections. Materials and Methods: A retrospective cohort study of all caesarean deliveries during the first quarter of a chosen year was undertaken in our hospital, which is a tertiary referral centre with > 12,000 deliveries per year. Patients’ health records and data from our hospital’s electronic labour and delivery database were reviewed. Data analysis was performed using the Statistical Package for the Social Sciences (SPSS), and odds ratios and adjusted odds ratios were calculated with 95% confidence intervals (CI). Results: A total of 1829 deliveries were reviewed during our study period. Of these, 180 (9.8%) patients were obese. The rate of caesarean delivery was 48.9% in obese patients versus 28.1% in non-obese patients. Post-operatively, 17% of obese patients experienced wound infection versus 0.2% of non-obese patients. Obese patients were also more likely to experience major postpartum haemorrhage (4.6% vs. 0.2%) and postpartum pyrexia (18.2% vs. 5.0%) than non-obese patients. Conclusions: Obesity is a significant risk factor for the development of postoperative complications following caesarean section. Wound infection remains a major concern for obese patients undergoing major surgery and results in extensive morbidity during the postnatal period.
Postpartum infection can prolong recovery and affect maternal mental health, leading to reduced perinatal bonding with long-term implications for breastfeeding and parenting confidence. This study supports the need for standardized protocols specifically for obese patients undergoing caesarean section. Multidisciplinary team care involving anaesthesia, family medicine, and plastic surgery counterparts early in the antenatal journey may be beneficial where wound complications are anticipated, and may minimize the burden of postoperative infection in obese mothers.
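The odds ratios and 95% confidence intervals described in the Methods can be computed directly from a 2x2 exposure/outcome table. The minimal Python sketch below uses the standard Wald method; the counts are hypothetical, reconstructed from the reported percentages (180 obese with 17% infected; 1649 non-obese with 0.2% infected), not taken from the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Wald 95% CI for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts reconstructed from the reported percentages.
or_, lo, hi = odds_ratio_ci(31, 149, 3, 1646)
print(f"OR = {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```

Adjusted odds ratios, as reported in the abstract, would additionally require a logistic regression model controlling for confounders.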

Keywords: pregnancy, obesity, caesarean, infection

Procedia PDF Downloads 63
347 Evaluating the Efficacy of Tasquinimod in Covid-19

Authors: Raphael Udeh, Luis García De Guadiana Romualdo, Xenia Dolje-Gore

Abstract:

Background: The public health impact of COVID-19 is enormous: as of 25 March 2021, the global COVID-19 burden exceeded 123 million cases and 2.7 million deaths worldwide. Rationale: Recent evidence points to calprotectin as a therapeutic target, showing that tasquinimod, of the quinoline-3-carboxamide family, is capable of blocking the interaction between calprotectin and TLR4, thereby preventing the cytokine release syndrome that heralds functional exhaustion in COVID-19. Early preclinical studies showed that tasquinimod inhibits tumor growth and prevents angiogenesis and the cytokine storm. Phase I-III clinical studies in prostate cancer showed a good safety profile and good radiologic progression-free survival but no effect on overall survival. Hypothesis: Strategic endeavors have been amplified globally to assess new therapeutic interventions for COVID-19 management, yet the clinical and antiviral efficacy of tasquinimod in COVID-19 remains to be explored. The primary objective of this trial will therefore be to evaluate the efficacy of tasquinimod in the treatment of adult patients with severe COVID-19. We hypothesise that, among adults with COVID-19 infection, tasquinimod will reduce the severe respiratory distress associated with COVID-19 compared to placebo over a 28-day study period. Method: The setting is in Europe. Design: a randomized, placebo-controlled, phase II double-blinded trial. The trial lasts 28 days from randomization; tasquinimod capsules are given as 0.5 mg daily for the first fortnight, then 1 mg daily for the second fortnight. The primary outcome is assessed using a six-point ordinal scale alongside eight secondary outcomes. 125 participants are to be enrolled, with data collection at baseline and subsequent data points, and safety monitored via serological profile. 
Significance: This work could potentially establish tasquinimod as an effective and safe therapeutic agent for COVID-19 by reducing severe respiratory distress and shortening time to recovery, time on oxygen, and length of admission. It will also drive future research, such as a larger multi-centre RCT.

Keywords: calprotectin, COVID-19, phase II trial, tasquinimod

Procedia PDF Downloads 178
346 Performance Evaluation of On-Site Sewage Treatment System (Johkasou)

Authors: Aashutosh Garg, Ankur Rajpal, A. A. Kazmi

Abstract:

The efficiency of an on-site wastewater treatment system named Johkasou was evaluated based on its pollutant removal efficiency over 10 months. The system was installed at IIT Roorkee and had a capacity of treating 7 m3/d of sewage, sufficient for a group of 30-50 people. It was fed with actual wastewater through an equalization tank to eliminate fluctuations throughout the day. Methanol and ammonium chloride were added to this equalization tank to increase the Chemical Oxygen Demand (COD) and ammonia content of the influent. The outlet from the Johkasou was sent to a tertiary unit consisting of a pressure sand filter and an activated carbon filter for further treatment. Samples were collected on alternate days from Monday to Friday, and the following parameters were evaluated: Chemical Oxygen Demand (COD), Biochemical Oxygen Demand (BOD), Total Suspended Solids (TSS), and Total Nitrogen (TN). The average removal efficiencies for COD, BOD, TSS, and TN were 89.6%, 97.7%, 96%, and 80%, respectively. The cost of treating the wastewater comes out to Rs 23/m3, which includes electricity, cleaning and maintenance, chemical, and desludging costs. Tests for coliforms were also performed, and the removal efficiency for total and fecal coliforms was 100%. The sludge generation rate is approximately 20% of the BOD removed, and the sludge needs to be removed twice a year. The system also showed a very good response to hydraulic shock loads. We performed a vacation stress analysis on the system to evaluate its performance when there is no influent for 8 consecutive days. From the results of this stress analysis, we concluded that the system needs a recovery time of about 48 hours to stabilize. 
After about 2 days, the system returns to its original condition, and all effluent parameters fall within the limits of the National Green Tribunal (NGT) standards. We also performed another stress analysis to save electricity, in which the main aeration blower was turned off for 2 to 12 hours a day; the results showed that the blower can be turned off for about 4-6 hours a day, reducing electricity costs by about 25%. It was concluded that the Johkasou system can remove the tested physicochemical parameters sufficiently to satisfy the prescribed limits of the Indian Standards.
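The removal efficiencies quoted above follow the standard definition: influent concentration minus effluent concentration, divided by influent concentration. A minimal Python sketch; the influent/effluent values are hypothetical, chosen only to reproduce the reported average percentages (the abstract does not give the underlying concentrations).

```python
def removal_efficiency(influent, effluent):
    """Percent removal of a pollutant across the treatment system."""
    return (influent - effluent) / influent * 100

# Hypothetical influent/effluent concentrations (mg/L), chosen only to
# reproduce the reported average removal efficiencies.
samples = {
    "COD": (500, 52),     # ~89.6%
    "BOD": (250, 5.75),   # ~97.7%
    "TSS": (200, 8),      # 96%
    "TN":  (50, 10),      # 80%
}
for param, (c_in, c_out) in samples.items():
    print(f"{param}: {removal_efficiency(c_in, c_out):.1f}% removed")
```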

Keywords: on-site treatment, domestic wastewater, Johkasou, nutrient removal, pathogens removal

Procedia PDF Downloads 101
345 Effect of Weave on Cotton Fabric to Improve the Durable Press Finish Rating

Authors: Mayur Kudale, Priyanka Panchal

Abstract:

Cellulose fibres, mainly cotton, are the most important kind of fibre used for manufacturing shirting fabric. However, cotton's main disadvantage, that it wrinkles after washing, can be overcome with a special resin finish. This finish provides resistance against shrinkage along with improved wet and dry wrinkle recovery for cellulosic textiles. The Durable Press (DP) finish uses a mechanism of cross-linking with polymers or resins to inhibit the easy movement of the cellulose chains. The purpose of these experiments on weave is to observe and compare the variations in properties after DP finishing without adversely affecting the strength of the fabric. In this work, we prepared three types of fabric weave, viz. plain, twill, and sateen, with their construction parameters kept intact. To obtain the projected results, this work uses three variables, viz. concentration of resin, temperature, and time. The only remaining variable is thus the change in weave or construction under the DP finish, which opens the possibility of improving DP for any of the mentioned weaves. The combined effect of these resin-finish parameters yields the best method to improve DP. However, the DP finish can cause a side effect of reduced elasticity and flexibility in cellulosic fibres. Natural cellulose can lose abrasion resistance along with tear and tensile strength when a DP finish is applied. In this work, care is taken that the tear strength of the fabric does not drop below a certain limit; otherwise, the fabric would tear easily. It was found that there is a significant drop in tearing and tensile strength with the improvement of the DP finish. It was also found that the twill weave shows a larger percentage drop in tearing strength than the plain and sateen weaves. Several major observations were obtained from this work. 
First, the mixing of cotton should be done properly to achieve a higher DP rating in plain weave. Second, a careful combination of warp, weft, and fabric construction must be chosen to avoid a large drop in tear and tensile strength in twill weave. Third, the sateen weave has a good sheen and DP rating, so it can be used in gents' shirting and ladies' dress materials. It is concluded that, to achieve higher DP ratings, plain weave construction should be used rather than twill or sateen, because it shows the lowest drop in tear and tensile strength.

Keywords: concentration of resin, cross-linking, durable press (DP) finish, sheen, tear and tensile strength, weave

Procedia PDF Downloads 290
344 Destigmatising Generalised Anxiety Disorder: The Differential Effects of Causal Explanations on Stigma

Authors: John McDowall, Lucy Lightfoot

Abstract:

Stigma constitutes a significant barrier to the recovery and social integration of individuals affected by mental illness. Although there is some debate in the literature regarding the definition and utility of stigma as a concept, it is widely accepted that it comprises three components: stereotypical beliefs, prejudicial reactions, and discrimination. Stereotypical beliefs constitute the cognitive, knowledge-based component of stigma, referring to beliefs (often negative) about members of a group that are based on cultural and societal norms (e.g. ‘People with anxiety are just weak’). Prejudice refers to the affective/evaluative component of stigma and describes the endorsement of negative stereotypes and the resulting negative emotional reactions (e.g. ‘People with anxiety are just weak, and they frustrate me’). Discrimination refers to the behavioural component of stigma, which is arguably the most problematic, as it exerts a direct effect on the stigmatised person and may lead people to behave in a hostile or avoidant way towards them (e.g. refusal to hire them). Research exploring anti-stigma initiatives focuses primarily on an educational approach, with the view that accurate information will replace misconceptions and decrease stigma. Many approaches take a biogenetic stance, emphasising brain and biochemical deficits, the idea being that ‘mental illness is an illness like any other.’ While this approach tends to effectively reduce blame, it has also demonstrated negative effects such as increasing prognostic pessimism, the desire for social distance, and the perception of stereotypes. In the present study, 144 participants were split into three groups and read one of three vignettes presenting causal explanations for Generalised Anxiety Disorder (GAD): one explanation emphasised biogenetic factors as being important in the etiology of GAD, another emphasised psychosocial factors (e.g. aversive life events, poverty), and a third stressed the adaptive features of the disorder from an evolutionary viewpoint. A variety of measures tapping the various components of stigma were administered following the vignettes. No difference in stigma measures as a function of causal explanation was found. Participants who had had contact with mental illness in the past were significantly less stigmatising across a wide range of measures, but this did not interact with the type of causal explanation.

Keywords: generalised anxiety disorder, discrimination, prejudice, stigma

Procedia PDF Downloads 272
343 Development, Characterization and Performance Evaluation of a Weak Cation Exchange Hydrogel Using Ultrasonic Technique

Authors: Mohamed H. Sorour, Hayam F. Shaalan, Heba A. Hani, Eman S. Sayed, Amany A. El-Mansoup

Abstract:

Heavy metals (HMs) present an increasing threat to the aquatic and soil environment. Thus, techniques should be developed for the removal and/or recovery of HMs from point sources in the generating industries. This paper reports our endeavors in the in-house development of weak cation exchange polyacrylate hydrogel kaolin composites for heavy metals removal. This type of composite offers desirable characteristics and functions, including mechanical strength, bed porosity, and cost advantages. The paper emphasizes the effect of varying crosslinker (methylenebis(acrylamide)) concentration. The prepared cation exchanger was subjected to intensive characterization using X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), X-ray fluorescence (XRF), and the Brunauer-Emmett-Teller (BET) method. Moreover, performance was investigated using synthetic wastewater and real wastewater from an industrial complex east of Cairo. The simulated and real wastewater compositions addressed Cr, Co, Ni, and Pb in the ranges of 92-115, 91-103, 86-88, and 99-125, respectively. Adsorption experiments were conducted in both batch and column modes. In general, batch tests revealed enhanced cation exchange capacities of 70, 72, 78.2, and 99.9 mg/g for single synthetic wastes, while removal efficiencies of 82.2%, 86.4%, 44.4%, and 96% were obtained for Cr, Co, Ni, and Pb, respectively, from mixed synthetic wastes. It is concluded that the mixed synthetic and real wastewaters yield lower adsorption capacities than single solutions. It is worth mentioning that Pb attained higher adsorption capacities, with comparable results at all tested concentrations of synthetic and real wastewaters. 
Pilot-scale experiments were also conducted for mixed synthetic waste in a fluidized bed column over a 48-hour cycle time, which revealed removal efficiencies of 86.4%, 58.5%, 66.8%, and 96.9% for Cr, Co, Ni, and Pb, respectively. Regeneration was also conducted using saline and acid regenerants. Maximum regeneration efficiencies for the column studies were higher than those of the batch studies by about 30% to 60%. Studies are currently under way to enhance the regeneration efficiency and enable successful scaling up of the adsorption column.
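The batch cation exchange capacities quoted above (mg of metal per g of composite) follow the usual mass-balance definition q = (C0 − Ce) · V / m. A minimal Python illustration; the batch volume, adsorbent mass, and residual concentration are hypothetical, chosen only so the numbers line up with the reported Pb capacity, since the abstract does not state these conditions.

```python
def adsorption_capacity(c0_mg_l, ce_mg_l, volume_l, mass_g):
    """Batch uptake q (mg/g): metal removed from solution per gram of adsorbent."""
    return (c0_mg_l - ce_mg_l) * volume_l / mass_g

def removal_efficiency(c0_mg_l, ce_mg_l):
    """Percent of the initial metal concentration removed."""
    return (c0_mg_l - ce_mg_l) / c0_mg_l * 100

# Hypothetical single-metal Pb batch: 100 mg/L initial, 0.1 mg/L residual,
# 1 L of solution, 1 g of composite -- numbers for illustration only.
q = adsorption_capacity(100, 0.1, 1.0, 1.0)
print(f"q = {q:.1f} mg/g, removal = {removal_efficiency(100, 0.1):.1f}%")
```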

Keywords: polyacrylate hydrogel kaolin, ultrasonic irradiation, heavy metals, adsorption and regeneration

Procedia PDF Downloads 106
342 Specification Requirements for a Combined Dehumidifier/Cooling Panel: A Global Scale Analysis

Authors: Damien Gondre, Hatem Ben Maad, Abdelkrim Trabelsi, Frédéric Kuznik, Joseph Virgone

Abstract:

The use of a radiant cooling solution would lower cooling needs, which is of great interest when the demand is initially high (hot climates). However, radiant systems are not naturally compatible with humid climates, since a low-temperature surface leads to condensation risks as soon as the surface temperature is close to or lower than the dew point temperature. A radiant cooling system combined with a dehumidification system would remove humidity from the space, thereby lowering the dew point temperature. The humidity removal needs to be especially effective near the cooled surface. This requirement could be fulfilled by a system using a single desiccant fluid for the removal of both excessive heat and moisture. This work aims to provide an estimation of the specification requirements of such a system in terms of the cooling power and dehumidification rate required to fulfill comfort requirements and to prevent any condensation risk on the cool panel surface. The present paper develops a preliminary study of the specification requirements, performance, and behavior of a combined dehumidifier/cooling ceiling panel under different operating conditions. The study was carried out using the TRNSYS software, which allows nodal calculations of thermal systems. It consists of dynamic modeling of the heat and vapor balances of a 5 m x 3 m x 2.7 m office space. In a first design estimation, this room is equipped with an ideal heating, cooling, humidification, and dehumidification system so that the room temperature is always maintained between 21°C and 25°C with a relative humidity between 40% and 60%. The room is also equipped with a ventilation system that includes a heat recovery heat exchanger and another heat exchanger connected to a heat sink. Main results show that the system should be designed to meet a cooling power of 42 W.m−2 and a desiccant rate of 45 gH2O.h−1. 
A parametric study of comfort and system performance was then carried out on a more realistic system (including a chilled ceiling) under different operating conditions, enabling the estimation of an acceptable range of operating conditions. This preliminary study is intended to provide useful information for system design.
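The condensation constraint that drives this design (the panel surface must stay above the local dew point) can be checked with the standard Magnus approximation for dew point temperature. A minimal Python sketch over the study's comfort envelope (21-25°C, 40-60% RH); the safety `margin` parameter is an assumption for illustration, not a value from the paper.

```python
import math

def dew_point(temp_c, rh_percent, a=17.62, b=243.12):
    """Dew point (deg C) via the Magnus approximation."""
    gamma = a * temp_c / (b + temp_c) + math.log(rh_percent / 100.0)
    return b * gamma / (a - gamma)

def condensation_risk(surface_temp_c, room_temp_c, rh_percent, margin=1.0):
    """True if the panel surface is within `margin` deg C of the dew point."""
    return surface_temp_c <= dew_point(room_temp_c, rh_percent) + margin

# Worst case within the comfort envelope: 25 deg C at 60% RH.
print(f"dew point: {dew_point(25, 60):.1f} C")
print("risk at 17 C surface:", condensation_risk(17.0, 25, 60))
```

At 25°C and 60% RH the dew point is around 16.7°C, so a chilled panel held near 17°C already sits inside the risk margin; this is the coupling between dehumidification rate and achievable panel temperature that the sizing study explores.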

Keywords: dehumidification, nodal calculation, radiant cooling panel, system sizing

Procedia PDF Downloads 157