Search results for: 7A6 series manipulator
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2703

333 In situ Stabilization of Arsenic in Soils with Birnessite and Goethite

Authors: Saeed Bagherifam, Trevor Brown, Chris Fellows, Ravi Naidu

Abstract:

Over the last century, rapid urbanization, industrial emissions, and mining activities have resulted in widespread contamination of the environment by heavy metal(loid)s. Arsenic (As) is a toxic metalloid belonging to group 15 of the periodic table, which occurs naturally at low concentrations in soils and the earth’s crust, although concentrations can be significantly elevated in natural systems as a result of dispersion from anthropogenic sources, e.g., mining activities. Bioavailability is the fraction of a contaminant in soils that is available for uptake by plants, food chains, and humans and therefore presents the greatest risk to terrestrial ecosystems. Numerous attempts have been made to establish in situ and ex situ technologies for the remediation of arsenic-contaminated soils. In situ stabilization techniques are based on deactivation or chemical immobilization of metalloid(s) in soil by means of soil amendments, which consequently reduce the bioavailability (for biota) and bioaccessibility (for humans) of metalloids through the formation of low-solubility products or precipitates. This study investigated the effectiveness of two different synthetic manganese and iron oxides (birnessite and goethite) for stabilization of As in a soil spiked with 1000 mg kg⁻¹ of As and treated with 10% dosages of the soil amendments. Birnessite was synthesized using HCl and KMnO₄, and goethite was synthesized by the dropwise addition of KOH into Fe(NO₃)₃ solution. The resulting contaminated soils were subjected to a series of chemical extraction studies, including sequential extraction (BCR method), single-step extractions with distilled (DI) water and 2M HNO₃, and the simplified bioaccessibility extraction test (SBET) for estimation of bioaccessible fractions of As in two different soil fractions (< 250 µm and < 2 mm). Concentrations of As in samples were measured using inductively coupled plasma mass spectrometry (ICP-MS). The results showed that birnessite amendment reduced the bioaccessibility of As by up to 92% in both soil fractions. Furthermore, the results of the single-step extractions revealed that the application of birnessite and goethite reduced the DI water- and HNO₃-extractable amounts of arsenic by 75, 75, 91, and 57%, respectively. Moreover, the results of the sequential extraction studies showed that both birnessite and goethite dramatically reduced the exchangeable fraction of As in soils, whereas the recalcitrant fractions were higher in the birnessite- and goethite-amended soils. The results revealed that the application of both birnessite and goethite significantly reduced the bioavailability and the exchangeable fraction of As in contaminated soils, and therefore birnessite and goethite amendments might be considered promising adsorbents for the stabilization and remediation of As-contaminated soils.

Keywords: arsenic, bioavailability, in situ stabilisation, metalloid(s) contaminated soils

Procedia PDF Downloads 135
332 Theoretical Study of Gas Adsorption in Zirconium Clusters

Authors: Rasha Al-Saedi, Anthony Meijer

Abstract:

The development of new porous materials has accelerated rapidly over the past decade for applications such as catalysis, gas storage, and removal of environmentally unfriendly species, owing to their high surface area and high thermal stability. In this work, zirconium-based metal-organic frameworks (MOFs) were examined theoretically in order to determine their potential for adsorption of various guest molecules: CO2, N2, CH4, and H2. The zirconium cluster consists of an inner Zr6O4(OH)4 core in which the triangular faces of the Zr6 octahedron are alternately capped by O and OH groups, bound to nine formate groups and three benzoate linkers. The general formula is [Zr6(μ-O)4(μ-OH)4(HCOO)9((phyO2C)3X)], where X = CH2OH, CH2NH2, CH2CONH2, or (NH2)n (n = 1-3). Three types of defective adsorption sites on the Zr metal centre have been studied, named according to the capping groups involved: the ‘−O site’, in which the H of a (μ-OH) site is removed and added to a (μ-O) site; the ‘−OH site’, in which a (μ-OH) group is removed; and the ‘void site’, in which an H2O molecule is removed, i.e., a (μ-OH) from one site and an H from another (μ-OH) site; no-defect versions were studied in addition. A series of investigations was performed to address this issue. First, the density functional theory (DFT) B3LYP method with the 6-311G(d,p) basis set was employed in the Gaussian 09 package in order to evaluate the gas adsorption performance of missing-linker defects in the zirconium cluster. Next, the gas adsorption behaviour was studied on differently functionalised zirconium clusters; the functional groups mentioned above include amines, an alcohol, and an amide, compared with the non-substituted clusters. Then, dispersion-corrected density functional theory (DFT-D) calculations were performed to further understand the enhanced gas binding on the zirconium clusters. Finally, the effect of water on CO2 and N2 adsorption was studied. The small functionalized Zr clusters were found to show good CO2 adsorption over N2, CH4, and H2, owing to the quadrupole moment of CO2, whereas N2, CH4, and H2 are weakly polar or non-polar. Binding energies improved with the dispersion correction, since interactions such as van der Waals forces are missing from the conventional DFT treatment. The calculated gas binding strengths on the no-defect site are higher than those on the −O site, the −OH site, and the void site; this difference is especially notable for CO2. The enhanced affinity of the no-defect versions for CO2 is most likely due to electrostatic interactions between the negatively charged O of CO2 and the positively charged H of the (μ-OH) metal site. Gas uptake is not enhanced in the presence of water, as water binds to the Zr clusters more strongly than the gas species, which is attributed to competition for adsorption sites.
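For readers who want to see how such binding strengths are typically evaluated, the short sketch below applies the standard supermolecular formula E_bind = E(cluster + gas) - E(cluster) - E(gas) to placeholder single-point energies; the numerical values, names, and unit conversion are illustrative assumptions, not results from this study.

# Hedged sketch: adsorption (binding) energy from three separately computed
# total energies, as is standard in DFT adsorption studies.
HARTREE_TO_KJ_PER_MOL = 2625.4996  # conversion factor

def binding_energy(e_complex, e_cluster, e_gas):
    """E_bind = E(cluster + gas) - E(cluster) - E(gas); negative means favourable binding."""
    return e_complex - e_cluster - e_gas

# Placeholder total energies in hartree (e.g., taken from quantum-chemistry output files)
e_cluster = -1234.567890   # bare cluster model (hypothetical value)
e_co2     = -188.590000    # isolated CO2 (hypothetical value)
e_complex = -1423.170000   # CO2 adsorbed on the cluster (hypothetical value)

de = binding_energy(e_complex, e_cluster, e_co2)
print(f"Binding energy: {de:.6f} hartree = {de * HARTREE_TO_KJ_PER_MOL:.1f} kJ/mol")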

Keywords: density functional theory, gas adsorption, metal-organic frameworks, molecular simulation, porous materials, theoretical chemistry

Procedia PDF Downloads 184
331 Pyridine-N-oxide Based AIE-active Triazoles: Synthesis, Morphology and Photophysical Properties

Authors: Luminita Marin, Dalila Belei, Carmen Dumea

Abstract:

Aggregation-induced emission (AIE) is an intriguing optical phenomenon recently evidenced by Tang and his co-workers, in which aggregation works constructively to improve light emission. The challenging AIE phenomenon is quite opposite to the notorious aggregation-caused quenching (ACQ) of light emission in the condensed phase and comes in line with the requirements of photonic and optoelectronic devices, which need solid-state emissive substrates. This paper reports a series of ten new AIE-active low-molecular-weight compounds based on triazole and pyridine-N-oxide heterocyclic units bonded by short flexible chains, obtained by a 'click' chemistry reaction. The compounds present extremely weak luminescence in solution but strong light emission in the solid state. To distinguish the influence of the degree of crystallinity on the emission efficiency, the photophysical properties were explored by UV-vis and photoluminescence spectroscopy in solution, water suspension, and amorphous and crystalline films. In parallel, the morphology of the compounds in these states was monitored by dynamic light scattering, scanning electron microscopy, atomic force microscopy, and polarized light microscopy. To further understand the relationship between structural design and photophysical properties, single-crystal X-ray diffraction was also performed on some of the studied compounds. The UV-vis absorption spectra of the triazole water suspensions indicated behaviour typical of nanoparticle formation, while the photoluminescence spectra revealed an emission intensity enhancement of up to 921-fold for the crystalline films compared to solutions, clearly indicating AIE behaviour. The compounds tend to aggregate, forming nano- and micro-crystals in the shape of rose-like assemblies and fibres. The integrity of the crystals is maintained by strong lateral intermolecular forces, while the absence of face-to-face interactions explains the enhanced luminescence in the crystalline state, in which intramolecular rotations are restricted. The studied flexible triazoles draw attention to a new structural design in which small, biologically friendly luminophore units are linked together by short flexible chains. This design extends the variety of AIE luminogens to flexible molecules, guiding further efforts in the development of new AIE structures for appropriate applications, with biological applications especially envisaged.

Keywords: aggregation induced emission, pyridine-N-oxide, triazole

Procedia PDF Downloads 467
330 Determinants of Budget Performance in an Oil-Based Economy

Authors: Adeola Adenikinju, Olusanya E. Olubusoye, Lateef O. Akinpelu, Dilinna L. Nwobi

Abstract:

Since the enactment of the Fiscal Responsibility Act (2007), the Federal Government of Nigeria (FGN) has made public its fiscal budget and the subsequent implementation reports. A critical review of these documents shows significant variations in the five macroeconomic variables that are inputs to each Presidential budget: oil production target (mbpd), oil price ($), foreign exchange rate (N/$), Gross Domestic Product growth rate (%), and inflation rate (%). This results in underperformance of the Federal budget relative to its expected output in terms of oil and non-oil revenue aggregates. This paper first evaluates the existing variance between budgeted and actual figures, then the relationship and causality between the determinants of the Federal fiscal budget assumptions, and finally the determinants of the FGN’s gross oil revenue. The paper employed descriptive statistics, an autoregressive distributed lag (ARDL) model, and a profit-oil probabilistic model to achieve these objectives. The ARDL model captures both the static and dynamic effects of the independent variables on the dependent variable, unlike a static model, which accounts for static or fixed effects only. It offers a technique for checking the existence of a long-run relationship between variables, unlike other tests of cointegration, such as the Engle-Granger and Johansen tests, which consider only non-stationary series that are integrated of the same order. Finally, even with a small sample size, the ARDL model is known to generate valid results. The results showed that there is a long-run relationship between oil revenue, as a proxy for budget performance, and its determinants: oil price, produced oil quantity, and foreign exchange rate. There is also a short-run relationship between oil revenue and the same determinants. There is a long-run relationship between non-oil revenue and its determinants: inflation rate, GDP growth rate, and foreign exchange rate. The Granger causality test results show that there is a unidirectional causality between oil revenue and its determinants. The Federal budget assumptions only explain 68% of oil revenue and 62% of non-oil revenue. There is likewise a unidirectional causality between non-oil revenue and its determinants. The profit-oil model identifies production sharing contracts, joint ventures, and modified carrying arrangements as the greatest contributors to the FGN’s gross oil revenue. This provides empirical justification for the selected macroeconomic variables used in Federal budget design and performance evaluation. The research recommends that other variables, such as debt and money supply, be included in the Federal budget design to further explain Federal budget revenue performance.
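As an illustration of the estimation step described above, the sketch below fits an ARDL model of oil revenue on oil price, produced quantity, and the exchange rate using the statsmodels package; the variable names, lag orders, and synthetic data are assumptions for demonstration only and are not the study's series or specification.

# Hedged sketch: ARDL regression of oil revenue on its assumed determinants.
import numpy as np
import pandas as pd
from statsmodels.tsa.ardl import ARDL

rng = np.random.default_rng(0)
n = 60  # e.g., 60 periods of synthetic data
exog = pd.DataFrame({
    "oil_price": 60 + rng.normal(0, 10, n).cumsum() * 0.1,      # $/bbl (placeholder)
    "oil_quantity": 2.0 + rng.normal(0, 0.05, n),                # mbpd (placeholder)
    "fx_rate": 300 + rng.normal(0, 5, n).cumsum() * 0.1,         # N/$ (placeholder)
})
oil_revenue = (
    5 + 0.04 * exog["oil_price"] + 1.5 * exog["oil_quantity"]
    + 0.002 * exog["fx_rate"] + rng.normal(0, 0.2, n)
)

# One lag of the dependent variable and lags 0-1 of each regressor,
# capturing both static and dynamic effects.
model = ARDL(oil_revenue, lags=1, exog=exog, order=1, trend="c")
result = model.fit()
print(result.summary())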

Keywords: ARDL, budget performance, oil price, oil quantity, oil revenue

Procedia PDF Downloads 172
329 A Brief Review on the Relationship between Pain and Sociology

Authors: Hanieh Sakha, Nader Nader, Haleh Farzin

Abstract:

Introduction: Throughout history, theories of pain have been proposed largely by biomedicine, especially regarding its diagnosis and treatment. Yet the feeling of pain is not only a personal experience; it is affected by social background and involves extensive systems of signals. The challenges in addressing the emotional dimensions of pain originate from scientific medicine (i.e., the dominant theory, also referred to as the specificity theory); this theory, however, underwent some alterations with the emergence of physiology. Von Frey then suggested the theory of cutaneous senses 50 years after the specificity theory (building on Muller’s concept: the combined sensation of four major skin receptor types leading to a distinct sensation). The pain pathway was held to be composed of the spinothalamic tracts and the thalamus, with an inhibitory effect on the cortex. Pain is referred to as a series of unique experiences with various causes and qualities. Despite the gate control theory, the biological aspect still overshadows the social aspect. Vrancken provided a more extensive definition of pain and identified five approaches: the somatico-technical, dualistic body-oriented, behaviourist, phenomenological, and consciousness approaches. Kotarba, on the other hand, remained uncertain about the basic origins of chronic pain. Freund, drawing on the Durkheimian tradition, argued for a sociological approach to the emotions. Lynch provided evidence of the correlation between cardiovascular disease and emotionally threatening life events. Helman proposed a distinction between private and public pain. Conclusion: Consideration of the emotional aspect of pain could lead to effective emotional and social responses to pain. By contrast, the theory of embodiment is based on the sociological view of health and illness. Social epidemiology shows an unequal distribution of health, illness, and disability among various social groups. Social support and socio-cultural position can shape several types of pain; for example, the status of athletes might define their pain experiences. Gender is another important contributing factor affecting the experience of pain (e.g., females are more likely to seek health services for pain relief). Chronic non-cancer pain (CNCP) has become a serious public health issue affecting more than 70 million people globally, a problem compounded by the lack of awareness of chronic pain management among the general population.

Keywords: pain, sociology, sociological, body

Procedia PDF Downloads 70
328 The Development of an Anaesthetic Crisis Manual for Acute Critical Events: A Pilot Study

Authors: Jacklyn Yek, Clara Tong, Shin Yuet Chong, Yee Yian Ong

Abstract:

Background: While emergency manuals and cognitive aids (CA) have been used in high-hazard industries for decades, this has been a nascent field in healthcare. CAs can potentially offset the large cognitive load involved in crisis resource management and facilitate the efficient performance of key steps in treatment. A crisis manual was developed based on local guidelines and the latest evidence-based information and introduced to a tertiary hospital setting in Singapore. The objective of this study is to evaluate the effectiveness of the crisis manual in guiding the response to and management of critical events. Methods: 7 surgical teams were recruited to participate in a series of simulated emergencies in a high-fidelity operating room simulator over the period of April to June 2018. Each team consisted of a surgical consultant and medical officer/registrar, an anesthesia consultant and medical officer/registrar, as well as a circulating, scrub, and anesthetic nurse. Each team performed a simulated operation in which 1 or more of the crisis events occurred. The teams were randomly assigned to a scenario covered by the crisis manual, and all teams were deemed to be equal in experience and knowledge. Before the simulation, teams were instructed on proper checklist use, but use of the checklist was optional. Results: 7 simulation sessions were performed, consisting of the following scenarios: Airway Fire, Massive Transfusion Protocol, Malignant Hyperthermia, Eclampsia, and Difficult Airway. Of the 7 surgical teams, 2 made use of the crisis manual, both of which had encountered the 'Malignant Hyperthermia' scenario. These team members reflected that the crisis manual helped them work as a team, in particular by enabling them to involve the surgical doctors, who were unfamiliar with the condition and its management. A run chart showed a possible upward trend, suggesting that with increasing awareness and training, staff would become more likely to initiate use of the crisis manual. Conclusion: Despite the high volume load in this tertiary hospital, certain crises remain rare, and clinicians are often caught unprepared. A crisis manual is an effective tool and easy-to-use repository that can improve patient outcomes and encourage teamwork. With training, familiarity would allow clinicians to become increasingly comfortable with reaching for the crisis manual. More simulation training needs to be conducted to determine its effectiveness.

Keywords: crisis resource management, high fidelity simulation training, medical errors, visual aids

Procedia PDF Downloads 127
327 Trends in All-Cause Mortality and Inpatient and Outpatient Visits for Ambulatory Care Sensitive Conditions during the First Year of the COVID-19 Pandemic: A Population-Based Study

Authors: Tetyana Kendzerska, David T. Zhu, Michael Pugliese, Douglas Manuel, Mohsen Sadatsafavi, Marcus Povitz, Therese A. Stukel, Teresa To, Shawn D. Aaron, Sunita Mulpuru, Melanie Chin, Claire E. Kendall, Kednapa Thavorn, Rebecca Robillard, Andrea S. Gershon

Abstract:

The impact of the COVID-19 pandemic on the management of ambulatory care sensitive conditions (ACSCs) remains unknown. The objective was to compare observed and expected (projected based on previous years) trends in all-cause mortality and healthcare use for ACSCs in the first year of the pandemic (March 2020 - March 2021). This was a population-based study using provincial health administrative data on the general adult population of Ontario, Canada. Monthly all-cause mortality and hospitalization, emergency department (ED), and outpatient visit rates (per 100,000 people at risk) for seven combined ACSCs (asthma, COPD, angina, congestive heart failure, hypertension, diabetes, and epilepsy) during the first pandemic year were compared with similar periods in previous years (2016-2019) by fitting monthly time-series autoregressive integrated moving-average (ARIMA) models. Compared to previous years, all-cause mortality rates increased at the beginning of the pandemic (observed rate in March-May 2020 of 79.98 vs. projected 71.24 [66.35-76.50]) and then returned to expected levels in June 2020, except among immigrants and people with mental health conditions, where they remained elevated. Hospitalization and ED visit rates for ACSCs remained lower than projected throughout the first year: observed hospitalization rate of 37.29 vs. projected 52.07 (47.84-56.68); observed ED visit rate of 92.55 vs. projected 134.72 (124.89-145.33). ACSC outpatient visit rates decreased initially (observed rate of 4,299.57 vs. projected 5,060.23 [4,712.64-5,433.46]) and then returned to expected levels in June 2020. Reductions in outpatient visits for ACSCs at the beginning of the pandemic, combined with reduced hospital admissions, may have been associated with temporarily increased mortality, disproportionately experienced by immigrants and those with mental health conditions. Funding: The Ottawa Hospital Academic Medical Organization.
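To make the projection step concrete, the sketch below fits a seasonal ARIMA model to pre-pandemic monthly rates and forecasts the "expected" pandemic-period values with a confidence interval, in the spirit of the comparison above; the synthetic series, model orders, and date ranges are illustrative assumptions rather than the study's data or specification.

# Hedged sketch: observed vs. ARIMA-projected monthly rates around March 2020.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
idx = pd.date_range("2016-01-01", "2021-03-01", freq="MS")
rate = 130 + 10 * np.sin(2 * np.pi * idx.month / 12) + rng.normal(0, 3, len(idx))
series = pd.Series(rate, index=idx)          # synthetic monthly visit rate per 100,000

train = series[:"2020-02"]                   # pre-pandemic months used for projection
observed = series["2020-03":]                # pandemic months to compare against

model = ARIMA(train, order=(1, 0, 1), seasonal_order=(1, 0, 0, 12))
fit = model.fit()
forecast = fit.get_forecast(steps=len(observed))
expected = forecast.predicted_mean
interval = forecast.conf_int()               # 95% interval, analogous to the ranges reported

comparison = pd.DataFrame(
    {"observed": observed.values, "expected": expected.values}, index=observed.index
)
print(comparison.head())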

Keywords: COVID-19, chronic disease, all-cause mortality, hospitalizations, emergency department visits, outpatient visits, modelling, population-based study, asthma, COPD, angina, heart failure, hypertension, diabetes, epilepsy

Procedia PDF Downloads 92
326 AI-Enabled Smart Contracts for Reliable Traceability in the Industry 4.0

Authors: Harris Niavis, Dimitra Politaki

Abstract:

The manufacturing industry has been collecting vast amounts of data for monitoring product quality, thanks to advances in the ICT sector, and dedicated IoT infrastructure is deployed to track and trace the production line. However, industries have not yet managed to unleash the full potential of these data due to defective data collection methods and untrusted data storage and sharing. Blockchain is gaining increasing ground as a key technology enabler for Industry 4.0 and the smart manufacturing domain, as it enables the secure storage and exchange of data between stakeholders. On the other hand, AI techniques are increasingly used to detect anomalies in batch and time-series data, enabling the identification of unusual behaviors. The proposed scheme is based on smart contracts to enable automation and transparency in the data exchange, coupled with anomaly detection algorithms to enable reliable data ingestion into the system. Before sensor measurements are fed to the blockchain component and the smart contracts, the anomaly detection mechanism combines artificial intelligence models to effectively detect unusual values such as outliers and extreme deviations in the incoming data. Specifically, autoregressive integrated moving average (ARIMA) models, long short-term memory (LSTM) and dense autoencoders, as well as generative adversarial network (GAN) models, are used to detect both point and collective anomalies. Towards the goal of preserving the privacy of industries' information, the smart contracts employ techniques to ensure that only anonymized pointers to the actual data are stored on the ledger, while sensitive information remains off-chain. In the same spirit, blockchain technology guarantees the security of the data storage through strong cryptography, as well as the integrity of the data through the decentralization of the network and the execution of the smart contracts by the majority of the blockchain network actors. The blockchain component of the Data Traceability Software is based on the Hyperledger Fabric framework, which lays the ground for the deployment of smart contracts and APIs to expose the functionality to end-users. The results of this work demonstrate that such a system can increase the quality of the end-products and the trustworthiness of the monitoring process in the smart manufacturing domain. The proposed AI-enabled data traceability software can be employed by industries to accurately trace and verify records about quality through the entire production chain and to take advantage of the multitude of monitoring records in their databases.
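As a small illustration of the ingestion-time screening described above, the sketch below trains an LSTM autoencoder on "normal" sensor windows and flags windows with a large reconstruction error before they would be anchored on the ledger; the architecture, window length, threshold, and synthetic data are assumptions for demonstration, not the system's actual models.

# Hedged sketch: LSTM-autoencoder anomaly screening of sensor windows.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

window, n_features = 30, 1
model = models.Sequential([
    layers.Input(shape=(window, n_features)),
    layers.LSTM(32, return_sequences=False),      # encoder
    layers.RepeatVector(window),
    layers.LSTM(32, return_sequences=True),       # decoder
    layers.TimeDistributed(layers.Dense(n_features)),
])
model.compile(optimizer="adam", loss="mse")

rng = np.random.default_rng(2)
x_train = rng.normal(0, 1, size=(500, window, n_features)).astype("float32")  # "normal" windows
model.fit(x_train, x_train, epochs=3, batch_size=32, verbose=0)

# Flag windows whose reconstruction error exceeds a simple threshold; only
# non-anomalous readings would then be referenced (as anonymized pointers) on-chain.
x_new = rng.normal(0, 1, size=(10, window, n_features)).astype("float32")
errors = np.mean((model.predict(x_new, verbose=0) - x_new) ** 2, axis=(1, 2))
threshold = errors.mean() + 3 * errors.std()
print("anomalous windows:", np.where(errors > threshold)[0])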

Keywords: blockchain, data quality, industry 4.0, product quality

Procedia PDF Downloads 189
325 Poisoning in Morocco: Evolution and Risk Factors

Authors: El Khaddam Safaa, Soulaymani Abdelmajid, Mokhtari Abdelghani, Ouammi Lahcen, Rachida Soulaymani-Beincheikh

Abstract:

Poisonings represent a health problem worldwide and in Morocco, and the exact dimensions of this phenomenon are still poorly recorded, as reflected in the lack of exhaustive statistical data. The objective of this retrospective study of a series of poisoning cases, reported in the Tadla-Azilal region and collected by the Moroccan Poison Control and Pharmacovigilance Centre, was to establish an epidemiological profile of the poisonings, to determine the risk factors influencing the vital prognosis of the poisoned, and to follow the evolution of incidence, lethality, and mortality. During the study period, we collected and analyzed 9303 cases of poisoning by the different incriminated toxic products, with the exception of scorpion stings. These poisonings led to 99 deaths. The epidemiological profile showed that the poisoned were of all ages, with an average of 24.62±16.61 years. The sex ratio (women/men) was 1.36, in favour of women; the difference between the sexes is highly significant (χ2 = 210.5; p < 0.001). Most of the poisoned were of urban origin (60.5%) (χ2 = 210.5; p < 0.001). Carbon monoxide was the agent most frequently incriminated in the poisoning cases (24.15%), followed by pesticides and agricultural products (21.44%) and food (19.95%). The analysis of risk factors showed that adult patients aged between 20 and 74 years had a higher risk of progressing to death (RR = 1.57; 95% CI = 1.03-2.38) than the other age brackets, and that males were more exposed to death than females (RR = 1.59; 95% CI = 1.07-2.38). Patients of rural origin presented about five times the risk (RR = 4.713; 95% CI = 2.543-8.742). Those poisoned by mineral products presented the maximum risk of death (RR = 23.19; 95% CI = 2.39-224.1), and poisonings by pesticides carried a risk of about 9 (RR = 9.31; 95% CI = 6.10-14.18). The incidence was 3.3 cases per 10,000 inhabitants, and the mortality was 0.004 cases per 1,000 inhabitants (that is, 4 cases per 1,000,000 inhabitants). The annually registered lethality rate was 10.6%. The evolution of the health indicators over the years showed that the reporting rate, as measured by the incidence, increased significantly. We also noted an improvement in case management, which resulted in a decrease in the lethality and mortality rates in recent years. The fight against poisoning is a long-term undertaking that requires a great deal of work at various levels. It is necessary to address the delays accumulated by our country in the various legal, institutional, and technical aspects. The ideal solution is to develop and set up a national strategy.
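For readers unfamiliar with how such relative risks and confidence intervals are obtained, the sketch below computes a relative risk (RR) and its 95% CI from a 2x2 table using the standard log-normal approximation; the counts are illustrative placeholders, not the registry data analysed in this study.

# Hedged sketch: relative risk with a 95% confidence interval (log method).
import math

def relative_risk(exposed_events, exposed_total, unexposed_events, unexposed_total, z=1.96):
    risk_exposed = exposed_events / exposed_total
    risk_unexposed = unexposed_events / unexposed_total
    rr = risk_exposed / risk_unexposed
    # Standard error of ln(RR) for the log-normal approximation
    se_log_rr = math.sqrt(
        1 / exposed_events - 1 / exposed_total
        + 1 / unexposed_events - 1 / unexposed_total
    )
    low = math.exp(math.log(rr) - z * se_log_rr)
    high = math.exp(math.log(rr) + z * se_log_rr)
    return rr, (low, high)

# Example: deaths among rural vs. urban poisoned patients (placeholder counts only)
rr, ci = relative_risk(60, 3700, 39, 5600)
print(f"RR = {rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")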

Keywords: epidemiology, poisoning, risk factors, indicators of health, Tadla-Azilal, anti-toxic fight

Procedia PDF Downloads 365
324 Olive Stone Valorization to Its Application on the Ceramic Industry

Authors: M. Martín-Morales, D. Eliche-Quesada, L. Pérez-Villarejo, M. Zamorano

Abstract:

Olive oil is a product of particular importance within the Mediterranean and Spanish agricultural food system, and more specifically in Andalusia, which is the world's main production area. Olive oil processing generates olive stones, which are dried and cleaned to remove pulp and fines in order to produce a biofuel characterized by high energy efficiency in combustion processes. The fine fraction of olive stones is not much appreciated as a biofuel, so it is important to study alternative routes for its valorization. Some researchers have studied the recycling of different wastes to produce ceramic bricks. The main objective of this study is to investigate the effects of olive stone addition on the properties of fired clay bricks for building construction. Olive stones were substituted by volume (7.5%, 15%, and 25%) for brick raw material in three different size fractions (lower than 1 mm, lower than 2 mm, and between 1 and 2 mm). In order to obtain comparable results, a series without olive stones was also prepared. The prepared mixtures were compacted by laboratory-type extrusion under a pressure of 2.5 MPa into rectangular shapes (30 mm x 60 mm x 10 mm). Industrial drying and firing conditions were applied to obtain laboratory brick samples. Mass loss after sintering, bulk density, porosity, water absorption, and compressive strength of the fired samples were investigated and compared with a sample manufactured without biomass. The results obtained show that olive stone addition decreased the mechanical properties, owing to the increase in water absorption, although the tested values satisfied the requirements of EN 772-1 on methods of test for masonry units (Part 1: Determination of compressive strength). Finally, important advantages related to the properties of the bricks, as well as environmental benefits, could be obtained with the use of the studied biomass to produce ceramic bricks. Increasing the percentage of incorporated olive stones decreased the bulk density and thus increased the porosity of the bricks. On the one hand, this lower density implies a weight reduction for bricks to be transported and handled, as well as a lightening of the building; on the other hand, biomass in the clay contributes to autothermal combustion, which involves lower fuel consumption during the firing step. Consequently, the production of porous clay bricks using olive stones could reduce atmospheric emissions and improve their life cycle assessment, producing eco-friendly clay bricks.

Keywords: clay bricks, olive stones, sustainability, valorization

Procedia PDF Downloads 152
323 Cai Guo-Qiang: A Chinese Artist at the Cutting-Edge of Global Art

Authors: Marta Blavia

Abstract:

Magiciens de la terre, organized in 1989 by the Centre Pompidou, became 'the first worldwide exhibition of contemporary art' by presenting artists from Western and non-Western countries, including three Chinese artists. For the first time, the West turned its eyes to other countries not as exotic sources of inspiration, but as places where contemporary art was also being created. One year later, Chine: demain pour hier was inaugurated as the first Chinese avant-garde group exhibition in the West. Among the artists included was Cai Guo-Qiang who, like many other Chinese artists, had left his home country in the eighties in pursuit of greater creative freedom. By exploring non-Western artistic perspectives, both landmark exhibitions questioned the predominance of the Eurocentric vision in the construction of art history. But more than anything else, these exhibitions laid the groundwork for the rise of the so-called 'global contemporary art' phenomenon. At the same time, 1989 was also a turning point in Chinese art history. Because of the Tiananmen student protests, the Chinese government undertook a series of measures to cut down any kind of avant-garde artistic activity after a decade of relative openness. During the eighties, and especially after the Tiananmen crackdown, some important artists began to leave China and move overseas, such as Xu Bing and Ai Weiwei (USA), Chen Zhen and Huang Yong Ping (France), and Cai Guo-Qiang (Japan). After emigrating abroad, overseas Chinese artists began to develop projects in accordance with their new environments and audiences, as well as to appear in numerous international exhibitions. With their creations, which moved freely between a variety of Eastern and Western art sources, these artists were crucial agents in the emergence of global contemporary art. Like those of other overseas Chinese artists, Cai Guo-Qiang's career took off during the 1990s and early 2000s, right at the moment when the Western art world started to look beyond itself. Little by little, he developed a very personal artistic language that redefines Chinese ideas, symbols, and traditional materials in a new world order marked by globalization. Cai Guo-Qiang participated in many of the exhibitions that contributed to shaping global contemporary art: Encountering the Others (1992); the 45th Venice Biennale (1993); Inside Out: New Chinese Art (1997); and the 48th Venice Biennale (1999), where he recreated the monumental Chinese social realist work Rent Collection Courtyard, which earned him the Golden Lion Award. By examining the different stages of Cai Guo-Qiang's artistic path as well as the transnational dimensions of his creations, this paper aims to offer a comprehensive survey of the construction of the discourse of global contemporary art.

Keywords: Cai Guo-Qiang, Chinese artists overseas, emergence global art, transnational art

Procedia PDF Downloads 284
322 Enhancing Athlete Training using Real Time Pose Estimation with Neural Networks

Authors: Jeh Patel, Chandrahas Paidi, Ahmed Hambaba

Abstract:

Traditional methods for analyzing athlete movement often lack the detail and immediacy required for optimal training. This project aims to address this limitation by developing a real-time human pose estimation system specifically designed to enhance athlete training across various sports. The system leverages the power of convolutional neural networks (CNNs) to provide a comprehensive and immediate analysis of an athlete's movement patterns during training sessions. The core architecture utilizes dilated convolutions to capture crucial long-range dependencies within video frames, combined with a robust encoder-decoder architecture to further refine pose estimation accuracy. This capability is essential for precise joint localization across the diverse range of athletic poses encountered in different sports. Furthermore, by quantifying movement efficiency, power output, and range of motion, the system provides data-driven insights that can be used to optimize training programs. Pose estimation data analysis can also be used to develop personalized training plans that target specific weaknesses identified in an athlete's movement patterns. To overcome the limitations posed by outdoor environments, the project employs strategies such as multi-camera configurations or depth-sensing techniques, which can enhance pose estimation accuracy in challenging lighting and occlusion scenarios. A dataset was collected from the Martin Luther King labs at San Jose State University. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios. Results indicate a high level of precision in recognizing different poses, substantiating the potential of this technology in practical applications. Challenges such as enhancing the system's ability to operate in varied environmental conditions and further expanding the dataset for training were identified and discussed. Future work will refine the model's adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced pose detection model and lays the groundwork for future innovations in assistive enhancement technologies.
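To give a concrete picture of the dilated-convolution encoder-decoder idea mentioned above, the sketch below defines a minimal heatmap-regression network in PyTorch; the channel sizes, dilation rates, number of joints, and input resolution are illustrative assumptions, not the project's actual architecture.

# Hedged sketch: a toy dilated-convolution encoder-decoder for pose heatmaps.
import torch
import torch.nn as nn

class DilatedPoseNet(nn.Module):
    def __init__(self, n_joints=17):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            # Dilated convolutions enlarge the receptive field to capture
            # long-range dependencies between joints without further downsampling.
            nn.Conv2d(128, 128, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, n_joints, 1),   # one heatmap per joint
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

net = DilatedPoseNet()
heatmaps = net(torch.randn(1, 3, 256, 256))
print(heatmaps.shape)  # torch.Size([1, 17, 256, 256])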

Keywords: computer vision, deep learning, human pose estimation, U-NET, CNN

Procedia PDF Downloads 55
321 Deep Learning for Image Correction in Sparse-View Computed Tomography

Authors: Shubham Gogri, Lucia Florescu

Abstract:

Medical diagnosis and radiotherapy treatment planning using Computed Tomography (CT) rely on the quantitative accuracy and quality of the CT images. At the same time, requirements for CT imaging include reducing the radiation dose exposure to patients and minimizing scanning time. A solution to this is the sparse-view CT technique, based on a reduced number of projection views. This, however, introduces a new problem: the incomplete projection data results in lower quality of the reconstructed images. To tackle this issue, deep learning methods have been applied to enhance the quality of the sparse-view CT images. A first approach involved employing Mir-Net, a dedicated deep neural network designed for image enhancement. This showed promise, utilizing an intricate architecture comprising encoder and decoder networks, along with the incorporation of the Charbonnier loss. However, this approach was computationally demanding. Subsequently, a specialized Generative Adversarial Network (GAN) architecture, rooted in the Pix2Pix framework, was implemented. This GAN framework involves a U-Net-based generator and a discriminator based on convolutional neural networks. To bolster the GAN's performance, both Charbonnier and Wasserstein loss functions were introduced, collectively focusing on capturing minute details while ensuring training stability. The integration of a perceptual loss, calculated from feature vectors extracted from the VGG16 network pretrained on the ImageNet dataset, further enhanced the network's ability to synthesize relevant images. A series of comprehensive experiments with clinical CT data were conducted, exploring various GAN loss functions, including the Wasserstein, Charbonnier, and perceptual losses. The outcomes demonstrated significant image quality improvements, confirmed through pertinent metrics such as the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) between the corrected images and the ground truth. Furthermore, learning curves and qualitative comparisons added evidence of the enhanced image quality and the network's increased stability, while preserving pixel value intensity. The experiments underscored the potential of deep learning frameworks in enhancing the visual interpretation of CT scans, achieving outcomes with SSIM values close to one and PSNR values reaching up to 76.
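To illustrate two of the loss terms named above, the sketch below implements a Charbonnier loss and a VGG16-feature perceptual loss in PyTorch and combines them; the layer cut-off, weighting, and tensor shapes are illustrative assumptions, not the study's actual configuration (which also included a Wasserstein adversarial term inside a Pix2Pix-style GAN).

# Hedged sketch: Charbonnier + VGG16 perceptual loss for image-to-image training.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

def charbonnier_loss(pred, target, eps=1e-3):
    # Smooth L1-like penalty: sqrt((x - y)^2 + eps^2); robust to outliers yet
    # differentiable everywhere, which helps capture fine detail without instability.
    return torch.mean(torch.sqrt((pred - target) ** 2 + eps ** 2))

class PerceptualLoss(nn.Module):
    def __init__(self, layer_index=16):
        super().__init__()
        # Frozen early VGG16 feature layers pretrained on ImageNet
        features = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:layer_index]
        for p in features.parameters():
            p.requires_grad = False
        self.features = features.eval()

    def forward(self, pred, target):
        # Grayscale CT slices would be replicated to 3 channels before this step.
        return nn.functional.l1_loss(self.features(pred), self.features(target))

pred = torch.rand(2, 3, 128, 128)     # placeholder generator output
target = torch.rand(2, 3, 128, 128)   # placeholder ground-truth reconstruction
perceptual = PerceptualLoss()
total = charbonnier_loss(pred, target) + 0.1 * perceptual(pred, target)
print(float(total))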

Keywords: generative adversarial networks, sparse view computed tomography, CT image correction, Mir-Net

Procedia PDF Downloads 161
320 Predictive Modelling of Curcuminoid Bioaccessibility as a Function of Food Formulation and Associated Properties

Authors: Kevin De Castro Cogle, Mirian Kubo, Maria Anastasiadi, Fady Mohareb, Claire Rossi

Abstract:

Background: The bioaccessibility of bioactive compounds is a critical determinant of the nutritional quality of various food products. Despite its importance, there are few comprehensive studies aimed at assessing how the composition of a food matrix influences the bioaccessibility of a compound of interest. This knowledge gap has prompted a growing need to investigate the intricate relationship between food matrix formulations and the bioaccessibility of bioactive compounds. One class of bioactive compounds that has attracted considerable attention is the curcuminoids. These naturally occurring phytochemicals, extracted from the roots of Curcuma longa, have gained popularity owing to their purported health benefits and are also well known for their poor bioaccessibility. Project aim: The primary objective of this research project is to systematically assess the influence of matrix composition on the bioaccessibility of curcuminoids. Additionally, this study aimed to develop a series of predictive models for bioaccessibility, providing valuable insights for optimising functional food formulations and offering more descriptive nutritional information to potential consumers. Methods: Food formulations enriched with curcuminoids were subjected to in vitro digestion simulation, and their bioaccessibility was characterized with chromatographic and spectrophotometric techniques. The resulting data served as the foundation for the development of predictive models capable of estimating bioaccessibility based on specific physicochemical properties of the food matrices. Results: One striking finding of this study was the strong correlation observed between the concentration of macronutrients within the food formulations and the bioaccessibility of curcuminoids. In fact, macronutrient content emerged as a very informative explanatory variable for bioaccessibility and was used, alongside other variables, as a predictor in a Bayesian hierarchical model that predicted curcuminoid bioaccessibility accurately (optimisation performance of 0.97 R2) for the majority of cross-validated test formulations (LOOCV of 0.92 R2). These preliminary results open the door to further exploration, enabling researchers to investigate a broader spectrum of food matrix types and additional properties that may influence bioaccessibility. Conclusions: This research sheds light on the intricate interplay between food matrix composition and the bioaccessibility of curcuminoids. The study lays a foundation for future investigations, offering a promising avenue for advancing our understanding of bioactive compound bioaccessibility and its implications for the food industry and informed consumer choices.
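The leave-one-out cross-validation scheme mentioned above can be illustrated with the sketch below; note that the study used a Bayesian hierarchical model, whereas this example substitutes a plain Bayesian ridge regression, and the formulation features and synthetic data are assumptions for demonstration only.

# Hedged sketch: LOOCV R^2 for a bioaccessibility regression on composition features.
import numpy as np
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
n_formulations = 30
X = rng.uniform(0, 1, size=(n_formulations, 3))   # e.g., fat, protein, carbohydrate fractions (assumed)
y = 5 + 40 * X[:, 0] + 10 * X[:, 1] + rng.normal(0, 2, n_formulations)  # % bioaccessible (synthetic)

model = BayesianRidge()
y_pred = cross_val_predict(model, X, y, cv=LeaveOneOut())  # each formulation predicted from the rest
print(f"LOOCV R^2 = {r2_score(y, y_pred):.2f}")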

Keywords: bioactive bioaccessibility, food formulation, food matrix, machine learning, probabilistic modelling

Procedia PDF Downloads 68
319 In Support of Sustainable Water Resources Development in the Lower Mekong River Basin: Development of Guidelines for Transboundary Environmental Impact Assessment

Authors: Kongmeng Ly

Abstract:

The management of transboundary river basins across developing countries, such as the Lower Mekong River Basin (LMB), is frequently challenging given the divergent development and conservation priorities of the basin countries. Driven by the need to sustain economic performance and reduce poverty, the LMB countries (Cambodia, Lao PDR, Thailand, Viet Nam) are embarking on significant land use changes in the form of hydropower dams to fulfill their energy requirements. This pathway could lead to irreversible changes to the ecosystem of the Mekong River if not properly managed. Given the uncertain trade-offs of hydropower development and operation, the LMB countries, with the technical support of the Mekong River Commission (MRC) Secretariat, embarked on the decade-long development of Technical Guidelines for Transboundary Environmental Impact Assessment. Through a series of workshops, seminars, national and regional consultations, and pilot studies, and following the recommendations generated through legal and institutional reviews undertaken over a two-decade period, the LMB countries jointly adopted the MRC Technical Guidelines for Transboundary Environmental Impact Assessment (TbEIA Guidelines). These guidelines were developed with particular regard to the experience gained from MRC-supported consultations and technical reviews of the Xayaburi Dam Project, the Don Sahong Hydropower Project, and the Pak Beng Hydropower Project, and to lessons learned from the Srepok River and Se San River case studies commissioned by the MRC with the generous support of development partners around the globe. As adopted, the TbEIA Guidelines have been designed as a mechanism supporting the national EIA legislation, processes, and systems in each Member Country. In recognition of the already agreed mechanisms, the TbEIA Guidelines build on and supplement the agreements stipulated in the 1995 Agreement on the Cooperation for the Sustainable Development of the Mekong River Basin and its Procedural Rules, in addressing potential transboundary environmental impacts of development projects and ensuring mutual benefits from the Mekong River and its resources. Since their adoption in 2022, the TbEIA Guidelines have already been voluntarily implemented by Lao PDR on its under-development Sekong A Downstream Hydropower Project, located on the Sekong River, a major tributary of the Mekong River. While this implementation is ongoing, with results expected in early 2024, it has thus far strengthened cooperation among the concerned Member Countries, with multiple successful open dialogues organized at national and regional levels. It is hoped that the lessons learnt from this application will lead to a wider application of the TbEIA Guidelines for future water resources development projects in the LMB.

Keywords: transboundary, EIA, lower mekong river basin, mekong river

Procedia PDF Downloads 37
318 Use of Satellite Altimetry and Moderate Resolution Imaging Technology of Flood Extent to Support Seasonal Outlooks of Nuisance Flood Risk along United States Coastlines and Managed Areas

Authors: Varis Ransibrahmanakul, Doug Pirhalla, Scott Sheridan, Cameron Lee

Abstract:

U.S. coastal areas and ecosystems are facing multiple sea level rise threats and effects: heavy rain events, cyclones, and changing wind and weather patterns all influence coastal flooding, sedimentation, and erosion along critical barrier islands and can strongly impact habitat resiliency and water quality in protected habitats. These impacts are increasing over time and have accelerated the need for new tracking techniques, models, and tools of flood risk to support enhanced preparedness for coastal management and mitigation. To address this issue, the NOAA National Ocean Service (NOS) evaluated new metrics from AVISO/Copernicus satellite altimetry and MODIS IR flood extents to isolate nodes of atmospheric variability indicative of elevated sea level and nuisance flood events. Using de-trended time series of cross-shelf sea surface heights (SSH), we identified specific Self-Organizing Map (SOM) nodes and transitions having the strongest regional association with oceanic spatial patterns (e.g., heightened downwelling-favorable wind stress and enhanced southward coastal transport) indicative of elevated coastal sea levels. Results show the impacts of the inverted barometer effect as well as the effects of surface wind forcing: Ekman-induced transport along broad expanses of the U.S. eastern coastline. Higher sea levels and corresponding localized flooding are associated with either pattern indicative of enhanced onshore flow, deepening cyclones, or local-scale winds, generally coupled with increased local to regional precipitation. These findings will support the integration of satellite products and will inform seasonal outlook model development supported through NOAA's Climate Program Office and the NOS Center for Operational Oceanographic Products and Services (CO-OPS). Overall, the results will prioritize ecological areas and coastal lab facilities at risk based on the number of nuisance floods projected, and will inform coastal management of flood risk around low-lying areas subject to bank erosion.
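As a rough illustration of the SOM classification step described above, the sketch below de-trends synthetic cross-shelf SSH profiles and assigns each day to a best-matching SOM node using the third-party minisom package; the grid size, transect length, and data are assumptions for demonstration, not the AVISO/Copernicus altimetry or the SOM configuration used in the study.

# Hedged sketch: SOM node assignment of de-trended SSH profiles.
import numpy as np
from scipy.signal import detrend
from minisom import MiniSom  # third-party package: pip install minisom

rng = np.random.default_rng(4)
n_days, n_points = 365, 20                      # daily profiles along an assumed cross-shelf transect
ssh = rng.normal(0, 0.05, (n_days, n_points)) + np.linspace(0, 0.1, n_days)[:, None]
ssh_anom = detrend(ssh, axis=0)                 # remove the linear trend at each transect point

som = MiniSom(3, 3, n_points, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(ssh_anom, 5000)

# Assign each day to its best-matching node; node frequencies and transitions can then
# be related to elevated-sea-level and nuisance-flood days.
nodes = np.array([som.winner(x) for x in ssh_anom])
print(np.unique(nodes, axis=0, return_counts=True))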

Keywords: AVISO satellite altimetry SSHA, MODIS IR flood map, nuisance flood, remote sensing of flood

Procedia PDF Downloads 143
317 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach

Authors: Kristina Pflug, Markus Busch

Abstract:

Being able to predict polymer properties and processing behavior based on the applied operating reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE, i.e., its melt flow behavior, is determined as a function of the previously determined polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscosimetry, and a multi-angle light scattering detector is applied. It serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be extraordinary, especially taking into consideration that the applied multi-scale modelling approach does not involve parameter fitting to the data. This validates the suggested approach and proves its universality at the same time. In the next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analyses for systematically varied process conditions are easily feasible. The developed multi-scale modelling approach finally gives the opportunity to predict and design LDPE processing behavior simply based on process conditions such as feed streams and inlet temperatures and pressures.
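To convey the flavour of the Monte Carlo microstructure step named above, the toy sketch below samples chain lengths from a most-probable (geometric) distribution and places long-chain branch points at a fixed per-monomer probability; the probabilities are arbitrary illustrations of the idea, not the kinetic model or branching frequencies used in the paper.

# Hedged sketch: toy Monte Carlo sampling of chain length and long-chain branching.
import numpy as np

rng = np.random.default_rng(5)
p_propagation = 0.999          # assumed probability a growing chain adds another monomer
p_lcb_per_monomer = 2e-4       # assumed long-chain branching probability per monomer

def sample_chain():
    length = rng.geometric(1 - p_propagation)        # most-probable chain-length distribution
    n_lcb = rng.binomial(length, p_lcb_per_monomer)  # branch points placed along this chain
    return length, n_lcb

chains = [sample_chain() for _ in range(100_000)]
lengths, lcbs = map(np.array, zip(*chains))
print(f"number-average chain length: {lengths.mean():.0f}")
print(f"LCB per 1000 monomers: {1000 * lcbs.sum() / lengths.sum():.3f}")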

Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology

Procedia PDF Downloads 124
316 Strategies for Improving and Sustaining Quality in Higher Education

Authors: Anshu Radha Aggarwal

Abstract:

Higher Education (HE) in India has experienced a series of remarkable changes over the last fifteen years as successive governments have sought to make the sector more efficient and more accountable for the investment of public funds. Rapid expansion in student numbers and pressure to widen participation amongst non-traditional students are key challenges facing HE. Learning outcomes can act as a benchmark for assuring quality and efficiency in HE, and they also enable universities to describe courses in an unambiguous way so as to demystify (and open up) education to a wider audience. This paper examines how learning outcomes are used in HE and evaluates the implications for curriculum design and student learning. There has been huge expansion in the field of higher education, both technical and non-technical, in India during the last two decades, and this trend is continuing. It is expected that about another 400 colleges and 300 universities will be created by the end of the 13th Plan period. This has led to many concerns about the quality of education and training of our students. Many studies have brought out the issues ailing our curricula, delivery, monitoring, and assessment. The Government of India (via MHRD, UGC, NBA, …) has initiated several steps to bring improvement in the quality of higher education and training, such as the National Skills Qualification Framework and making accreditation of institutions mandatory in order to receive Government grants. Moreover, Outcome-Based Education and Training (OBET) has also been mandated and encouraged in teaching/learning institutions. MHRD, UGC, and NBA have made accreditation of schools, colleges, and universities mandatory w.e.f. January 2014. The OBET approach is learner-centric, whereas the traditional approach has been teacher-centric. OBET is a process which involves the re-orientation/restructuring of the curriculum, its implementation, the assessment/measurement of educational goals, and the achievement of higher-order learning, rather than merely passing the university examinations. OBET aims to bring about these desired changes within the students by increasing knowledge, developing skills, influencing attitudes, and creating a social-connect mindset. This approach has been adopted by several leading universities and institutions in advanced countries around the world. The objectives of this paper are to highlight the issues concerning quality in higher education and quality frameworks, to deliberate on the various education and training models, to explain the outcome-based education and assessment processes, to provide an understanding of the NAAC and outcome-based accreditation criteria and processes, and to share best-practice outcome-based accreditation systems and processes.

Keywords: learning outcomes, curriculum development, pedagogy, outcome based education

Procedia PDF Downloads 523
315 Geological Characteristics and Hydrocarbon Potential of M’Rar Formation Within NC-210, Atshan Saddle Ghadamis-Murzuq Basins, Libya

Authors: Sadeg M. Ghnia, Mahmud Alghattawi

Abstract:

The NC-210 study area is located in the Atshan Saddle between the Ghadamis and Murzuq basins, west Libya. The preserved Palaeozoic successions are predominantly clastics, reaching a thickness of more than 20,000 ft in the northern Ghadamis Basin depocentre. The Carboniferous series consists of interbedded sandstone, siltstone, shale, claystone, and minor limestone deposited in a fluctuating shallow marine to brackish lacustrine/fluviatile environment; it attains a maximum thickness of over 5,000 ft in the Atshan Saddle area and reaches 3,500 ft in outcrops on the Murzuq Basin flanks. The Carboniferous strata were uplifted and eroded during Late Paleozoic and early Mesozoic time in the northern Ghadamis Basin and Atshan Saddle. The M'rar Formation is Tournaisian to Late Serpukhovian in age based on palynological markers and contains about 12 cycles of sandstone and shale deposited in shallow to outer neritic deltaic settings. The hydrocarbons in the M'rar reservoirs are possibly sourced from the Lower Silurian and possibly Frasnian radioactive hot shales. The lateral, vertical, and thickness distribution of the M'rar Formation is possibly influenced by the reactivation of the Tumarline strike-slip fault and its conjugate faults. Pronounced structural paleohighs and paleolows, trending SE and NW through the Gargaf Saddle, possibly indicate the presence of two sub-basins in the Atshan Saddle area. A number of seismic reflectors identified from the existing 2D seismic data covering the Atshan Saddle correspond to the 12 M'rar deltaic sandstone cycles. M'rar7, M'rar9, M'rar10, and M'rar12 are characterized by high-amplitude reflectors, while M'rar2 and M'rar6 are characterized by medium-amplitude reflectors. These horizons are productive reservoirs in the study area. The available seismic data in the study area contributed significantly to the identification of M'rar potential traps, which are prominently 3-way dip closures against fault zones. The seismic data also indicate the presence of a significant strike-slip component with the development of flower structures. The M'rar Formation hydrocarbon discoveries are concentrated mainly in the Atshan Saddle, located in the southern Ghadamis Basin, Libya, and in the Illizi Basin in southeastern Algeria. Significant additional hydrocarbons may be present in areas adjacent to the Gargaf Uplift, along structural highs, and fringing the Hoggar Uplift, providing suitable migration pathways.

Keywords: hydrocarbon potential, stratigraphy, Ghadamis basin, seismic, well data integration

Procedia PDF Downloads 74
314 Policy Initiatives That Increase Mass-Market Participation of Fuel Cell Electric Vehicles

Authors: Usman Asif, Klaus Schmidt

Abstract:

In recent years, the development of alternative fuel vehicles has helped to reduce carbon emissions worldwide. As the number of vehicles continues to increase in the future, the energy demand will also increase. Therefore, we must consider automotive technologies that are efficient and less harmful to the environment in the long run. Battery Electric Vehicles (BEVs) have gained popularity in recent years because of their lower maintenance, lower fuel costs, and lower carbon emissions. Nevertheless, BEVs show several disadvantages, such as slow charging times and lower range than traditional combustion-powered vehicles. These factors keep many people from switching to BEVs. The authors of this research believe that these limitations can be overcome by using fuel cell technology. Fuel cell technology converts the chemical energy of hydrogen into electrical energy that powers the motor, thus replacing the heavy lithium batteries that are expensive and hard to recycle. Also, in contrast to battery-powered electric vehicle technology, Fuel Cell Electric Vehicles (FCEVs) offer longer ranges and shorter refuelling times and are therefore more competitive with electric vehicles. However, FCEVs have not gained the same popularity as electric vehicles due to stringent legal frameworks, underdeveloped infrastructure, high fuel transport and storage costs, and the expense of fuel cell technology itself. This research focuses on the legal frameworks for hydrogen-powered vehicles and on how a change in these policies may improve hydrogen fueling infrastructure and lower hydrogen transport and storage costs. These policies may also facilitate reductions in fuel cell technology costs. In order to attain a better framework, a number of countries have developed conceptual roadmaps. These roadmaps have set out a series of objectives to increase the access of FCEVs to their respective markets. This research specifically focuses on policies in Japan, Europe, and the USA in their attempt to shape the automotive industry of the future. The researchers also suggest additional policies that may help to accelerate the advancement of FCEVs to mass markets. The approach was to provide a solid literature review using resources from around the globe. After a subsequent analysis and synthesis of this review, the authors concluded that, in spite of the legal challenges that have hindered the advancement of fuel cell technology in the automobile industry in the past, new initiatives that enhance and advance this same technology are now underway.

Keywords: fuel cell electric vehicles, fuel cell technology, legal frameworks, policies and regulations

Procedia PDF Downloads 117
313 Development of Academic Software for Medial Axis Determination of Porous Media from High-Resolution X-Ray Microtomography Data

Authors: S. Jurado, E. Pazmino

Abstract:

Determination of the medial axis of a porous media sample is a non-trivial problem of interest for several disciplines, e.g., hydrology, fluid dynamics, contaminant transport, filtration, oil extraction, etc. However, the computational tools available to researchers are limited and often restricted. The primary aim of this work was to develop a series of algorithms to extract porosity, medial axis structure, and pore-throat size distributions from porous media domains. A complementary objective was to provide the algorithms as free computational software available to the academic community of researchers and students interested in 3D data processing. The burn algorithm was tested on porous media data obtained from High-Resolution X-Ray Microtomography (HRXMT) and on idealized computer-generated domains. The real data and idealized domains were discretized into voxel domains of 550³ elements and binarized to denote solid and void regions in order to determine porosity. The algorithm then identifies the layer of void voxels next to the solid boundaries. An iterative process removes, or 'burns', these void voxels layer by layer until all the void space has been characterized. Multiple strategies were tested to optimize the execution time and use of computer memory, i.e., segmentation of the overall domain into subdomains, vectorization of operations, and extraction of single burn-layer data during the iterative process. The medial axis was determined by identifying regions where burnt layers collide. The final medial axis structure was refined to avoid concave-grain effects and was used to determine the pore-throat size distribution. A graphical user interface was developed to encompass all these algorithms, including the generation of idealized porous media domains. The software accepts HRXMT data as input, calculates porosity, medial axis, and pore-throat size distribution, and provides output in tabular and graphical formats. Preliminary tests of the software developed during this study achieved medial axis, pore-throat size distribution and porosity determination of 100³, 320³ and 550³ voxel porous media domains in 2, 22, and 45 minutes, respectively, on a personal computer (Intel i7 processor, 16 GB RAM). These results indicate that the software is a practical and accessible tool for postprocessing HRXMT data for the academic community.
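The layer-by-layer burn described above can be prototyped in a few lines. The sketch below is a minimal illustration, not the authors' optimized software: it uses a small synthetic 64³ domain with a single spherical grain (hypothetical), assigns each void voxel a burn-layer number by iterative erosion, and approximates the medial axis as the void voxels whose burn number is a local maximum, i.e. where burn fronts collide.

```python
import numpy as np
from scipy.ndimage import binary_erosion, maximum_filter

# Hypothetical idealized domain: 64^3 voxels with one spherical solid grain.
n = 64
z, y, x = np.ogrid[:n, :n, :n]
solid = (x - n / 2) ** 2 + (y - n / 2) ** 2 + (z - n / 2) ** 2 <= (n / 4) ** 2
void = ~solid

porosity = void.sum() / void.size  # fraction of void voxels

# Burn: peel off the layer of void voxels adjacent to solid (or the domain
# boundary), label it with the iteration number, and repeat until no void remains.
burn = np.zeros(void.shape, dtype=int)
remaining = void.copy()
layer = 0
while remaining.any():
    layer += 1
    eroded = binary_erosion(remaining)   # interior void after removing one layer
    peeled = remaining & ~eroded         # the layer burnt in this iteration
    burn[peeled] = layer
    remaining = eroded

# Medial axis (approximation): void voxels whose burn number is not exceeded
# by any neighbour, i.e. locations where opposing burn fronts collided.
medial_axis = (maximum_filter(burn, size=3) == burn) & void

print(f"porosity = {porosity:.3f}, burn layers = {layer}, "
      f"medial-axis voxels = {medial_axis.sum()}")
```

In the authors' software the same peeling is applied per subdomain and vectorized, which is how the reported run times on 550³ domains are reached; the sketch above omits those optimizations and the pore-throat post-processing.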

Keywords: medial axis, pore-throat distribution, porosity, porous media

Procedia PDF Downloads 115
312 Conservative and Surgical Treatment of Antiresorptive Drug-Related Osteonecrosis of the Jaw with Ultrasonic Piezoelectric Bone Surgery under Polyvinylpyrrolidone Iodine Irrigation: A Case Series of 13 Treated Sites

Authors: Esra Yuce, Isil D. S. Yamaner, Murude Yazan

Abstract:

Aims and objective: Antiresorptive agents, including bisphosphonates and denosumab, are strong suppressors of osteoclasts and the most commonly used medications for the treatment of osteoporosis; they counteract the negative quantitative alteration of trabecular and cortical bone by inhibiting bone turnover. Oral bisphosphonate therapy for the treatment of osteopenia, osteoporosis or Paget's disease is associated with a low risk of osteonecrosis of the jaw, while a higher risk is associated with intravenous bisphosphonate therapy in the treatment of multiple myeloma and bone metastases. On the other hand, there has been a remarkable increase in the incidence of antiresorptive-related osteonecrosis of the jaw (ARONJ) in oral bisphosphonate users. This clinical case series evaluates healing outcomes of piezoelectric bone surgery under irrigation with a PVP-I solution in patients who had received bisphosphonate therapy. Material and method: The study involved 8 female and 5 male patients treated for ARONJ. Among the 13 necrotic sites, 9 were in the mandible and 4 were in the maxilla. All 13 patients were treated with surgical debridement via piezoelectric bone surgery under irrigation with a 3% PVP-I solution, in combination with long-term antibiotic therapy; 5 also underwent removal of mobile segments of bony sequestrum. All removable prostheses in 8 patients were relined with soft liners during the healing period in order to eliminate chronic minor trauma. Results: All patients had been on oral bisphosphonate therapy for at least 2 years, and 5 of them had received intravenous bisphosphonates up to 1 year before oral bisphosphonate therapy was started. According to the AAOMS staging system, four cases were stage II, eight cases were stage I, and one case was stage III. The majority of lesions were identified at sites of dental prostheses (38%) and dental extractions (62%). All patients diagnosed with ARONJ stage I had used unadjusted removable prostheses. No recurrence of symptoms was observed during the present follow-up (9–37 months). Conclusion: Despite their confirmed effectiveness, the prevention and treatment of osteonecrosis of the jaw secondary to oral bisphosphonate therapy remain major medical challenges. Treatment with piezoelectric bone surgery under irrigation with povidone-iodine solution was effective for the management of bisphosphonate-related osteonecrosis of the jaw. Taking precautions for patients treated with oral bisphosphonates, especially denture users, may reduce the rate of osteonecrosis of the maxillofacial region.

Keywords: antiresorptive drug related osteonecrosis, bisphosphonate therapy, piezoelectric bone surgery, povidone iodine

Procedia PDF Downloads 265
311 Cognitive Performance and Everyday Functionality in Healthy Greek Seniors

Authors: George Pavlidis, Ana Vivas

Abstract:

The demographic shift towards an aging population has stimulated the examination of seniors' mental health and their ability to live independently. The corresponding literature depicts the relation between cognitive decline and everyday functionality with aging, focusing largely on individuals who are approaching or have crossed the threshold of various forms of neuropathology and disability. In this context, a recent meta-analysis depicts a moderate relation between cognitive performance and everyday functionality in AD sufferers. However, there has not been an analogous effort to examine this relation in the healthy spectrum of aging (i.e., in samples that are not challenged by a neurodegenerative disease). There is a consensus that the assessment tools designed to detect neuropathology are distinct from those that assess cognitive performance in healthy adults; thus, their universal use in both cognitively challenged and healthy adults is not always valid. The same applies to the assessment of everyday functionality. In addition, it is argued that everyday functionality should be examined with culturally adjusted assessment tools, since many vital everyday tasks differ among cultures. Therefore, this study set out to examine the relation between cognitive performance and everyday functionality (a) in the healthy spectrum of aging and (b) by adjusting the everyday functionality tools EPT and OTDL-R to the Greek cultural context. In Greece, 107 cognitively healthy seniors (mean age = 62.24) completed a battery of neuropsychological tests and everyday functionality tests. Both were carefully chosen to be sensitive to fluctuations of performance within the healthy spectrum of cognitive performance and everyday functionality. The everyday functionality assessment tools were modified to reflect the local cultural context (i.e., EPT-G and OTDL-G). The results showed that performance in all everyday functionality measures declines with age (r = .197 to .509). Statistically significant correlations emerged between cognitive performance and everyday functionality assessments, ranging from r = 0.202 to r = 0.510. A series of independent regression analyses including the scores of the cognitive assessments yielded statistically significant models that explained 20.9% to 32.4% (adjusted R²) of the variance in the everyday functionality indexes. All everyday functionality measures were independently predicted by the TMT B-A index, an indicator of executive function. Stepwise regression analyses showed that TMT B-A and age were statistically significant independent predictors of EPT-G and OTDL-G. It was concluded that everyday functionality declines with age and that cognitive performance and everyday functionality may be related in the healthy spectrum of aging. Age does not appear to be the sole contributing factor in the decline of everyday functionality; executive control contributes as well. Moreover, it was concluded that the EPT-G and OTDL-G are valuable tools for assessing everyday functionality in Greek seniors who are not cognitively challenged, especially for research purposes. Future research should examine the factors that contribute to better cognitive vitality, especially executive control, which is vital for maintaining the capacity for independent living with aging.
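The regression step described above can be illustrated with a short script. This is a hedged sketch only: the variable names (age, tmt_b_minus_a, ept_g) are hypothetical stand-ins for the study's measures, and the synthetic data merely shows the mechanics of fitting an OLS model and reading the adjusted R², not the study's results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 107  # sample size reported in the abstract

# Synthetic placeholder data; in the study these would be the measured scores.
df = pd.DataFrame({
    "age": rng.uniform(55, 80, n),
    "tmt_b_minus_a": rng.normal(40, 12, n),  # TMT B-A, executive-function index
})
df["ept_g"] = 30 - 0.1 * df["age"] - 0.08 * df["tmt_b_minus_a"] + rng.normal(0, 2, n)

# OLS with age and TMT B-A as predictors of the EPT-G everyday-functionality score.
X = sm.add_constant(df[["age", "tmt_b_minus_a"]])
model = sm.OLS(df["ept_g"], X).fit()

print(model.summary())
print("adjusted R^2:", round(model.rsquared_adj, 3))
```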

Keywords: cognition, everyday functionality, aging, cognitive decline, healthy aging, Greece

Procedia PDF Downloads 523
310 Management of Caverno-Venous Leakage: A Series of 133 Patients with Symptoms, Hemodynamic Workup, and Results of Surgery

Authors: Allaire Eric, Hauet Pascal, Floresco Jean, Beley Sebastien, Sussman Helene, Virag Ronald

Abstract:

Background: Caverno-venous leakage (CVL) is a devastating although barely known disease, the first cause of major physical impairment in men under 25 and responsible for 50% of resistance to phosphodiesterase-5 inhibitors (PDE5-I), affecting 30 to 40% of users of this medication class. In this condition, premature blood drainage from the corpora cavernosa prevents penile rigidity and penetration during sexual intercourse. The role of conservative surgery in this disease remains controversial. Aim: To assess complications and results of combined open surgery and embolization for CVL. Method: Between June 2016 and September 2021, 133 consecutive patients underwent surgery in our institution for CVL causing severe erectile dysfunction (ED) resistant to oral medical treatment. Procedures combined vein embolization and ligation with microsurgical techniques. We performed pre- and post-operative clinical (Erection Hardness Score: EHS) and hemodynamic evaluations by duplex sonography in all patients. Before surgery, the CVL network was visualized by computed tomography cavernography. Penile EMG was performed in cases of diabetes or other suspected neurological conditions. All patients were optimized for hormonal status. Data were prospectively recorded. Results: Clinical signs suggesting CVL were ED present since before the age of 25, loss of erection when changing position, and penile rigidity varying according to position. The main complications were minor pulmonary embolism in 2 patients (one after airline travel, one with a heterozygous Factor V Leiden mutation), one infection and three hematomas requiring reoperation, and one case of decreased glans sensitivity lasting for more than one year. The mean pre-operative pharmacologic EHS was 2.37 ± 0.64 and the mean post-operative pharmacologic EHS was 3.21 ± 0.60, p < 0.0001 (paired t-test). The mean EHS change was 0.87 ± 0.74. After surgery, 81.5% of patients had a pharmacologic EHS equal to or over 3, allowing for intercourse with penetration. Three patients (2.2%) experienced a lower post-operative EHS. The main cause of failure was leakage from the deep dorsal aspect of the corpora cavernosa. At 14 months of follow-up, 83.2% of patients had a clinical EHS equal to or over 3, allowing for sexual intercourse with penetration, one-third of them without any medication. Five patients received a penile implant after unsuccessful conservative surgery. Conclusion: Open surgery combined with embolization is an efficient approach to CVL causing severe erectile dysfunction.

Keywords: erectile dysfunction, cavernovenous leakage, surgery, embolization, treatment, result, complications, penile duplex sonography

Procedia PDF Downloads 149
309 Downward Vertical Evacuation for Disabilities People from Tsunami Using Escape Bunker Technology

Authors: Febrian Tegar Wicaksana, Niqmatul Kurniati, Surya Nandika

Abstract:

Indonesia is one of the countries facing the greatest number of disaster occurrences and threats, such as earthquakes, tsunamis and volcanic eruptions, because it is located not only at the meeting of three tectonic plates (the Eurasian, Indo-Australian and Pacific plates) but also on the Ring of Fire. Recent research shows that there are areas on the southern coast of Java that could be devastated by a tsunami. A tsunami is a series of waves in a body of water caused by the displacement of a large volume of water, generally in an ocean. When the waves enter shallow water, they may rise to several feet or, in rare cases, tens of feet, striking the coast with devastating force. The reference parameters include the earthquake magnitude, the depth of the epicentre, the distance between the epicentre and the land, the water depth at every point along the path, the time at which the waves reach the shore and the growth of the waves. The interaction between these parameters produces a large variance in the tsunami wave. Based on these, the preparations needed for disaster mitigation strategies can be formulated. Mitigation strategies play an important role in reducing the number of victims and the damage in the area, and this reduction is directed in particular at the people who are most difficult to mobilize in a tsunami disaster area, such as the elderly, the sick and people with disabilities. Until now, the method used for rescuing people from a tsunami has been basic horizontal evacuation. This evacuation system is not optimal because it takes a long time and cannot be used by people with disabilities. The writers propose a vertical evacuation model with an escape bunker system. This system was chosen because downward vertical evacuation is considered faster and more efficient, especially in coastal areas without any surrounding highlands. A downward evacuation system is better than upward evacuation because it avoids the risk of erosion of the ground around the structure, which can affect the building. The structure of the bunker and the evacuation process during, and even after, the disaster are the main priorities to be considered. The bunker must provide earthquake resistance, durability against the water stream, accommodation of various interactions with the ground, and a waterproof design. When the situation returns to normal, the evacuees can move to a safer place. The bunker will be located near hospitals and public places and will have a wide entrance with a large slide inside to ease access for people with disabilities. The escape bunker technology is expected to reduce the number of victims with low mobility in a tsunami.

Keywords: escape bunker, tsunami, vertical evacuation, mitigation, disaster management

Procedia PDF Downloads 492
308 Environmental Performance of Different Lab Scale Chromium Removal Processes

Authors: Chiao-Cheng Huang, Pei-Te Chiueh, Ya-Hsuan Liou

Abstract:

Chromium-contaminated wastewater from industrial electroplating activity has been a long-standing environmental issue, as it can degrade surface water quality and is harmful to soil ecosystems. The traditional method of treating chromium-contaminated wastewater has been chemical coagulation, which consumes large amounts of chemicals such as sulfuric acid, sodium hydroxide, and sodium bicarbonate in order to remove chromium. In recent years, however, a series of new methods for treating chromium-containing wastewater have been developed. This study aimed to compare the environmental impact of four different lab-scale chromium removal processes: 1) the chemical coagulation process (the most common and traditional method), in which sodium metabisulfite was used as the reductant; 2) an electrochemical process using two steel sheets as electrodes; 3) reduction by iron-copper bimetallic powder; and 4) a photocatalysis process using TiO2. Each process was run in the lab and was able to achieve 100% removal of chromium from solution. A Life Cycle Assessment (LCA) study was then conducted, based on the experimental data obtained from the four case studies, to identify the environmentally preferable alternative for treating chromium wastewater. The model used for calculating the environmental impact was TRACI, and the system scope includes the production and use phases of the chemicals and electricity consumed by the chromium removal processes, as well as the final disposal of the chromium-containing sludge. The functional unit chosen in this study was the removal of 1 mg of chromium. The solution volume of each case study was adjusted to 1 L in advance, and the chemicals and energy consumed were adjusted proportionally. The emissions and resources consumed were identified and characterized into 15 categories of midpoint impacts. The impact assessment results show that the human ecotoxicity category accounts for 55% of the environmental impact in Case 1, which can be attributed to the sulfuric acid used for pH adjustment. In Case 2, the production of the steel sheet electrodes is an energy-intensive process and thus contributed 20% of the environmental impact. In Case 3, sodium bicarbonate is used as an anti-corrosion additive, which results mainly in 1.02E-05 Comparative Toxicity Units (CTU) in the human toxicity category and 0.54E-05 CTU in acidification of air. In Case 4, the electricity consumed by the UV lamp power supply gives 5.25E-05 CTU in the human toxicity category and 1.15E-05 kg N eq in eutrophication. In conclusion, Cases 3 and 4 have higher environmental impacts than Cases 1 and 2, which can be attributed mostly to higher energy and chemical consumption, leading to high impacts in the global warming and ecotoxicity categories.
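The normalization to the functional unit can be made concrete with a short sketch. The inventory quantities and characterization factors below are hypothetical placeholders (the study's actual inventories and TRACI factors are not reproduced here); the sketch only shows how a per-litre inventory is scaled to the removal of 1 mg of chromium and then multiplied by midpoint characterization factors.

```python
# Minimal LCA midpoint characterization sketch (illustrative values only).

# Hypothetical per-litre inventory for one treatment case: amounts of
# chemicals/electricity used and chromium removed per litre treated.
case_inventory = {
    "sulfuric_acid_kg": 2.0e-3,
    "sodium_hydroxide_kg": 1.5e-3,
    "electricity_kWh": 4.0e-3,
}
cr_removed_mg_per_L = 50.0  # chromium removed per litre (hypothetical)

# Hypothetical characterization factors per unit of each flow, for two
# midpoint categories (stand-ins for the TRACI factors used in the study).
factors = {
    "sulfuric_acid_kg":    {"human_toxicity_CTU": 3.0e-3, "global_warming_kgCO2eq": 0.12},
    "sodium_hydroxide_kg": {"human_toxicity_CTU": 1.0e-3, "global_warming_kgCO2eq": 1.10},
    "electricity_kWh":     {"human_toxicity_CTU": 5.0e-4, "global_warming_kgCO2eq": 0.70},
}

# Scale the per-litre inventory to the functional unit: removal of 1 mg chromium.
scaled = {flow: amount / cr_removed_mg_per_L
          for flow, amount in case_inventory.items()}

# Characterize: impact per category = sum over flows of (amount * factor).
impacts = {}
for flow, amount in scaled.items():
    for category, factor in factors[flow].items():
        impacts[category] = impacts.get(category, 0.0) + amount * factor

for category, value in impacts.items():
    print(f"{category}: {value:.3e} per mg Cr removed")
```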

Keywords: chromium, lab scale, life cycle assessment, wastewater

Procedia PDF Downloads 265
307 A Regional Analysis on Co-movement of Sovereign Credit Risk and Interbank Risks

Authors: Mehdi Janbaz

Abstract:

The global financial crisis and the credit crunch that followed magnified the importance of credit risk management and its crucial role in the stability of all financial sectors and of the system as a whole. Many believe that the risks faced by the sovereign sector are highly interconnected with banking risks and that the two are likely to trigger and reinforce each other. This study aims to examine (1) the impact of banking and interbank risk factors on the sovereign credit risk of the Eurozone, and (2) how the dynamics of EU Credit Default Swap (CDS) spreads are affected by crude oil price fluctuations. The hypotheses are tested by employing fitting risk measures and a four-staged linear modeling approach. Sovereign senior 5-year CDS spreads are used as the core measure of credit risk. The monthly time-series data of the variables used in the study are gathered from the DataStream database for the period 2008-2019. First, a linear model tests the impact of regional macroeconomic and market-based factors (STOXX, VSTOXX, Oil, Sovereign Debt, and Slope) on the CDS spread dynamics. Second, the bank-specific factors, including the LIBOR-OIS spread (the difference between the Euro 3-month LIBOR rate and the Euro 3-month overnight index swap rate) and Euribor, are added to the most significant factors of the previous model. Third, the global financial factors, including EUR/USD foreign exchange volatility, the TED spread (the difference between the 3-month T-bill rate and the 3-month LIBOR rate in US dollars), and the Chicago Board Options Exchange (CBOE) Crude Oil Volatility Index (OVX), are added to the major significant factors of the first two models. Finally, a model is generated from a combination of the major factors of each variable set, in addition to a crisis dummy. The findings show that (1) the explanatory power of the LIBOR-OIS spread on the sovereign CDS spread of the Eurozone is very significant, and (2) there is a meaningful adverse co-movement between the crude oil price and the Eurozone CDS price. Surprisingly, adding the TED spread alongside the LIBOR-OIS spread in the third and fourth models increased the predictive power of LIBOR-OIS. Based on the results, LIBOR-OIS, STOXX, the TED spread, Slope, the oil price, OVX, FX volatility, and Euribor are the determinants of CDS spread dynamics in the Eurozone. Moreover, the positive impact of the crisis period on the creditworthiness of the Eurozone is meaningful.
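The four-staged modeling described above can be sketched as a loop that fits an OLS model at each stage and carries the significant predictors forward. This is a hedged illustration, not the study's code: the column names mirror the factors named in the abstract, but the DataFrame is a synthetic placeholder for the monthly DataStream series, which are not reproduced here.

```python
import pandas as pd
import statsmodels.api as sm

def fit_stage(df, y_col, x_cols, alpha=0.05):
    """Fit OLS of y on x_cols; return (fitted model, predictors with p < alpha)."""
    X = sm.add_constant(df[x_cols])
    model = sm.OLS(df[y_col], X, missing="drop").fit()
    significant = [c for c in x_cols if model.pvalues[c] < alpha]
    return model, significant

def staged_models(df):
    # Stage definitions follow the factor groups named in the abstract.
    stages = {
        "regional": ["stoxx", "vstoxx", "oil", "debt", "slope"],
        "bank":     ["libor_ois", "euribor"],
        "global":   ["fx_vol", "ted", "ovx"],
    }
    carried = []          # significant predictors carried to the next stage
    results = {}
    for name, new_factors in stages.items():
        model, significant = fit_stage(df, "cds", carried + new_factors)
        results[name] = model
        carried = significant
    # Final stage: major factors of each set plus the crisis dummy.
    results["final"], _ = fit_stage(df, "cds", carried + ["crisis"])
    return results

if __name__ == "__main__":
    import numpy as np
    rng = np.random.default_rng(1)
    cols = ["cds", "stoxx", "vstoxx", "oil", "debt", "slope",
            "libor_ois", "euribor", "fx_vol", "ted", "ovx"]
    # Synthetic monthly data standing in for the 2008-2019 DataStream series.
    demo = pd.DataFrame(rng.normal(size=(144, len(cols))), columns=cols)
    demo["crisis"] = (rng.uniform(size=144) < 0.2).astype(int)
    print(staged_models(demo)["final"].summary())
```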

Keywords: CDS, crude oil, interbank risk, LIBOR-OIS, OVX, sovereign credit risk, TED

Procedia PDF Downloads 144
306 Production of Bricks Using Mill Waste and Tyre Crumbs at a Low Temperature by Alkali-Activation

Authors: Zipeng Zhang, Yat C. Wong, Arul Arulrajah

Abstract:

Since automobiles became widely popular in the early 20th century, end-of-life tyres have been one of the major types of waste humans encounter, with considerable quantities of tyres disposed of around the world every minute. Most end-of-life tyres are simply landfilled or stockpiled rather than recycled. To address the potential issues caused by tyre waste, one possibility is to incorporate it into construction materials. This research investigated the viability of manufacturing bricks from mill waste and tyre crumbs by alkali-activation at a relatively low temperature. The mill waste was extracted from a brick factory located in Melbourne, Australia, and the tyre crumbs were supplied by a local recycling company. As the main precursor, the mill waste was activated by an alkaline solution comprising sodium hydroxide (8m) and liquid sodium silicate. The addition ratio of the alkaline solution (relative to the solid weight) and the weight ratio between sodium hydroxide and sodium silicate were fixed at 20 wt.% and 1:1, respectively. The tyre crumbs were introduced to substitute part of the mill waste at four ratios by weight, namely 0, 5, 10 and 15%. The mixture of mill waste and tyre crumbs was first dry-mixed for 2 min to ensure homogeneity, followed by 2.5 min of wet mixing after the solution was added. The mixture was then press-moulded into blocks 109 mm long, 112.5 mm wide and 76 mm high. The blocks were cured at 50°C and 95% relative humidity for 2 days, followed by oven-curing at 110°C for 1 day. All the samples were then kept under ambient conditions until testing at 7 and 28 days. A series of tests was conducted to evaluate the linear shrinkage, compressive strength and water absorption of the samples. In addition, the microstructure of the samples was examined by scanning electron microscopy (SEM). The results showed that the highest compressive strength, 17.6 MPa, was found in the 28-day-old group using 5 wt.% tyre crumbs; this strength satisfies the requirement of ASTM C67. However, increasing the addition of tyre crumbs weakened the compressive strength of the samples. Apart from strength, the linear shrinkage and water absorption of all the groups meet the requirements of the standard. It is worth noting that the use of tyre crumbs tended to decrease the shrinkage and even caused expansion when the tyre content was over 15 wt.%. The research also found a significant reduction in compressive strength for the samples after the water absorption tests. In conclusion, tyre crumbs have the potential to be used as a filler material in brick manufacturing, but more research needs to be done to tackle the durability problem.
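As a quick check on the mix design stated above, the batch proportioning follows directly from the fixed ratios: alkaline solution at 20 wt.% of the solids, sodium hydroxide and sodium silicate split 1:1 by weight, and tyre crumbs substituting 0 to 15 wt.% of the solids. The batch size in the sketch below is a hypothetical value used only to illustrate the arithmetic.

```python
# Batch proportioning sketch based on the ratios reported in the abstract.
# The 1000 g batch size is hypothetical; only the ratios come from the study.

SOLUTION_RATIO = 0.20     # alkaline solution, relative to solid weight
NAOH_TO_SILICATE = 1.0    # 1:1 weight ratio of NaOH solution to sodium silicate
TYRE_FRACTIONS = [0.00, 0.05, 0.10, 0.15]

def batch(total_solid_g: float, tyre_fraction: float) -> dict:
    tyre = total_solid_g * tyre_fraction
    mill_waste = total_solid_g - tyre
    solution = total_solid_g * SOLUTION_RATIO
    naoh = solution * NAOH_TO_SILICATE / (1.0 + NAOH_TO_SILICATE)
    silicate = solution - naoh
    return {"mill_waste_g": mill_waste, "tyre_g": tyre,
            "naoh_g": naoh, "silicate_g": silicate}

for f in TYRE_FRACTIONS:
    amounts = batch(1000.0, f)  # hypothetical 1000 g of solids per batch
    print(f"{f:>4.0%} tyre:", {k: round(v, 1) for k, v in amounts.items()})
```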

Keywords: bricks, mill waste, tyre crumbs, waste recycling

Procedia PDF Downloads 122
305 Development of a Bus Information Web System

Authors: Chiyoung Kim, Jaegeol Yim

Abstract:

Bus service is often either the main or the only form of public transportation available in cities. In metropolitan areas, both subways and buses are available, whereas in medium-sized cities buses are usually the only type of public transportation. Bus Information Systems (BIS) provide users with the current locations of running buses, efficient routes from one place to another, points of interest around a given bus stop, the series of bus stops that make up a given bus route, and so on. Thanks to BIS, people do not have to waste time waiting at a bus stop, because BIS provides exact information on bus arrival times at that stop. BIS therefore does a lot to promote the use of buses, contributing to pollution reduction and the saving of natural resources. BIS implementation requires a huge budget, as it relies on a lot of special equipment such as roadside equipment, automatic vehicle identification and location systems, trunked radio systems, and so on. Consequently, medium and small sized cities with a low budget cannot afford to install a BIS, even though people in these cities need BIS service more desperately than people in metropolitan areas. It is possible to provide BIS service at virtually no cost under the assumption that everybody carries a smartphone and that there is at least one person with a smartphone in a running bus who is willing to reveal his/her location details while sitting in the bus. This assumption is usually true in the real world: the smartphone penetration rate is greater than 100% in developed countries, and there is no reason for a bus driver to refuse to reveal his/her location details while driving. We have developed a mobile app that periodically reads sensor values, including GPS, and sends the GPS data to the server when the bus stops or when the elapsed time since the last send attempt is greater than a threshold. The app detects the bus-stop state by investigating the sensor values. The server that receives GPS data from this app has also been developed. Under the assumption that the current locations of all running buses collected by the mobile app are recorded in a database, we have also developed a web site that provides, through the Internet, all kinds of information that most BISs provide to users. The development environment is: OS: Windows 7 64-bit; IDE: Eclipse Luna 4.4.1 with Spring IDE 3.7.0; Database: MySQL 5.1.7; Web Server: Apache Tomcat 7.0; Programming Language: Java 1.7.0_79. Given a start and a destination bus stop, the system finds a shortest path from the start to the destination using Dijkstra's algorithm. Then, it finds a convenient route considering the number of transfers. For the user interface, we use Google Maps. Template classes used by the Controller, DAO, Service and Utils classes include BUS, BusStop, BusListInfo, BusStopOrder, RouteResult, WalkingDist, Location, and so on. We are now integrating the mobile app system and the web app system.
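The route search named above is a standard application of Dijkstra's algorithm over a graph of bus stops. The system itself is described as a Java/Spring application; the following is a language-neutral sketch in Python with hypothetical stop names and travel times, showing only the shortest-path step (not the transfer-aware refinement or the database layer).

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest travel time from start to goal over a weighted stop graph.

    graph: dict mapping stop -> list of (neighbour stop, travel time in minutes).
    Returns (total time, list of stops on the path), or (inf, []) if unreachable.
    """
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, stop = heapq.heappop(queue)
        if stop == goal:
            path = [stop]
            while stop in prev:
                stop = prev[stop]
                path.append(stop)
            return d, path[::-1]
        if d > dist.get(stop, float("inf")):
            continue  # stale queue entry
        for neighbour, weight in graph.get(stop, []):
            nd = d + weight
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                prev[neighbour] = stop
                heapq.heappush(queue, (nd, neighbour))
    return float("inf"), []

# Hypothetical stop graph (stop ids and travel times are illustrative only).
stops = {
    "A": [("B", 3), ("C", 7)],
    "B": [("C", 2), ("D", 6)],
    "C": [("D", 3)],
    "D": [],
}
print(dijkstra(stops, "A", "D"))  # expected: (8.0, ['A', 'B', 'C', 'D'])
```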

Keywords: bus information system, GPS, mobile app, web site

Procedia PDF Downloads 216
304 Finite Element Analysis of a Glass Facades Supported by Pre-Tensioned Cable Trusses

Authors: Khair Al-Deen Bsisu, Osama Mahmoud Abuzeid

Abstract:

Significant technological advances have been achieved in the design and construction of steel and glass buildings in the last two decades. The metal glass-support frame has been replaced by more sophisticated technological solutions, for example, point-fixed glazing systems. The minimization of visual mass has been pushed far through the evolution of glass production technology, a better understanding of the structural potential of glass itself, the development of bolted fixings, the introduction of glazing support attachments for glass suspension systems, and the use of cables for structural stabilization, which reduce the amount of metal used to a minimum. The variability of tension structure solutions, together with the difficulties related to geometric and material nonlinear behavior, usually rules out analytical solutions, leaving numerical analysis as the only general approach to the design and analysis of tension structures. With their low stiffness, light weight, and small damping, tension structures are inherently geometrically nonlinear. In fact, the analysis of a cable truss is not only one of the most difficult nonlinear analyses, because the analysis path may contain rigid-body modes, but also a time-consuming procedure. Nonlinear theory allowing for large deflections is used. The flexibility of the supporting members was observed to influence the stresses in the pane considerably in some cases. No other class of architectural structural systems is as dependent upon the use of digital computers as tensile structures. Besides their complexity, the design and analysis of tension structures present a series of specificities, which usually lead to the use of special purpose programs instead of general purpose programs (GPPs) such as ANSYS. In a special purpose program, part of the design know-how is embedded in the program routines. It is very probable that this type of program will be the option of the final user in design offices. GPPs, on the other hand, offer a range of analysis types and modeling options; moreover, traditional GPPs are constantly being tested by a large number of users and are updated according to their actual demands. This work discusses the use of ANSYS for the analysis and design of tension structures, such as cable truss structures under wind and gravity loadings. A model describing the glass panels working in coordination with the cable truss was proposed, and a corresponding FEM model was established.
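The geometric nonlinearity discussed above can be illustrated with a very small example: a single node held by two pre-tensioned cable segments, whose transverse stiffness in the undeformed position comes entirely from the pretension. The sketch below is not the ANSYS facade model described in the abstract; it is a minimal Newton-Raphson solution of one pre-tensioned two-cable node under a transverse load, with assumed values for span, axial stiffness, pretension and load.

```python
import math

# Assumed properties (illustrative only).
L = 5.0     # half-span between anchors, m
EA = 2.0e6  # axial stiffness of one cable, N
T0 = 2.0e4  # pretension in each cable at the undeformed position, N
P = 5.0e3   # transverse (vertical) load applied at the node, N

# Unstressed length chosen so each cable carries T0 when the node is undeflected.
l0 = EA * L / (EA + T0)

def residual(v):
    """Out-of-balance vertical force at the node for a deflection v (m)."""
    length = math.sqrt(L**2 + v**2)              # current cable length
    tension = max(EA * (length - l0) / l0, 0.0)  # cable cannot push (slack if negative)
    return 2.0 * tension * v / length - P        # vertical cable components minus load

# Newton-Raphson with a finite-difference tangent stiffness.
v, h = 0.01, 1e-6
for _ in range(50):
    r = residual(v)
    if abs(r) < 1e-6:
        break
    k_t = (residual(v + h) - r) / h
    v -= r / k_t

tension = EA * (math.sqrt(L**2 + v**2) - l0) / l0
print(f"deflection = {v*1000:.1f} mm, cable tension = {tension:.0f} N")
```

Because the load path stiffens as the node deflects and the cables stretch, the response is markedly nonlinear; in a full facade model the same equilibrium iteration is carried out over many cable and glass-panel elements by the finite element solver.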

Keywords: glass construction material, facades, finite element, pre-tensioned cable truss

Procedia PDF Downloads 280