Search results for: central light reflex

429 Empowering South African Female Farmers through Organic Lamb Production: A Cost Analysis Case Study

Authors: J. M. Geyser

Abstract:

Lamb is a popular meat throughout the world, particularly in Europe, the Middle East and Oceania. However, the conventional lamb industry faces challenges related to environmental sustainability, climate change, consumer health and dwindling profit margins. This has stimulated an increasing demand for organic lamb, as it is perceived to increase environmental sustainability and to offer superior quality, taste and nutritional value, which is appealing to farmers, including small-scale and female farmers, as it often commands a premium price. Despite its advantages, organic lamb production presents challenges, with a significant hurdle being the high production costs encompassing organic certification, lower stocking rates, higher mortality rates and marketing costs. These costs impact the profitability and competitiveness of organic lamb producers, particularly female and small-scale farmers, who often encounter additional obstacles, such as limited access to resources and markets. Therefore, this paper examines the cost of producing organic lambs and its impact on female farmers and raises the research question: “Is organic lamb production the saving grace for female and small-scale farmers?” Objectives include estimating and comparing the production costs and profitability of organic lamb production with conventional lamb production, analyzing influencing factors, and assessing opportunities and challenges for female and small-scale farmers. The hypothesis states that organic lamb production can be a viable and beneficial option for female and small-scale farmers, provided that they can overcome high production costs and access premium markets. The study uses a mixed-method approach, combining qualitative and quantitative data. Qualitative data involve semi-structured interviews with ten female and small-scale farmers engaged in organic lamb production in South Africa. The interviews covered topics such as farm characteristics, practices, cost components, mortality rates, income sources and empowerment indicators. Quantitative data comprise secondary published information and primary data from a female farmer. The research findings indicate that when a female farmer moves from conventional lamb production to organic lamb production, the costs in the first year of organic lamb production exceed those of conventional lamb production by over 100%. This is due to lower stocking rates and higher mortality rates in the organic system. However, costs start decreasing in the second year as stocking rates increase due to manure applications on grazing and mortality rates fall due to better worm resistance in the herd. In conclusion, this article sheds light on the economic dynamics of organic lamb production, particularly focusing on its impact on female farmers. To empower female farmers and to promote sustainable agricultural practices, it is imperative to understand the cost structures and profitability of organic lamb production.

Keywords: cost analysis, empowerment, female farmers, organic lamb production

Procedia PDF Downloads 75
428 Digital Transformation in Fashion System Design: Tools and Opportunities

Authors: Margherita Tufarelli, Leonardo Giliberti, Elena Pucci

Abstract:

The fashion industry's interest in virtuality is linked, on the one hand, to the emotional and immersive possibilities of digital resources and the resulting languages and, on the other, to the greater efficiency that can be achieved throughout the value chain. The interaction between digital innovation and deep-rooted manufacturing traditions today translates into a paradigm shift for the entire fashion industry where, for example, the traditional values of industrial secrecy and know-how give way to experimentation in an open and participatory way, and to the complete emancipation of virtual reality from actual 'reality'. The contribution aims to investigate the theme of digitisation in the Italian fashion industry, analysing its opportunities and the critical issues that have hindered its diffusion. There are two reasons why the most common approach in the fashion sector is still analogue: (i) the fashion product lives in close contact with the human body, so the sensory perception of materials plays a central role in both the use and the design of the product, but current technology is not able to restore the sense of touch; (ii) volumes are obtained by stitching flat surfaces that, once assembled, can assume almost infinite configurations given the flexibility of the material. Managing the fit and styling of virtual garments involves a wide range of factors, including mechanical simulation, collision detection, and user interface techniques for garment creation. After briefly reviewing some of the salient historical milestones in the resolution of problems related to the digital simulation of deformable materials and to the user interface for the realisation of the clothing system, the paper describes the operation and possibilities offered today by the latest generation of specialised software. These include parametric avatars and a digital sartorial approach; drawing tools optimised for pattern making; materials simulated in terms of both physical behaviour and aesthetic performance; tools for checking wearability and producing renderings; and tools and procedures useful to companies both for dialogue with prototyping software and machinery and for managing the archive and the variants to be made. The article demonstrates how developments in technology and digital procedures now make it possible to intervene in different stages of design in the fashion industry, in an integrated and additive process in which the constructed 3D models are usable both in the prototyping and communication of physical products and in exclusively digital uses of 3D models in the new generation of virtual spaces. Mastering such tools requires the acquisition of specific digital skills and, at the same time, traditional skills for the design of the clothing system, but the benefits are manifold and applicable to different business dimensions. We are only at the beginning of the global digital transformation: the emergence of new professional figures and design dynamics leaves room for imagination, but in addition to applying digital tools to traditional procedures, traditional fashion know-how needs to be transferred into emerging digital practices to ensure the continuity of the technical-cultural heritage beyond the transformation.

Keywords: digital fashion, digital technology and couture, digital fashion communication, 3D garment simulation

Procedia PDF Downloads 74
427 Enhancing of Antibacterial Activity of Essential Oil by Rotating Magnetic Field

Authors: Tomasz Borowski, Dawid Sołoducha, Agata Markowska-Szczupak, Aneta Wesołowska, Marian Kordas, Rafał Rakoczy

Abstract:

Essential oils (EOs) are fragrant volatile oils obtained from plants. They are used in cooking (for flavour and aroma), cleaning, beauty (e.g., rosemary essential oil is used to promote hair growth), health (e.g., thyme essential oil is used against arthritis, to normalize blood pressure, reduce stress on the heart, and treat chest infection and cough) and in the food industry as preservatives and antioxidants. Rosemary and thyme essential oils are considered among the most eminent herbal oils based on their history and medicinal properties. They possess a wide range of activity against different types of bacteria and fungi compared with other oils, in both in vitro and in vivo studies. However, traditional uses of EOs are limited because rosemary and thyme oils can be toxic at high concentrations. In light of the accessible data, the following hypothesis was put forward: a low-frequency rotating magnetic field (RMF) increases the antimicrobial potential of EOs. The aim of this work was to investigate the antimicrobial activity of commercial Salvia rosmarinus L. and Thymus vulgaris L. essential oils from the Polish company Avicenna-Oil under a rotating magnetic field (RMF) at f = 25 Hz. A self-constructed reactor (MAP) was used for this study. The chemical composition of the oils was determined by gas chromatography coupled with mass spectrometry (GC-MS). The model bacterium Escherichia coli K12 (ATCC 25922) was used. Minimum inhibitory concentrations (MIC) against E. coli were determined for the essential oils. The tested oils were prepared at very low concentrations (from 1 to 3 drops of essential oil per 3 mL of working suspension). From the results of the disc diffusion assay and MIC tests, it can be concluded that thyme oil had the highest antibacterial activity against E. coli. Moreover, the study indicates that exposure to the RMF, as compared to unexposed controls, increased the efficacy of the antibacterial properties of the tested oils. Extended exposure to the RMF at the frequency f = 25 Hz beyond 160 minutes resulted in a significant increase in antibacterial potential against E. coli. Bacteria were killed within 40 minutes in thyme oil at the lower tested concentration (1 drop of essential oil per 3 mL working suspension). A rapid decrease (>3 log) in bacterial numbers was observed with rosemary oil within 100 minutes (at a concentration of 3 drops of essential oil per 3 mL working suspension). Thus, a method for improving the antimicrobial performance of essential oils at low concentrations was developed. However, it still remains to be investigated how bacteria are killed by EOs treated with an electromagnetic field. A possible mechanism based on the alteration of the permeability of ion channels in the bacterial cell walls, and hence of transport into the cells, is proposed. For further studies, it is proposed to examine other types of essential oils and other antibiotic-resistant bacteria (ARB), which are causing serious concern throughout the world.
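For readers less familiar with log-reduction figures, the short sketch below shows how a ">3 log" decrease is computed from colony counts; the CFU values used are purely illustrative assumptions, not measurements from the study.

```python
import math

def log_reduction(cfu_initial: float, cfu_final: float) -> float:
    """Log10 reduction in viable cell count (e.g., CFU/mL)."""
    return math.log10(cfu_initial / cfu_final)

# Illustrative values only: a drop from 1e7 to 5e3 CFU/mL
# corresponds to log10(1e7 / 5e3) ≈ 3.3, i.e., a ">3 log" reduction.
print(f"{log_reduction(1e7, 5e3):.1f} log reduction")
```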

Keywords: rotating magnetic field, rosemary, thyme, essential oils, Escherichia coli

Procedia PDF Downloads 157
426 Modeling the Present Economic and Social Alienation of Working Class in South Africa in the Musical Production ‘from Marikana to Mahagonny’ at Durban University of Technology (DUT)

Authors: Pamela Tancsik

Abstract:

The 2018 stage production ‘From Marikana to Mahagonny’ began with a prologue in the form of the award-winning documentary ‘Miners Shot Down’ by Rehad Desai, followed by Brecht/Weill’s song play, or scenic cantata, ‘Mahagonny’, premièred in Baden-Baden in 1927. The central directorial concept of the DUT musical production ‘From Marikana to Mahagonny’ was to show a connection between the socio-political alienation of mineworkers in present-day South Africa and Brecht’s alienation effect in his scenic cantata ‘Mahagonny’. Marikana is a mining town about 50 km west of South Africa’s capital Pretoria. Mahagonny is a fantasy name for a utopian mining town in the United States. The characters, setting, and lyrics refer to America, with songs like ‘Benares’ and ‘Moon of Alabama’ and the use of typical American inventions such as dollars, saloons, and the telephone. The six singing characters in ‘Mahagonny’ all have typical American names: Charlie, Billy, Bobby, and Jimmy, and the two girls they meet later are called Jessie and Bessie. The four men set off to seek Mahagonny. For them, it is the ultimate dream destination promising the fulfilment of all their desires, such as girls, alcohol, and dollars – in short, materialistic goals. Instead of finding a paradise, they experience how money, the practice of exploitative capitalism, and the lack of any morality and humanity destroy their lives. In the end, Mahagonny is demolished by a hurricane, an event which happened in 1926 in the United States. ‘God’ in person arrives, disillusioned and bitter, complaining about violent and immoral mankind, and sends them all to hell. Charlie, Billy, Bobby, and Jimmy reply that this punishment does not mean anything to them because they have already been in hell for a long time – hell on earth is a reality, so the threat of hell after life is meaningless. Human life was also taken during the stand-off between striking mineworkers and the South African police on 16 August 2012. Miners from the Lonmin Platinum Mine went on an illegal strike, equipped with bush knives and spears. They were striking because their living conditions had never improved; they still lived in muddy shacks with no running water or electricity. Wages were as low as R4,000 (South African Rands), equivalent to just over 200 Euro per month. By August 2012, the negotiations between Lonmin management and the mineworkers’ unions, which asked for a minimum wage of R12,500 per month, had failed. Police were sent in by the Government, and when the miners did not withdraw, the police shot at them. Thirty-four were killed, some by bullets in their backs while running away and trying to hide behind rocks. In the musical play ‘From Marikana to Mahagonny’, audiences in South Africa are confronted with a documentary about Marikana, followed by Brecht/Weill’s scenic cantata, highlighting the tragic parallels between the Mahagonny story and characters from 1927 America and the Lonmin workers in South Africa today, and showing that in 95 years, capitalism has not changed.

Keywords: alienation, Brecht/Weill, Mahagonny, Marikana/South Africa, musical theatre

Procedia PDF Downloads 98
425 Treatment of Onshore Petroleum Drill Cuttings via Soil Washing Process: Characterization and Optimal Conditions

Authors: T. Poyai, P. Painmanakul, N. Chawaloesphonsiya, P. Dhanasin, C. Getwech, P. Wattana

Abstract:

Drilling is a key activity in oil and gas exploration and production. Drilling always requires the use of drilling mud for lubricating the drill bit and controlling the subsurface pressure. As drilling proceeds, a considerable amount of cuttings, or rock fragments, is generated. In general, water or water-based mud (WBM) serves as the drilling fluid for the top-hole section. The cuttings generated from this section are non-hazardous and are normally applied as fill materials. On the other hand, drilling the bottom hole to the reservoir section uses synthetic-based mud (SBM), which is composed of synthetic oils. The bottom-hole cuttings (SBM cuttings) are regarded as a hazardous waste, in accordance with government regulations, due to the presence of hydrocarbons. Currently, the SBM cuttings are disposed of as an alternative fuel and raw material in cement kilns. Instead of burning, this work aims to propose an alternative for drill cuttings management with two ultimate goals: (1) reduction of hazardous waste volume; and (2) making use of the cleaned cuttings. Soil washing was selected as the major treatment process. The physicochemical properties of the drill cuttings were analyzed, such as size fraction, pH, moisture content, and hydrocarbons. The particle size of the cuttings was analyzed via the light scattering method. Oil present in the cuttings was quantified in terms of total petroleum hydrocarbon (TPH) through gas chromatography equipped with a flame ionization detector (GC-FID). Other components were measured by the standard methods for soil analysis. Effects of different washing agents, liquid-to-solid (L/S) ratio, washing time, mixing speed, rinse-to-solid (R/S) ratio, and rinsing time were also evaluated. It was found that the drill cuttings had an electrical conductivity of 3.84 dS/m, a pH of 9.1, and a moisture content of 7.5%. The TPH in the cuttings was in the diesel range, with a concentration ranging from 20,000 to 30,000 mg/kg dry cuttings. The majority of cuttings particles had a mean diameter of 50 µm, which represents the silt fraction. The results also suggested that a green solvent was the most promising washing agent for cuttings treatment regarding occupational health, safety, and environmental benefits. The optimal washing conditions were obtained at an L/S of 5, a washing time of 15 min, a mixing speed of 60 rpm, an R/S of 10, and a rinsing time of 1 min. After the washing process, three fractions, namely clean cuttings, spent solvent, and wastewater, were considered and provided with recommendations. Residual TPH of less than 5,000 mg/kg was detected in the clean cuttings. The treated cuttings can then be used for various purposes. The spent solvent had a calorific value higher than 3,000 cal/g and can be used as an alternative fuel; otherwise, the used solvent can be recovered using distillation or chromatography techniques. Finally, the generated wastewater can be combined with the produced water and simultaneously managed by re-injection into the reservoir.
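As a quick orientation for these figures, the sketch below computes the removal efficiency implied by an initial TPH of 20,000-30,000 mg/kg and a residual TPH below 5,000 mg/kg; it is a back-of-the-envelope illustration, not part of the authors' procedure.

```python
def removal_efficiency(tph_initial: float, tph_residual: float) -> float:
    """Fraction of total petroleum hydrocarbons removed; both values in mg/kg dry cuttings."""
    return 1.0 - tph_residual / tph_initial

# Using the concentrations reported in the abstract (residual taken at its 5,000 mg/kg ceiling):
for tph0 in (20_000, 30_000):
    print(f"initial {tph0} mg/kg -> at least {removal_efficiency(tph0, 5_000):.0%} removed")
# initial 20000 mg/kg -> at least 75% removed
# initial 30000 mg/kg -> at least 83% removed
```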

Keywords: drill cuttings, green solvent, soil washing, total petroleum hydrocarbon (TPH)

Procedia PDF Downloads 155
424 Law of the River and Indigenous Water Rights: Reassessing the International Legal Frameworks for Indigenous Rights and Water Justice

Authors: Sultana Afrin Nipa

Abstract:

Life on Earth cannot thrive or survive without water. Water is intimately tied to community, culture, spirituality, identity, socio-economic progress, security, self-determination, and livelihood. Thus, access to water is a United Nations recognized human right due to its significance in these realms. However, there is often conflict between those who regard water as a spiritual and cultural value and those who regard it as an economic value, with the former thus being threatened by economic development, corporate exploitation, government regulation, and increased privatization, highlighting the complex relationship between water and culture. The Colorado River basin is home to over 29 federally recognized tribal nations. To these tribes, the river holds cultural, economic, and spiritual significance, which often extends to deep human-to-non-human connections frequently precluded by Westphalian regulations and settler laws. Despite the recognition of access to rivers as a fundamental human right by the United Nations, tribal communities and their water rights have been historically disregarded through, inter alia, colonization and the dispossession of their resources. Elements of the Law of the River, such as the Winters Doctrine, the Bureau of Reclamation (BOR), and the Colorado River Compact, have shaped water governance among the stakeholders. However, tribal communities have been systematically excluded from these key agreements. While the Winters Doctrine acknowledged that tribes have the right to withdraw water from the rivers that pass through their reservations for self-sufficiency, the establishment of the BOR led to the construction of dams without tribal consultation, denying the Winters doctrine and violating these rights. The Colorado River Compact, which granted only 20% of the water to the tribes, diminishes the significance of international legal frameworks that prioritize indigenous self-determination and the free pursuit of socio-economic and cultural development. Denial of this basic water right is the denial of the 'recognition' of their sovereignty and self-determination, which calls into question the effectiveness of international law. This review assesses the international legal frameworks concerning indigenous rights and water justice and aims to pinpoint gaps hindering the effective recognition and protection of Indigenous water rights in the Colorado River Basin. The study draws on a combination of historical and qualitative data sets. The historical data encompass the case settlements provided by the Bureau of Reclamation (BOR), namely the notable cases of Native American water rights settlements in the lower Colorado basin related to Arizona from 1979 to 2008. This material serves to substantiate the context of promises made to the Indigenous people and establishes connections between existing entities. The qualitative data consist of observation of recorded meetings of the Central Arizona Project (CAP) to evaluate how the previously made promises are reflected now. The study finds a significant inconsistency in participation in the decision-making process and a lack of representation of Native American tribes in water resource management discussions. It highlights the ongoing challenges faced by Indigenous people in achieving their self-determination goals despite the legal arrangements.

Keywords: Colorado River, indigenous rights, Law of the River, water governance, water justice

Procedia PDF Downloads 37
423 Predicting OpenStreetMap Coverage by Means of Remote Sensing: The Case of Haiti

Authors: Ran Goldblatt, Nicholas Jones, Jennifer Mannix, Brad Bottoms

Abstract:

Accurate, complete, and up-to-date geospatial information is the foundation of successful disaster management. When the 2010 Haiti Earthquake struck, accurate and timely information on the distribution of critical infrastructure was essential for the disaster response community for effective search and rescue operations. Existing geospatial datasets such as Google Maps did not have comprehensive coverage of these features. In the days following the earthquake, many organizations released high-resolution satellite imagery, catalyzing a worldwide effort to map Haiti and support the recovery operations. OpenStreetMap (OSM), a collaborative project to create a free, editable map of the world, used this imagery to support volunteers in digitizing roads, buildings, and other features, creating the most detailed map of Haiti in existence in just a few weeks. However, large portions of the island are still not fully covered by OSM. There is an increasing need for a tool to automatically identify which areas in Haiti, as well as in other countries vulnerable to disasters, are not fully mapped. The objective of this project is to leverage different types of remote sensing measurements, together with machine learning approaches, in order to identify geographical areas where OSM coverage of building footprints is incomplete. Several remote sensing measures and derived products were assessed as potential predictors of OSM building footprint coverage, including: intensity of light emitted at night (based on VIIRS measurements), spectral indices derived from the Sentinel-2 satellite (normalized difference vegetation index (NDVI), normalized difference built-up index (NDBI), soil-adjusted vegetation index (SAVI), urban index (UI)), surface texture (based on Sentinel-1 SAR measurements), elevation, and slope. Additional remote sensing derived products, such as Hansen Global Forest Change, DLR's Global Urban Footprint (GUF), and World Settlement Footprint (WSF), were also evaluated as predictors, as well as the OSM street and road network (including junctions). A supervised approach with a random forest classifier predicted 89% of the variation in OSM building footprint area in a given cell. These predictions allowed for the identification of cells that are predicted to be covered but are not actually mapped yet. With these results, this methodology could be adapted to any location to assist with preparing for future disastrous events and ensure that essential geospatial information is available to support the response and recovery efforts during and following major disasters.
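A minimal sketch of the kind of per-cell prediction workflow described above, assuming the remote-sensing predictors have already been aggregated into a grid-cell table; the column names, the input file, and the use of scikit-learn's RandomForestRegressor are illustrative assumptions rather than the authors' actual pipeline.

```python
# Sketch: predict OSM building-footprint area per grid cell from remote-sensing features.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

FEATURES = ["viirs_ntl", "ndvi", "ndbi", "savi", "ui",
            "sar_texture", "elevation", "slope", "osm_road_junctions"]

cells = pd.read_csv("haiti_grid_cells.csv")           # hypothetical per-cell feature table
X, y = cells[FEATURES], cells["osm_building_area_m2"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out cells:", r2_score(y_test, model.predict(X_test)))

# Cells predicted to contain substantial building area but with little mapped
# footprint are candidates for incomplete OSM coverage.
cells["predicted_area_m2"] = model.predict(X)
under_mapped = cells[(cells["predicted_area_m2"] > 1_000) &
                     (cells["osm_building_area_m2"] < 100)]
```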

Keywords: disaster management, Haiti, machine learning, OpenStreetMap, remote sensing

Procedia PDF Downloads 125
422 Fire Safe Medical Oxygen Delivery for Aerospace Environments

Authors: M. A. Rahman, A. T. Ohta, H. V. Trinh, J. Hyvl

Abstract:

Atmospheric pressure and oxygen (O2) concentration are critical life support parameters for human-occupied aerospace vehicles and habitats. Various medical conditions may require medical O2; for example, the American Medical Association has determined that commercial air travel exposes passengers to altitude-related hypoxia and gas expansion. This may cause some passengers to experience significant symptoms and medical complications during the flight, requiring supplemental medical-grade O2 to maintain adequate tissue oxygenation and prevent hypoxemic complications. Although supplemental medical-grade O2 is a successful lifesaver for respiratory and cardiac failure, O2-enriched exhaled air can contain more than 95% O2, increasing the likelihood of a fire. In an aerospace environment, a localized high-concentration O2 bubble forms around a patient being treated for hypoxia, increasing the cabin O2 beyond the safe limit. To address this problem, this work describes a medical O2 delivery system that can reduce the O2 concentration of patient-exhaled O2-rich air to safe levels while maintaining the prescribed O2 administration to the patient. The O2 delivery system is designed to be part of the medical O2 kit. The system uses cationic multimetallic cobalt complexes to reversibly, selectively, and stoichiometrically chemisorb O2 from the exhaled air. An air-release sub-system monitors the exhaled air, and as soon as the O2 percentage falls below 21%, the air is released to the room air. The O2-enriched exhaled air is channeled through a layer of porous, thin-film heaters coated with the cobalt complex. The complex absorbs O2, and when saturated, the complex is heated to 100°C using the thin-film heater. Upon heating, the complex desorbs O2 and is once again ready to absorb or remove the excess O2 from exhaled air. The O2 absorption is a sub-second process, and desorption is a multi-second process. While heating at 0.685°C/sec, the complex desorbs ~90% of its O2 in 110 sec. These fast reaction times mean that a simultaneous absorb/desorb process in the O2 delivery system will create continuous absorption of O2. Moreover, the complex can concentrate O2 by a factor of 160 times that in air and desorb over 90% of the O2 at 100°C. Over 12 cycles of thermogravimetric measurement, less than a 0.1% decrease in the reversibility of O2 uptake was observed. One kilogram of the complex can desorb over 20 L of O2, so simultaneous O2 desorption by 0.5 kg of complex and absorption by another 0.5 kg can potentially remove 9 L/min of O2 (~90% desorbed at 100°C) from exhaled air continuously. The complex was synthesized and characterized for reversible O2 absorption and efficacy. The complex changes its color from dark brown to light gray after O2 desorption. In addition to thermogravimetric analysis, the O2 absorption/desorption cycle was characterized using optical imaging, showing stable color changes over ten cycles. The complex was also tested at room temperature in a low-O2 environment in its O2-desorbed state and was observed to hold the deoxygenated state under these conditions. The results show the feasibility of using the complex for reversible O2 absorption in the proposed fire-safe medical O2 delivery system.

Keywords: fire risk, medical oxygen, oxygen removal, reversible absorption

Procedia PDF Downloads 104
421 Advancing the Analysis of Physical Activity Behaviour in Diverse, Rapidly Evolving Populations: Using Unsupervised Machine Learning to Segment and Cluster Accelerometer Data

Authors: Christopher Thornton, Niina Kolehmainen, Kianoush Nazarpour

Abstract:

Background: Accelerometers are widely used to measure physical activity behaviour, including in children. The traditional method for processing acceleration data uses cut points, relying on calibration studies that relate the quantity of acceleration to energy expenditure. As these relationships do not generalise across diverse populations, they must be parametrised for each subpopulation, including different age groups, which is costly and makes studies across diverse populations difficult. A data-driven approach that allows physical activity intensity states to emerge from the data under study, without relying on parameters derived from external populations, offers a new perspective on this problem and potentially improved results. We evaluated the data-driven approach in a diverse population with a range of rapidly evolving physical and mental capabilities, namely very young children (9-38 months old), where this new approach may be particularly appropriate. Methods: We applied an unsupervised machine learning approach (a hidden semi-Markov model, HSMM) to segment and cluster the accelerometer data recorded from 275 children with a diverse range of physical and cognitive abilities. The HSMM was configured to identify a maximum of six physical activity intensity states, and the output of the model was the time spent by each child in each of the states. For comparison, we also processed the accelerometer data using published cut points with available thresholds for the population. This provided us with time estimates for each child’s sedentary (SED), light physical activity (LPA), and moderate-to-vigorous physical activity (MVPA). Data on the children’s physical and cognitive abilities were collected using the Paediatric Evaluation of Disability Inventory (PEDI-CAT). Results: The HSMM identified two inactive states (INS, comparable to SED), two lightly active, long-duration states (LAS, comparable to LPA), and two short-duration, high-intensity states (HIS, comparable to MVPA). Overall, the children spent on average 237/392 minutes per day in INS/SED, 211/129 minutes per day in LAS/LPA, and 178/168 minutes in HIS/MVPA. We found that INS overlapped with 53% of SED, LAS overlapped with 37% of LPA, and HIS overlapped with 60% of MVPA. We also looked at the correlation between the time spent by a child in either HIS or MVPA and their physical and cognitive abilities. We found that HIS was more strongly correlated with physical mobility (R² HIS = 0.50, R² MVPA = 0.28), cognitive ability (R² HIS = 0.31, R² MVPA = 0.15), and age (R² HIS = 0.15, R² MVPA = 0.09), indicating increased sensitivity to key attributes associated with a child’s mobility. Conclusion: An unsupervised machine learning technique can segment and cluster accelerometer data according to the intensity of movement at a given time. It provides a potentially more sensitive, appropriate, and cost-effective approach to analysing physical activity behaviour in diverse populations, compared to the current cut points approach. This, in turn, supports research that is more inclusive across diverse populations.
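The segmentation step can be prototyped roughly as below. Note that the hmmlearn library only provides ordinary hidden Markov models, so GaussianHMM is used here as a simplified stand-in for the hidden semi-Markov model; the six states, the feature layout, and the epoch length are assumptions for illustration only.

```python
# Sketch: cluster accelerometer epochs into six data-driven intensity states.
import numpy as np
from hmmlearn.hmm import GaussianHMM

# X: per-epoch acceleration features stacked for all children, shape (n_epochs, n_features);
# lengths: number of epochs contributed by each child (keeps sequences separate).
X = np.load("acc_features.npy")                 # hypothetical preprocessed input
lengths = np.load("acc_lengths.npy").tolist()

model = GaussianHMM(n_components=6, covariance_type="diag", n_iter=100, random_state=0)
model.fit(X, lengths)
states = model.predict(X, lengths)

# Minutes spent in each state per child, assuming fixed-length epochs (here 5 s).
EPOCH_SECONDS = 5
start = 0
for child_id, n in enumerate(lengths):
    counts = np.bincount(states[start:start + n], minlength=6)
    print(child_id, counts * EPOCH_SECONDS / 60)
    start += n
```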

Keywords: physical activity, machine learning, under 5s, disability, accelerometer

Procedia PDF Downloads 212
420 Imaging Spectrum of Central Nervous System Tuberculosis on Magnetic Resonance Imaging: Correlation with Clinical and Microbiological Results

Authors: Vasundhara Arora, Anupam Jhobta, Suresh Thakur, Sanjiv Sharma

Abstract:

Aims and Objectives: Intracranial tuberculosis (TB) is one of the most devastating manifestations of TB and a challenging public health issue of considerable importance and magnitude the world over. This study elaborates on the imaging spectrum of neurotuberculosis on magnetic resonance imaging (MRI) in 29 clinically suspected cases from a tertiary care hospital. Materials and Methods: A prospective hospital-based evaluation of the MR imaging features of neurotuberculosis in 29 clinically suspected cases was carried out in the Department of Radio-diagnosis, Indira Gandhi Medical Hospital, from July 2017 to August 2018. MR images were obtained on a 1.5 T Magnetom Avanto machine and were analyzed to identify any abnormal meningeal enhancement or parenchymal lesions. Microbiological and biochemical CSF analysis was performed in radiologically suspected cases, and the results were compared with the imaging data. Clinical follow-up of the patients started on anti-tuberculous treatment was done to evaluate the response to treatment and clinical outcome. Results: The age range of patients in the study was 1 year to 73 years. The mean age at presentation was 11.5 years. No significant difference in the distribution of cerebral tuberculosis was noted between the two genders. The imaging findings of neurotuberculosis were varied and non-specific, ranging from leptomeningeal enhancement and cerebritis to space-occupying lesions such as tuberculomas and tubercular abscesses. Complications presenting as hydrocephalus (n=7) and infarcts (n=9) were noted in a few of these patients. All 29 patients showed radiological suspicion of CNS tuberculosis, with meningitis alone observed in 11 cases, tuberculomas alone in 4 cases, and meningitis with parenchymal tuberculomas in 11 cases. A tubercular abscess and cerebritis were observed in one case each. Tuberculous arachnoiditis was noted in one patient. GeneXpert positivity was obtained in 11 of the 29 radiologically suspected patients; none of the patients showed culture positivity. The meningeal form of the disease alone showed the highest GeneXpert positivity rate (n=5), followed by the combination of the meningeal and parenchymal forms of the disease (n=4). The parenchymal manifestation of the disease alone showed the lowest positivity rate (n=3) with GeneXpert testing. All 29 patients were started on anti-tubercular treatment based on radiological suspicion of the disease, with clinical improvement observed in 27 treated patients. Conclusions: In our study, a higher incidence of neurotuberculosis was noted in the paediatric population, with a predominance of the meningeal form of the disease. GeneXpert positivity was low due to the paucibacillary nature of cerebrospinal fluid (CSF), with even lower positivity of CSF samples in the parenchymal form of the disease. MRI showed high accuracy in detecting CNS lesions in neurotuberculosis. Hence, it can be concluded that MRI plays a crucial role in diagnosis because of its inherent sensitivity and specificity and is an indispensable imaging modality. It caters to the need for early diagnosis owing to the poor sensitivity of microbiological tests, more so in the parenchymal manifestation of the disease.

Keywords: neurotuberculosis, tubercular abscess, tuberculoma, tuberculous meningitis

Procedia PDF Downloads 173
419 The Late Bronze Age Archeometallurgy of Copper in Mountainous Colchis (Lechkhumi), Georgia

Authors: Nino Sulava, Brian Gilmour, Nana Rezesidze, Tamar Beridze, Rusudan Chagelishvili

Abstract:

Studies of ancient metallurgy are a subject of worldwide current interest. Georgia, with its famous early metalworking traditions, is one of the central parts of the Caucasus region. The aim of the present study is to introduce the results of archaeometallurgical investigations being undertaken in the mountain region of Colchis, Lechkhumi (the Tsageri Municipality of western Georgia), and to establish their place in the existing archaeological context. Lechkhumi (one of the historic provinces of Georgia, known from Georgian, Greek, Byzantine and Armenian written sources as Lechkhumi/Skvimnia/Takveri) is part of the Colchian mountain area. It is one of the important but little-known centres of prehistoric metallurgy in the Caucasian region and of the Colchian Bronze Age culture. Reconnaissance archaeological expeditions (2011-2015) revealed significant prehistoric metallurgical sites in Lechkhumi. Sites located in the vicinity of Dogurashi village (Tsageri Municipality) have become the target area for archaeological excavations. During archaeological excavations conducted in 2016-2018, two archaeometallurgical sites – Dogurashi I and Dogurashi II – were investigated. As a result of an interdisciplinary (archaeological, geological and geophysical) survey, it has been established that at both prehistoric Dogurashi mountain sites it was copper that was being smelted, and the ore sources are likely to be of local origin. Radiocarbon dating results confirm they were operating between about the 13th and 9th centuries BC. More recently, another similar site has been identified in this area (Dogurashi III), and this is about to undergo detailed investigation. Other prehistoric metallurgical sites are being located and investigated in the Lechkhumi region, as well as chance archaeological finds (often in hoards) – copper ingots, metallurgical production debris, slag, fragments of crucibles, tuyeres (air delivery pipes), furnace wall fragments and other related waste debris. Other chance finds being investigated are the many copper, bronze and (some) iron artefacts that have been found over many years. These include copper ingots and copper, bronze and iron artefacts such as tools, jewelry, and decorative items. They show the important but little-known or little-understood role of Lechkhumi in the Late Bronze Age culture of Colchis. It would seem that mining and metallurgical manufacture formed part of the local agricultural yearly lifecycle. Colchian ceramics have been found, along with evidence for artefact production: small stone mould fragments and encrusted material from the casting of a fylfot (swastika) form of Colchian bronze buckle, found in the vicinity of the early settlements of Tskheta and Dekhviri. Excavation and investigation of previously unknown archaeometallurgical sites in Lechkhumi will contribute significantly to the knowledge and understanding of prehistoric Colchian metallurgy in western Georgia (Adjara, Guria, Samegrelo, and Svaneti) and will reveal the importance of this region in the study of ancient metallurgy in Georgia and the Caucasus. Acknowledgment: This work has been supported by the Shota Rustaveli National Science Foundation (grant FR # 217128).

Keywords: archaeometallurgy, Colchis, copper, Lechkhumi

Procedia PDF Downloads 136
418 Navigating the Digital Landscape: An Ethnographic Content Analysis of Black Youth's Encounters with Racially Traumatic Content on Social Media

Authors: Tiera Tanksley, Amanda M. McLeroy

Abstract:

The advent of technology and social media has ushered in a new era of communication, providing platforms for news dissemination and cause advocacy. However, this digital landscape has also exposed a distressing phenomenon termed "Black death," or trauma porn. This paper delves into the profound effects of repeated exposure to traumatic content on Black youth via social media, exploring the psychological impacts and the potential reinforcement of stereotypes. Employing Critical Race Technology Theory (CRTT), the study sheds light on algorithmic anti-blackness and its influence on Black youth's lives and educational experiences. Through ethnographic content analysis, the research investigates common manifestations of Black death encountered online by Black adolescents. Findings unveil distressing viral videos, traumatic images, racial slurs, and hate speech perpetuating stereotypes. However, amidst the distress, the study identifies narratives of activism and social justice on social media platforms, empowering Black youth to engage in positive change. Coping mechanisms and community support emerge as significant factors in navigating the digital landscape. The study underscores the need for comprehensive interventions and policies informed by evidence-based research. By addressing algorithmic anti-blackness and promoting digital resilience, the paper advocates for a more empathetic and inclusive online environment. Understanding coping mechanisms and community support becomes imperative for fostering mental well-being among Black adolescents navigating social media. In education, the implications are substantial. Acknowledging the impact of Black death content, educators play a pivotal role in promoting media literacy and digital resilience. By creating inclusive and safe online spaces, educators can mitigate negative effects and encourage open discussions about traumatic content. The application of CRTT in educational technology emphasizes dismantling systemic biases and promoting equity. In conclusion, this study calls for educators to be cognizant of the impact of Black death content on social media. By prioritizing media literacy, fostering digital resilience, and advocating for unbiased technologies, educators contribute to an inclusive and just educational environment for all students, irrespective of their race or background. Addressing challenges related to Black death content proactively ensures the well-being and mental health of Black adolescents, fostering an empathetic and inclusive digital space.

Keywords: algorithmic anti-Blackness, digital resilience, media literacy, traumatic content

Procedia PDF Downloads 59
417 Reducing the Computational Cost of a Two-way Coupling CFD-FEA Model via a Multi-scale Approach for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Kevin Tinkham, Ella Quigley

Abstract:

Structural integrity for cladding products is a key performance parameter, especially concerning fire performance. Cladding products such as PIR-based sandwich panels are tested rigorously, in line with industrial standards. Physical fire tests are necessary to ensure the customer's safety but can give little information about critical behaviours that can help develop new materials. Numerical modelling is a tool that can help investigate a fire's behaviour further by replicating the fire test. However, fire is an interdisciplinary problem, as it is a chemical reaction that behaves fluidly and impacts structural integrity. An analysis using Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) is needed to capture all aspects of a fire performance test. One method is a two-way coupling analysis that imports the updated changes in thermal data, due to the fire's behaviour, to the FEA solver in a series of iterations. In light of our recent work with Tata Steel U.K. using a two-way coupling methodology to determine fire performance, it has been shown that a program called FDS-2-Abaqus can make predictions of a BS 476-22 furnace test with a degree of accuracy. The test demonstrated the fire performance of Tata Steel U.K.'s Trisomet product, a polyisocyanurate (PIR) based sandwich panel used for cladding. Previous works demonstrated the limitations of the current version of the program, the main limitation being the computational cost of modelling three Trisomet panels, totalling an area of 9 m². The computational cost increases substantially with the intention to scale up to an LPS 1181-1 test, which includes a total panel surface area of 200 m². The FDS-2-Abaqus program is developed further within this paper to overcome this obstacle and better accommodate Tata Steel U.K. PIR sandwich panels. The new developments aim to reduce the computational cost and the error margin compared to experimental data. One avenue explored is a multi-scale approach in the form of Reduced Order Modelling (ROM). The approach allows the user to include refined details of the sandwich panels, such as the overlapping joints, without a computationally costly mesh size. Comparative studies will be made between the new implementations and the previous study completed using the original FDS-2-Abaqus program. Validation of the study will come from physical experiments in line with governing body standards such as BS 476-22 and LPS 1181-1. The physical experimental data include the panels' gas and surface temperatures and mechanical deformation. Conclusions are drawn, noting the impact of the new implementations and discussing the feasibility of scaling up further to a whole warehouse.
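One common way to build such a reduced order model is proper orthogonal decomposition (POD) of thermal snapshots, sketched below with synthetic data; this is a generic illustration of the ROM idea, not the specific implementation inside FDS-2-Abaqus.

```python
# Sketch: build a reduced basis from thermal snapshots via POD (truncated SVD).
import numpy as np

# Snapshot matrix S: one column per time step, rows are nodal temperatures.
# A low-rank synthetic field plus noise is used here purely for illustration.
n_nodes, n_snapshots = 5000, 200
rng = np.random.default_rng(0)
modes_true = rng.random((n_nodes, 10))
amplitudes = rng.random((10, n_snapshots))
S = modes_true @ amplitudes + 1e-3 * rng.random((n_nodes, n_snapshots))

# POD modes are the left singular vectors; keep enough modes to capture 99.9% of the energy.
U, sigma, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sigma**2) / np.sum(sigma**2)
r = int(np.searchsorted(energy, 0.999)) + 1
basis = U[:, :r]                                # reduced basis, shape (n_nodes, r)

# A thermal field is then represented by r coefficients instead of n_nodes values.
coeffs = basis.T @ S[:, 0]                      # project the first snapshot
reconstruction = basis @ coeffs
rel_error = np.linalg.norm(S[:, 0] - reconstruction) / np.linalg.norm(S[:, 0])
print(f"kept {r} of {n_snapshots} modes, relative reconstruction error {rel_error:.2e}")
```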

Keywords: fire testing, numerical coupling, sandwich panels, thermo fluids

Procedia PDF Downloads 79
416 The Sustained Utility of Japan's Human Security Policy

Authors: Maria Thaemar Tana

Abstract:

The paper examines the policy and practice of Japan’s human security. Specifically, it asks the question: How does Japan’s shift towards a more proactive defence posture affect the place of human security in its foreign policy agenda? Corollary to this, how is Japan sustaining its human security policy? The objective of this research is to understand how Japan, chiefly through the Ministry of Foreign Affairs (MOFA) and JICA (Japan International Cooperation Agency), sustains the concept of human security as a policy framework. In addition, the paper also aims to show how and why Japan continues to include the concept in its overall foreign policy agenda. In light of the recent developments in Japan’s security policy, which essentially result from the changing security environment, human security appears to be gradually losing relevance. The paper, however, argues that despite the strategic challenges Japan faced and is facing, as well as the apparent decline of its economic diplomacy, human security remains an area of critical importance for Japanese foreign policy. In fact, as Japan becomes more proactive in its international affairs, the strategic value of human security also increases. Human security was initially envisioned to help Japan compensate for its weaknesses in the areas of traditional security, but as Japan moves closer to a more activist foreign policy, the soft policy of human security complements its hard security policies. Using the framework of neoclassical realism (NCR), the paper recognizes that policy-making is essentially a convergence of incentives and constraints at the international and domestic levels. The theory posits that there is no perfect 'transmission belt' linking material power on the one hand and actual foreign policy on the other. State behavior is influenced by both international- and domestic-level variables, but while systemic pressures and incentives determine the general direction of foreign policy, they are not strong enough to determine the exact details of state conduct. Internal factors such as leaders’ perceptions, domestic institutions, and domestic norms serve as intervening variables between the international system and foreign policy. Thus, applied to this study, Japan’s sustained utilization of human security as a foreign policy instrument (the dependent variable) is essentially a result of systemic pressures (independent variables, acting indirectly) and domestic processes (intervening variables, acting directly). Two cases of Japan’s human security practice in two regions are examined in two time periods: Iraq in the Middle East (2001-2010) and South Sudan in Africa (2011-2017). The cases show that despite the different motives behind Japan’s decision to participate in these international peacekeeping and peace-building operations, human security continues to be incorporated in both rhetoric and practice, thus demonstrating that it was and remains an important diplomatic tool. Different variables at the international and domestic levels will be examined to understand how the interaction among them results in changes and continuities in Japan’s human security policy.

Keywords: human security, foreign policy, neoclassical realism, peace-building

Procedia PDF Downloads 135
415 Regularizing Software for Aerosol Particles

Authors: Christine Böckmann, Julia Rosemann

Abstract:

We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. The single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into highly and weakly absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of using truncated singular value decomposition as the regularization method. This method was adapted to work for the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique, since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors are most often amplified hugely during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in a next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration; here, the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel-processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 µm, which does not only cover the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts of 1.5 and 1.6, the accuracy limit of ±0.03 is achieved in all modes. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies.
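A minimal illustration of the truncated-SVD idea underlying the retrieval, shown on a generic discrete ill-posed system rather than the actual lidar kernel; the matrix and noise level are assumptions chosen only to demonstrate the role of the truncation level as a regularization parameter.

```python
# Sketch: truncated singular value decomposition (TSVD) for a discrete ill-posed problem A x = b.
import numpy as np

def tsvd_solve(A: np.ndarray, b: np.ndarray, k: int) -> np.ndarray:
    """Solve A x ≈ b keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Dropping the small singular values suppresses the noise amplification
    # that makes the naive least-squares solution useless.
    coeffs = (U[:, :k].T @ b) / s[:k]
    return Vt[:k].T @ coeffs

# Tiny synthetic example: an ill-conditioned kernel and slightly noisy data.
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.0, 1.0, 40), 12, increasing=True)   # badly conditioned
x_true = rng.normal(size=12)
b = A @ x_true + 1e-4 * rng.normal(size=40)

for k in (4, 8, 12):
    err = np.linalg.norm(tsvd_solve(A, b, k) - x_true)
    print(f"k = {k:2d}  solution error = {err:.3e}")
# The truncation level k plays the role of the regularization parameter.
```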

Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization

Procedia PDF Downloads 343
414 The Use of Emerging Technologies in Higher Education Institutions: A Case of Nelson Mandela University, South Africa

Authors: Ayanda P. Deliwe, Storm B. Watson

Abstract:

The COVID-19 pandemic has disrupted the established practices of higher education institutions (HEIs). Most higher education institutions worldwide had to shift from traditional face-to-face to online learning. The online environment and new online tools are disrupting the way in which higher education is presented. Furthermore, the structures of higher education institutions have been impacted by rapid advancements in information and communication technologies. Emerging technologies should not be viewed in a negative light because, as opposed to the traditional curriculum that worked to create productive and efficient researchers, emerging technologies encourage creativity and innovation. Therefore, using technology together with traditional means will enhance teaching and learning. Emerging technologies in higher education not only change the experience of students, lecturers, and the content, but are also influencing the attraction and retention of students. Higher education institutions are under immense pressure because not only are they competing locally and nationally, but emerging technologies also expand the competition internationally. Emerging technologies have eliminated border barriers, allowing students to study in the country of their choice regardless of where they are in the world. Higher education institutions are becoming indifferent as technology finds its way into the lecture room day by day. Academics need to utilise the technology at their disposal if they want to get through to their students. Academics are now competing for students' attention with social media platforms such as WhatsApp, Snapchat, Instagram, Facebook, TikTok, and others. This poses a significant challenge to higher education institutions. It is, therefore, critical to pay attention to emerging technologies in order to see how they can be incorporated into the classroom to improve educational quality while remaining relevant to the world of work. This study aims to understand how emerging technologies have been utilised at Nelson Mandela University in presenting teaching and learning activities since April 2020. The primary objective of this study is to analyse how academics are incorporating emerging technologies in their teaching and learning activities. This objective was addressed by conducting a literature review clarifying and conceptualising the emerging technologies being utilised by higher education institutions and reviewing and analysing their use; it will be investigated further through an empirical analysis of the use of emerging technologies at Nelson Mandela University. Findings from the literature review revealed that emerging technology is impacting several key areas in higher education institutions, such as the attraction and retention of students, the enhancement of teaching and learning, increased global competition, the elimination of border barriers, and the highlighting of the digital divide. The literature review further identified that learning management systems, open educational resources, learning analytics, and artificial intelligence are the most prevalent emerging technologies being used in higher education institutions. The identified emerging technologies will be further analysed through an empirical analysis to identify how they are being utilised at Nelson Mandela University.

Keywords: artificial intelligence, emerging technologies, learning analytics, learner management systems, open educational resources

Procedia PDF Downloads 69
413 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing

Authors: Rowan P. Martnishn

Abstract:

During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables – historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours’ worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to both clean and format all the data. First, a basic spelling and grammar check was applied, as well as a Python script for normalized formatting, which used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, in which words frequently mistranscribed during transcription were recorded and replaced throughout all other documents. Then, to remove all banter and side comments, the transcripts were spliced into paragraphs (separated by change in speaker), and all paragraphs with fewer than 300 characters were removed. Secondly, a keyword extractor, a form of natural language processing in which significant words in a document are selected, was run on each paragraph for all interviews. Every proper noun was put into a data structure corresponding to its respective interview. From there, a Bidirectional and Auto-Regressive Transformer (B.A.R.T.) summary model was applied to each paragraph that included any of the proper nouns selected from the interview. At this stage, the information to review had been reduced from about 60 hours’ worth of data to 20. The data was further processed through light, manual observation – any summaries which proved to fit the criteria of the proposed deliverable were selected, as well as their locations within the document. This narrowed the data down to about 5 hours’ worth of processing. The qualitative researchers were then able to find 8 more connections in addition to our previous 4, exceeding our minimum quota of 3 to satisfy the grant. Major findings of the study and the subsequent curation of this methodology raised a conceptual finding crucial to working with qualitative data of this magnitude. In the use of artificial intelligence, there is a general trade-off in a model between breadth of knowledge and specificity. If the model has too much knowledge, the user risks leaving out important data (too general). If the tool is too specific, it has not seen enough data to be useful. Thus, this methodology proposes a solution to this trade-off. The data is never altered outside of grammatical and spelling checks. Instead, the important information is marked, creating an indicator of where the significant data is without compromising its purity. Secondly, the data is chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data is harmed, and qualitative experts can go over the raw data instead of using highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form.
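A condensed sketch of the pipeline described above; the proper-noun heuristic, the file handling, and the choice of the facebook/bart-large-cnn checkpoint via the Hugging Face transformers library are assumptions standing in for the project's actual tooling.

```python
# Sketch: keep long speaker paragraphs, mark proper nouns, summarize only paragraphs that contain them.
import re
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def proper_nouns(paragraph: str) -> set:
    """Crude stand-in for a keyword extractor: capitalized tokens that do not start a sentence."""
    tokens = set(re.findall(r"\b[A-Z][a-z]{2,}\b", paragraph))
    sentence_starts = {s.split()[0] for s in re.split(r"[.!?]\s+", paragraph) if s.split()}
    return tokens - sentence_starts

def process_transcript(text: str, min_chars: int = 300):
    # Splice into speaker paragraphs and drop short banter, as in the methodology.
    paragraphs = [p.strip() for p in text.split("\n\n") if len(p.strip()) >= min_chars]
    results = []
    for i, p in enumerate(paragraphs):
        nouns = proper_nouns(p)
        if nouns:
            # Note: very long paragraphs may need to be truncated to the model's input limit.
            summary = summarizer(p, max_length=60, min_length=15, do_sample=False)
            results.append((i, sorted(nouns), summary[0]["summary_text"]))
    return results  # (paragraph index, keywords, summary) triples for manual review
```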

Keywords: B.A.R.T. model, keyword extractor, natural language processing, qualitative coding

Procedia PDF Downloads 31
412 Qualitative Research on German Household Practices to Ease the Risk of Poverty

Authors: Marie Boost

Abstract:

Despite activation policies, forced personal initiative to step out of unemployment and a generally prosperous economic situation, poverty and financial hardship play a crucial role in the daily lives of many families in Germany. In 2015, ~16 million persons (20.2% of the German population) were at risk of poverty or social exclusion; the research area, located in East Germany, had an unemployment rate of 13.3%. Despite this high number of people living in vulnerable households, we know little about how they manage to stabilize their lives or even overcome poverty – apart from relying solely on welfare state benefits or entering a stable, well-paid job. Most of them are struggling in precarious living circumstances, switching from one or several short-term, low-paid jobs into self-employment or unemployment, sometimes accompanied by welfare state benefits. Hence, insecurity and uncertain future expectations form a crucial part of their lives. Within the EU-funded project “RESCuE”, resilient practices of vulnerable households were investigated in nine European countries. Approximately 15 expert interviews with policy makers and representatives from welfare state agencies, NGOs and charity organizations, and 25 household interviews, were conducted in each country. The project aims to find out more about the chances and conditions of social resilience. The research is based on the triangulation of biographical narrative interviews, followed by participatory photo interviews, asking the household members to portray their typical everyday life. The presentation focuses on the explanatory strength of this mixed-methods approach in order to show the potential of household practices to overcome financial hardship. The methodological combination allows an in-depth analysis of the families’ and households’ everyday living circumstances, including their poverty and employment situation, whether formal or informal. Active household budgeting practices, such as saving and consumption practices, are based on subsistence or do-it-yourself work. Especially through the photo interviews, the importance of inherent cultural and tacit knowledge becomes obvious, as they picture typical practices like cultivating and gathering fruit and vegetables or going fishing. One of the central findings is the multiple purposes of these practices: they help ease the financial burden through reduced consumption and strengthen social ties, as they are mostly conducted with close friends or family members. In general, non-commodified practices are found to be re-commodified and to help ease financial hardship, e.g. through the use of commons, barter trade or simple mutual exchange (gift exchange); such practices can substitute for external purchases and reduce expenses or even generate a small income. Mixing different income sources is found to be the most likely way out of poverty within the context of a precarious labor market. But these resilient household practices take their toll, as they are highly preconditioned and many people risk overstressing themselves. Thus, the potentials and risks of resilient household practices are reflected upon in the presentation.

Keywords: consumption practices, labor market, qualitative research, resilience

Procedia PDF Downloads 221
411 Experimental and Numerical Investigations on the Vulnerability of Flying Structures to High-Energy Laser Irradiations

Authors: Vadim Allheily, Rudiger Schmitt, Lionel Merlat, Gildas L'Hostis

Abstract:

Inflight devices are nowadays major actors in both military and civilian landscapes. Missiles, mortars, rockets and, over the last decade, drones have become increasingly sophisticated, and it is now a priority to develop ever more efficient defensive systems against all these potential threats. In this frame, recent High Energy Laser (HEL) weapon prototypes have demonstrated extremely good operational abilities to shoot down flying targets several kilometers away within seconds. Whereas test outcomes are promising from both experimental and cost-related perspectives, the deterioration process still needs to be explored in order to closely predict the effects of a high-energy laser irradiation on typical structures, leading finally to an effective design of laser sources and protective countermeasures. Laser-matter interaction research has a long history of more than 40 years at the French-German Research Institute (ISL). Those studies were tied to laser source development in the mid-60s, mainly for specific metrology of fast phenomena. Nowadays, laser-matter interaction can be viewed as the terminal ballistics of conventional weapons, with the unique capability of laser beams to carry energy at light velocity over large ranges. In recent years, a strong focus was placed at ISL on the interaction process of laser radiation with metal targets such as artillery shells. Due to the absorbed laser radiation and the resulting heating process, an encased explosive charge can be initiated, resulting in deflagration or even detonation of the projectile in flight. Drones and Unmanned Air Vehicles (UAVs) are of utmost interest in modern warfare. Those aerial systems are usually made of polymer-based composite materials, whose complexity involves new scientific challenges. Aside from this main laser-matter interaction activity, a lot of experimental and numerical knowledge has been gathered at ISL within domains like spectrometry, thermodynamics and mechanics. Techniques and devices were developed to study each aspect of this topic separately; optical characterization, thermal investigations, chemical reaction analysis and mechanical examinations are carried out to estimate essential key values. Results from these diverse tasks are then incorporated into analytic or finite-element (FE) numerical models elaborated, for example, to predict thermal repercussions on explosive charges or mechanical failures of structures. These simulations highlight the influence of each phenomenon during the laser irradiation and forecast experimental observations with good accuracy.
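
As a rough illustration of the kind of thermal modelling mentioned above, the sketch below solves one-dimensional transient heat conduction in a metal slab under an absorbed laser flux with an explicit finite-difference scheme. The aluminium-like material data, the flux value and the slab geometry are placeholder assumptions, not values from the ISL studies.

```python
# Minimal 1-D explicit finite-difference sketch of laser surface heating.
# Material data (aluminium-like) and the absorbed flux are assumed values.
import numpy as np

k, rho, cp = 160.0, 2700.0, 900.0          # W/m/K, kg/m^3, J/kg/K
alpha = k / (rho * cp)                     # thermal diffusivity, m^2/s
q_abs = 1.0e7                              # absorbed laser flux, W/m^2 (assumed)

L, nx = 5e-3, 200                          # 5 mm slab, 200 nodes
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha                   # stable explicit time step
T = np.full(nx, 300.0)                     # initial temperature, K

t_end, t = 0.5, 0.0                        # simulate 0.5 s of irradiation
while t < t_end:
    Tn = T.copy()
    # interior nodes: classic FTCS update
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    # irradiated front face: absorbed-flux boundary condition (ghost node)
    T[0] = Tn[0] + 2 * alpha * dt / dx**2 * (Tn[1] - Tn[0] + q_abs * dx / k)
    # rear face kept adiabatic
    T[-1] = T[-2]
    t += dt

print(f"front-face temperature after {t_end} s: {T[0]:.0f} K")
```

A full FE treatment would add temperature-dependent properties, in-depth absorption and 3-D beam profiles, but the energy-balance idea is the same.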

Keywords: composite materials, countermeasure, experimental work, high-energy laser, laser-matter interaction, modeling

Procedia PDF Downloads 263
410 The Practices Perspective in Communication, Consumer and Cultural Studies: A Post-Heideggerian Narrative

Authors: Tony Wilson

Abstract:

This paper sets out a practices perspective or practices theory, which has become pervasive from business to sociological studies. In doing so, it locates the perspective historically (in the work of the philosopher Heidegger) and provides a contemporary illustration of its application to communication, consumer and cultural studies as central to this conference theme. The structured account of practices (as articulated in eight ‘axioms’) presented towards the conclusion of this paper is an initial statement - planned to encourage further detailed qualitative and systematic research in areas of interest to the conference. Practice theories of equipped and situated construction of participatory meaning (as in media and marketing consumption) are frequently characterized as lacking common ground, or core principles. This paper explores whether, by retracing a journey to earlier philosophical underwriting, a shared territory promoting new research can be located in current philosophical hermeneutics. Moreover, through returning to hermeneutic first principles, the paper shows that a series of spatio-temporal metaphors becomes available - appropriate to analyzing communication as a process across the disciplines in which it is considered. Thus one can argue, for instance, that media users engage (enter) digital text from their diverse ‘horizons of expectation’, in a productive enlarging ‘fusion’ of horizons of understanding, thereby ‘projecting’ a new narrative, integrated in a ‘hermeneutic circle’ of meaning. A politics of communication studies may contest a horizon of understanding - so engaging in critical ‘distancing’. Marketing’s consumers can occupy particular places on a horizon of understanding. Media users pass over borders of changing, revised perspectives. Practices research can now not only be discerned in multiple disciplines but equally crosses disciplines. The ubiquitous practice of media use by managers and visitors in a shopping mall - the mediatization of malls - calls for investigation not just with media studies expertise, but also from an interpretive marketing perspective. How have mediated identities of person or place been changed? Emphasizing understanding of entities in a material environment as ‘equipment’, practices theory enables the quantitative correlation of use and demographic variables as a ‘Zeug Score’. Human behavior is fundamentally habitual - shaped by its tacit assumptions - occasionally interrupted by reflection. Practices theory acknowledges such action to be minimally monitored yet nonetheless considers it as constructing narrative. Thus presented in research, ‘storied’ behavior can then be seen to be (in)formed and shaped by a shifting hierarchy of ‘horizons’ or of perspectives - from habituated to reflective - rather than a single seamless narrative. Taking a communication practices perspective here avoids conflating tacit, transformative and theoretical understanding in research. In short, a historically grounded and unifying statement of contemporary practices theory will enhance its potential as a tool in communication, consumer and cultural research, landscaping interpretative horizons of human behaviour through widely exploring the culturally (in)formed narratives equipping and incorporated (reflectively, unreflectively) in people’s everyday lives.

Keywords: communication, consumer, cultural practices, hermeneutics

Procedia PDF Downloads 269
409 The Importance of Fruit Trees for Prescribed Burning in a South American Savanna

Authors: Rodrigo M. Falleiro, Joaquim P. L. Parime, Luciano C. Santos, Rodrigo D. Silva

Abstract:

The Cerrado biome is the most biodiverse savanna on the planet. Located in central Brazil, its preservation is seriously threatened by the advance of intensive agriculture and livestock. Conservation Units and Indigenous Lands are increasingly isolated and subject to mega wildfires. Among the characteristics of this savanna, we highlight the high rate of primary biomass production and the reduced occurrence of large grazing animals. In this biome, the predominant fauna is more dependent on the fruits produced by dicotyledonous species than in other tropical savannas. Fire is a key element in the balance between mono- and dicotyledons, or between the arboreal and herbaceous strata. Therefore, applying fire regimes that maintain the balance between these strata without harming fruit production is essential to conservation strategies for the Cerrado's biodiversity. Recently, Integrated Fire Management has started to be implemented in Brazilian protected areas. As a result, management with prescribed burns has increasingly replaced strategies based on fire exclusion, which in practice have resulted in large wildfires with highly negative impacts on fruit and fauna production. In the Indigenous Lands, these burns were carried out respecting traditional knowledge. The indigenous people showed great concern about the effects of fire on fruit plants and important animals. They recommended that the burns be carried out between April and May, as this would result in a greater production of edible fruits ("fruiting burning"). In other tropical savannas in the southern hemisphere, the preferred period tends to be later, in the middle of the dry season, when the grasses are dormant (June to August). However, in the Cerrado, this late period coincides with the flowering and sprouting of several important fruit species. To verify the best burning season, the present work evaluated the effects of fire on flowering and fruit production of Byrsonima sp., Mouriri pusa, Caryocar brasiliense, Anacardium occidentale, Pouteria ramiflora, Hancornia speciosa, Byrsonima verbascifolia, Anacardium humile and Talisia subalbens. The evaluations were carried out in the field, covering 31 Indigenous Lands spanning 104,241.18 km², where 3,386 prescribed burns were carried out between 2015 and 2018. The burning periods were divided into early (carried out during the rainy season), modal or "fruiting" (carried out during the transition between seasons) and late (carried out in the middle of the dry season, when the grasses are dormant). The results corroborate the traditional knowledge, demonstrating that the modal burns result in higher rates of reproduction and fruit production. Late burns showed intermediate results, followed by early burns. We conclude that management strategies based mainly on forage production, which are usually applied in savannas populated by grazing ungulates, may not be the best management strategy for South American savannas. The effects of fire on fruit plants, which have a particular phenological synchronization with the fauna cycle, also need to be observed during the prescription of burns.

Keywords: cerrado biome, fire regimes, native fruits, prescribed burns

Procedia PDF Downloads 218
408 Universal Health Coverage 2019 in Indonesia: The Integration of Family Planning Services in Current Functioning Health System

Authors: Fathonah Siti, Ardiana Irma

Abstract:

Indonesia is currently on track to achieve Universal Health Coverage (UHC) by 2019. The program aims to address issues of disintegration in the implementation and coverage of various health insurance schemes and fragmented fund pooling. Family planning services are covered as one of the benefit packages under preventive care. However, little has been done to examine how the family planning program is managed across levels of government and how family planning services are delivered to the end user. The study was performed through focus group discussions with relevant policy makers and selected programmers at central and district levels. The study also benefited from relevant studies on family planning in the UHC scheme and other supporting data. The study carefully investigates some programmatic implications when family planning is integrated into the UHC program, encompassing the need to recalculate contraceptive logistics for beneficiaries (eligible couples); policy reformulation for contraceptive service provision, including supply chain management; establishment of a family planning standard of procedure; and a call to update the Management Information System. The study confirms that there is a significant increase in the number of contraceptive commodities that need to be procured by the government. Assuming that the contraceptive prevalence rate and commodity costs increase at 0.5% annually as expected, the government needs to allocate almost IDR 5 billion by 2019, excluding fees for service. The government is shifting its focus to maintaining eligible health facilities under the National Population and Family Planning Board networks. By 2019, the government has set strategies to anticipate the provision of family planning services to 45,340 health facilities distributed across 514 districts and 7,000 sub-districts. A clear division of authority has been established among levels of government. Three models of contraceptive supply planning have been developed and are currently in the process of being institutionalized. Pre-service training for family planning services has been piloted in 10 prominent universities. The position of private midwives has been appreciated as part of the system. To ensure quality implementation and health expenditure control, a family planning standard has been established as a reference to determine the set of services required to be delivered to clients properly and the types of health facilities to conduct particular family planning services. Recognition of individual program participation status has been acknowledged in the Family Enumeration since 2015. The data is precisely recorded by name and by address for each family and its members. It supplies valuable information to 15,131 Family Planning Field Workers (FPFWs), who provide information and education related to family planning in an attempt to generate demand and maintain the participation of family planning acceptors who are program beneficiaries. Despite the overwhelming efforts described above, some obstacles remain. The program suffers from poor socialization and has yet to remove geographical barriers for those living in remote areas; family planning services for this sub-population are provided outside the scheme as a complementary strategy. Nevertheless, the UHC program has brought remarkable improvement in access to and quality of family planning services.

Keywords: beneficiary, family planning services, national population and family planning board, universal health coverage

Procedia PDF Downloads 190
407 The International Prohibition of Religiously-Motivated 'Incitement' to Violence

Authors: J. D. Temperman

Abstract:

Introduction: The meaning and scope of freedom of expression have been tested in recent times, particularly in relation to religion. This paper investigates the legal justifications for restrictions that have been suggested in this area and asks whether they are sustainable from an international human rights perspective. The universal human rights instruments, particularly the UN International Covenant on Civil and Political Rights (ICCPR), are increasingly geared towards eradicating ‘incitement’ to contingent harms like violence or discrimination, whilst forms of extreme speech that fall short of such incitement are to be protected rather than countered by states. The Human Rights Committee’s General Comment on freedom of expression, adopted in 2011, provides another strong indication that this is the envisaged way forward: repealing anti-blasphemy and anti-religious defamation laws, whilst simultaneously increasing efforts to combat ‘incitement’. Within regional human rights frameworks, notably the European Convention system, judgments have in fact supported legal restrictions on hate speech, Holocaust denial, and blasphemy or religious defamation. Major contributions to scholarship: This paper proposes an actus reus for the offense of ‘advocacy of religious hatred that constitutes incitement to discrimination or violence’, as enshrined in Article 20(2) of the UN ICCPR. In underscoring the high threshold of ‘incitement’, the author distinguishes this offense from such notions as ‘blasphemy’ or ‘defamation of religions’. In addition to treating the said provision as a sui generis prohibition, the paper addresses the question whether a ‘right to be protected against incitement’ may be distilled from the ICCPR. Furthermore, the author discusses how incitement is to be judged; notably, is mens rea required to convict someone of incitement, and if so, what degree of mens rea? This analysis also includes the question of how to balance content and context factors when addressing alleged instances of incitement, notably which factors make it likely that imminent acts of violence or discrimination will ensue from an inciting speech act. Methodology: This paper takes a double comparative approach: (i) it compares and contrasts monitoring bodies’ approaches to incitement (notably the UN Human Rights Committee, but also the UN Committee on the Elimination of Racial Discrimination, which monitors states’ compliance with Article 4 of ICERD on incitement); and (ii) it charts, compares and analyses from an international human rights perspective recent forms of state practice in dealing with incitement (i.e. a comparative legal analysis and vertical human rights analysis of newly emerging incitement legislation in the light of the said international standards). Conclusion: This paper conceptualizes a legal notion – ‘incitement’ – encapsulated in international human rights law that may have a profound bearing on contemporary challenges of radicalization and religious strife.

Keywords: incitement, international human rights law, religious hatred, violence

Procedia PDF Downloads 308
406 Tectonics of Out-of-Sequence Thrusting in NW Himachal Himalaya, India

Authors: Rajkumar Ghosh

Abstract:

The Jhakri Thrust (JT), Sarahan Thrust (ST), and Chaura Thrust (CT) are the three out-of-sequence thrusts (OOSTs) along the Jhakri-Chaura segment of the Sutlej river valley in Himachal Pradesh. The CT is deciphered only by apatite fission-track dating; such geochronological information is not currently available for the Jhakri and Sarahan thrusts, and the JT was additionally validated as an OOST without any dating. The described rock types include ductile sheared gneisses and schists metamorphosed to upper greenschist-amphibolite facies. Locally, the Munsiari (Jutogh) Thrust is referred to as the JT. The JT, a brittle shear zone, bounds the southern margin of the research area, and the CT, a ductile shear zone, its northern margin. The JT dips 50° west, verges southwestward, and is 15-17 km deep. Previous researchers observed a progressive rise in strain towards the JT zone based on microstructural studies. The high-temperature ranges of the MCT root zone are cited in the current work as supporting evidence for the ductile nature of the OOST. In Himachal Pradesh, the lithological boundaries for the OOSTs are not fixed. In contrast, the Sarahan Thrust strikes NW-SE and is 50-80 m wide. The ST and CT are probably equivalent and are marked by a sheared biotite-chlorite matrix with a top-to-SE kinematic indicator. It is inferred from cross-section balancing that the CT is folded with this anticlinorium. These thrust systems consist of several branches, some of which are still active. The thrust system exhibits complex internal geometry consisting of box folds, boudins, scar folds, crenulation cleavages, kink folds, and tension gashes. Box folds are observed on the hanging wall of the Chaura Thrust. The ductile signature of the CT represents downward steepening of the thrust. After the STDSU stopped deforming, out-of-sequence thrusting was initiated in some sections of the Higher Himalaya. A part of the GHC and part of the LH are thrust southwestward along the Jutogh Thrust/Munsiari Thrust/JT, as is the Jutogh Nappe. The CT is concealed beneath the Jutogh Thrust sheet; hence the basal part of the GHC is not exposed at the surface in the Sutlej River section. Fieldwork and microstructural studies of the Greater Himalayan Crystalline (GHC) along the Sutlej section reveal (a) an initial top-to-SW sense of ductile shearing (CT); (b) brittle-ductile extension (ST); and (c) a uniform top-to-SW sense of brittle shearing (JT). A group of schistose rock samples from the Jutogh Group of the Greater Himalayan Crystalline and quartzite samples from the Rampur Group of the Lesser Himalayan Crystalline were analyzed. No physiographic transition in the area marks a break in the landscape attributable to the OOSTs. OOSTs in the GHC have to date been interpreted mainly from geochronological studies, and proper field evidence is missing. Apart from minimal documentation of OOSTs in geological mapping, there is a lack of suitable rock exposure to generalize the field characteristics of OOSTs in the NW Higher Himalaya. Multiple sets of thrust planes may be activated within this zone, or along a zone in which the OOST is engaged.

Keywords: out-of-sequence thrust, main central thrust, grain boundary migration, South Tibetan detachment system, Jakhri Thrust, Sarahan Thrust, Chaura Thrust, higher Himalaya, greater Himalayan crystalline

Procedia PDF Downloads 72
405 Climate Indices: A Key Element for Climate Change Adaptation and Ecosystem Forecasting - A Case Study for Alberta, Canada

Authors: Stefan W. Kienzle

Abstract:

The increasing number of extreme weather and climate events has significant impacts on society and is the cause of continued and increasing loss of human and animal lives, loss of or damage to property (houses, cars), and associated stresses on the public in coping with a changing climate. A climate index breaks down daily climate time series into meaningful derivatives, such as the annual number of frost days. Climate indices allow for the spatially consistent analysis of a wide range of climate-dependent variables, which enables the quantification and mapping of historical and future climate change across regions. As trends of phenomena such as the length of the growing season change differently in different hydro-climatological regions, mapping needs to be carried out at a high spatial resolution, such as the 10 km by 10 km Canadian Climate Grid, which has interpolated daily values from 1950 to 2017 for minimum and maximum temperature and precipitation. Climate indices form the basis for the analysis and comparison of means, extremes and trends, the quantification of changes, and their respective confidence levels. A total of 39 temperature indices and 16 precipitation indices were computed for the period 1951 to 2017 for the Province of Alberta. Temperature indices include the annual number of days with temperatures above or below certain thresholds (0, ±10, ±20, +25, +30 ºC), frost days and their timing, freeze-thaw days, growing degree days, and energy demands for air conditioning and heating. Precipitation indices include daily and accumulated 3- and 5-day extremes, days with precipitation, periods of days without precipitation, snow, and potential evapotranspiration. The rank-based nonparametric Mann-Kendall statistical test was used to determine the existence and significance levels of all associated trends, and the slope of the trends was determined using the non-parametric Sen's slope estimator. A Google mapping interface was developed to create the website albertaclimaterecords.com, from which each of the 55 climate indices can be queried for any of the 6,833 grid cells that make up Alberta. In addition to the climate indices, climate normals were calculated and mapped for four historical 30-year periods and one future period (1951-1980, 1961-1990, 1971-2000, 1981-2017, 2041-2070). While winters have warmed since the 1950s by between 4-5°C in the south and 6-7°C in the north, summers show the weakest warming over the same period, ranging from about 0.5-1.5°C. New agricultural opportunities exist in central regions, where the number of heat units and growing degree days is increasing and the number of frost days is decreasing. While the number of days below -20ºC has roughly halved across Alberta, the growing season has expanded by between two and five weeks since the 1950s. Interestingly, the numbers of days with heat waves and with cold spells have both increased two- to four-fold over the same period. This research demonstrates the enormous potential of using climate indices at the best regional spatial resolution possible to enable society to understand historical and future climate changes in their region.
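
A minimal sketch of one such index and the two trend tests named above is given below. The frost-day definition, the synthetic data and the helper names (frost_days_per_year, mann_kendall) are assumptions for illustration, not the study's actual processing chain for the Alberta grid.

```python
# Sketch: compute annual frost days from daily minimum temperatures, then test
# the trend with a plain Mann-Kendall test and Sen's slope estimator.
import numpy as np
from scipy import stats

def frost_days_per_year(tmin_daily, years):
    """Count days with minimum temperature below 0 degC for each year."""
    return {yr: int(np.sum(tmin_daily[years == yr] < 0.0)) for yr in np.unique(years)}

def mann_kendall(series):
    """Classic Mann-Kendall S statistic and two-sided p-value (no tie correction)."""
    n = len(series)
    s = sum(np.sign(series[j] - series[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2.0 * (1.0 - stats.norm.cdf(abs(z)))
    return s, z, p

# synthetic daily minimum temperatures for 1951-2017 with a slow warming trend
rng = np.random.default_rng(0)
years = np.repeat(np.arange(1951, 2018), 365)
tmin = rng.normal(loc=-3.0 + 0.05 * (years - 1951), scale=12.0)

counts = frost_days_per_year(tmin, years)
yrs = np.unique(years)
frost = np.array([counts[yr] for yr in yrs])

s, z, p = mann_kendall(frost)
slope, intercept, lo, hi = stats.theilslopes(frost, yrs)   # Sen's slope estimator
print(f"Mann-Kendall S={s:.0f}, z={z:.2f}, p={p:.4f}; Sen's slope={slope:.2f} days/yr")
```

The same pattern (index per grid cell, then trend test and slope) would simply be repeated over every cell and every index to produce the maps described above.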

Keywords: climate change, climate indices, habitat risk, regional, mapping, extremes

Procedia PDF Downloads 93
404 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters

Authors: Trevor C. Brown, David J. Miron

Abstract:

Gas-solid physical adsorption methods are central to the characterization and optimization of the effective surface area, pore size and porosity of materials for applications such as heterogeneous catalysis and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants and Gibbs free energy depend on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Various constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to extract information on adsorbate and adsorbent properties from the isotherm data, but these models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high-surface-area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites up to monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers; these additional layers do not interact with the first layer, and their energetics are taken to be those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both the Langmuir and BET models assume that the affinity of the gas for all adsorption sites is identical, so the calculated monolayer uptake and equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants. These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption, the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement, for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resulting pressure-varying parameters are optimized by minimizing both dynamic and measurement residual squared errors. Validation of this methodology has been achieved by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K and 348 K, and for nitrogen on mesoporous alumina at 77 K, with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modelling provides information on variations of the adsorbent (accessible surface area and micropore volume), the adsorbate (molecular areas and volumes) and the thermodynamics (Gibbs free energies) of the adsorption sites.
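
For reference, the sketch below fits the constant-parameter Langmuir and BET forms that the pressure-varying (FLS-PVLR) treatment described above generalizes; the synthetic data points and initial guesses are assumptions used only to illustrate the two model equations, not data from the study.

```python
# Constant-parameter Langmuir and BET isotherm fits (the baseline that the
# FLS-PVLR approach extends by letting q_m and K or C drift with pressure).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(p, q_m, K):
    """Langmuir uptake at pressure p: q = q_m K p / (1 + K p)."""
    return q_m * K * p / (1.0 + K * p)

def bet(x, q_m, C):
    """BET uptake versus relative pressure x = p/p0."""
    return q_m * C * x / ((1.0 - x) * (1.0 - x + C * x))

# synthetic isotherm data (relative pressure, uptake in mmol/g) -- assumed values
x_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
q_obs = np.array([2.1, 2.9, 3.4, 3.8, 4.2, 4.6])

(qm_l, K_l), _ = curve_fit(langmuir, x_rel, q_obs, p0=[5.0, 10.0])
(qm_b, C_b), _ = curve_fit(bet, x_rel, q_obs, p0=[3.0, 50.0])
print(f"Langmuir: q_m={qm_l:.2f}, K={K_l:.1f};  BET: q_m={qm_b:.2f}, C={C_b:.1f}")
```

In the pressure-varying treatment, q_m and K (or C) are no longer single constants but are allowed to change slowly and smoothly with pressure, with the dynamic and measurement residuals described above penalizing abrupt changes and misfit respectively.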

Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics

Procedia PDF Downloads 234
403 Potential of Polyphenols from Tamarix Gallica towards Common Pathological Features of Diabetes and Alzheimer’s Diseases

Authors: Asma Ben Hmidene, Mizuho Hanaki, Kazuma Murakami, Kazuhiro Irie, Hiroko Isoda, Hideyuki Shigemori

Abstract:

Type 2 diabetes mellitus (T2DM) and Alzheimer’s disease (AD) are characterized as a peripheral metabolic disorder and a degenerative disease of the central nervous system, respectively. It is now widely recognized that T2DM and AD share many pathophysiological features, including impaired glucose metabolism, increased oxidative stress and amyloid aggregation. Amyloid beta (Aβ) is the component of the amyloid deposits in the AD brain, while the component of the amyloidogenic peptide deposits in the pancreatic islets of Langerhans is identified as human islet amyloid polypeptide (hIAPP). These two proteins originate from the amyloid precursor protein and have a high sequence similarity. Although the amino acid sequences of amyloidogenic proteins are diverse, they all adopt a similar structure in aggregates called the cross-beta spine. In addition, extensive studies in past years have found that, like Aβ1-42, IAPP forms early intermediate assemblies as spherical oligomers, implying that these oligomers possess a common folding pattern or conformation. These similarities can be exploited in the search for effective pharmacotherapy for DM, since potent therapeutic agents such as antioxidants with a catechol moiety, proven to inhibit Aβ aggregation, may also play a key role in inhibiting hIAPP aggregation in the treatment of patients with DM. Tamarix gallica is one of the halophyte species with a powerful antioxidant system. Although it was traditionally used for the treatment of various liver metabolic disorders, there is no report on the use of this plant for the treatment or prevention of T2DM and AD. Therefore, the aim of this work is to investigate its protective effect towards T2DM and AD through the isolation and identification of α-glucosidase inhibitors with antioxidant potential, which play an important role in glucose metabolism in diabetic patients, as well as inhibitors of hIAPP polymerization and Aβ aggregation. A structure-activity relationship study was conducted for both assays. For the α-glucosidase inhibitors, the mechanism of action and the synergistic potential when applied with a very low concentration of acarbose were also studied, suggesting that they can not only be used as α-glucosidase inhibitors in their own right but also be combined with established α-glucosidase inhibitors to reduce their adverse effects. The antioxidant potential of the purified substances was evaluated by DPPH and SOD assays. Th-T assays using the 42-mer amyloid β-protein (Aβ42) for AD and hIAPP, a 37-residue peptide secreted by the pancreatic β-cells, for T2DM, together with transmission electron microscopy (TEM), were conducted to evaluate the amyloid aggregation inhibition of the active substances. For α-glucosidase, p-NPG and glucose oxidase assays were performed to determine the inhibition potential and for the structure-activity relationship study. An enzyme kinetics protocol was used to study the mechanism of action. From this research, it was concluded that polyphenols playing a role in glucose metabolism and oxidative stress can also inhibit amyloid aggregation, and that substances with catechol and glucuronide moieties that inhibit amyloid-β aggregation might be used to inhibit the aggregation of hIAPP.

Keywords: α-glucosidase inhibitors, amyloid aggregation inhibition, mechanism of action, polyphenols, structure activity relationship, synergistic potential, tamarix gallica

Procedia PDF Downloads 280
402 How Restorative Justice Can Inform and Assist the Provision of Effective Remedies to Hate Crime, Case Study: The Christchurch Terrorist Attack

Authors: Daniel O. Kleinsman

Abstract:

The 2019 terrorist attack on two masjidain in Christchurch, New Zealand, was a shocking demonstration of the harm that can be caused by hate crime. As legal and governmental responses to the attack struggle to provide effective remedies to its victims, restorative justice has emerged as a tool that can assist, in terms of both meeting victims’ needs and discharging the obligations of the state under the International Covenant on Civil and Political Rights (ICCPR), arts 2(3), 26, 27. Restorative justice is a model that emphasizes the repair of harm caused or revealed by unjust behavior. It also prioritises the facilitation of dialogue, the restoration of equitable relationships, and the prevention of future harm. Returning to the case study, in the remarks of the sentencing judge, the terrorist’s actions were described as a hate crime of vicious malevolence that the Court was required to decisively reject, as anathema to the values of acceptance, tolerance and mutual respect upon which New Zealand’s inclusive society is based and which the country strives to maintain. This was one of the reasons for which the terrorist received a life sentence with no possibility of parole. However, in the report of the Royal Commission of Inquiry into the Attack, it was found that victims felt the attack occurred within the context of widespread racism, discrimination and Islamophobia, where hostile behaviors, including hate-based threats and attacks, were rarely recorded, analysed or acted on. It was also found that the Government had inappropriately concentrated intelligence resources on the risk of ‘Islamist’ terrorism and had failed to adequately respond to concerns raised about threats against the Muslim community. In this light, the remarks of the sentencing judge can be seen to reflect a criminal justice system that, in the absence of other remedies, denies systemic accountability and renders hate crime an isolated incident rather than an expression of more widespread discrimination and hate to be holistically addressed. One of the recommendations of the Royal Commission was to explore with victims the desirability and design of restorative justice processes. This presents an opportunity for victims to meet with state representatives and pursue effective remedies (ICCPR art 2(3)) not only for the harm caused by the terrorist but the harm revealed by a system that has exposed the minority Muslim community in New Zealand to hate in all forms, including but not limited to violent extremism. In this sense, restorative justice can also assist the state in discharging its wider obligations to protect all persons from discrimination (art 26) and allow ethnic and religious minorities to enjoy their own culture and profess and practice their own religion (art 27). It can also help give effect to the law and its purpose as a remedy to hate crime, as expressed in this case study by the sentencing judge.

Keywords: hate crime, restorative justice, minorities, victims' rights

Procedia PDF Downloads 111
401 Statistical Optimization of Adsorption of a Harmful Dye from Aqueous Solution

Authors: M. Arun, A. Kannan

Abstract:

Textile industries cater to varied customer preferences and contribute substantially to the economy. However, these textile industries also produce a considerable amount of effluents. Prominent among these are the azo dyes, which impart considerable color and toxicity even at low concentrations. Azo dyes are also used as coloring agents in the food and pharmaceutical industries. Despite their applications, azo dyes are also notorious pollutants and carcinogens. Popular techniques like photo-degradation, biodegradation and the use of oxidizing agents are not applicable to all kinds of dyes, as most of them are stable to these techniques. Chemical coagulation produces a large amount of toxic sludge, which is undesirable, and is also ineffective towards a number of dyes. Most azo dyes are stable to UV-visible light irradiation and may even resist aerobic degradation. Adsorption has been the most preferred technique owing to its low cost, high capacity and process efficiency and the possibility of regenerating and recycling the adsorbent. Adsorption is also preferred because it may produce treated effluent of high quality and is able to remove different kinds of dyes. However, the adsorption process is influenced by many variables whose inter-dependence makes it difficult to identify optimum conditions. The variables include stirring speed, temperature, initial concentration and adsorbent dosage. Further, the internal diffusional resistance inside the adsorbent particle leads to slow uptake of the solute within the adsorbent. Hence, it is necessary to identify optimum conditions that lead to high capacity and uptake rate for these pollutants. In this work, commercially available activated carbon was chosen as the adsorbent owing to its high surface area. A typical azo dye found in textile effluent waters, viz. the monoazo Acid Orange 10 dye (CAS: 1936-15-8), was chosen as the representative pollutant. Adsorption studies were mainly focused on obtaining equilibrium and kinetic data for the batch adsorption process at different process conditions. Studies were conducted at different stirring speed, temperature, adsorbent dosage and initial dye concentration settings. The full factorial design was the chosen statistical design framework for carrying out the experiments and identifying the important factors and their interactions. The optimum conditions identified from the experimental model were validated with actual experiments at the recommended settings. The equilibrium and kinetic data obtained were fitted to different models and the model parameters were estimated, giving more detail about the nature of the adsorption taking place. Critical data required to design batch adsorption systems for removal of Acid Orange 10 dye, and identification of the factors that critically influence the separation efficiency, are the key outcomes of this research.
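
A compact sketch of a two-level full factorial design over the four variables named above is shown below; the factor levels, the placeholder response values and the simple main-effect calculation are assumptions used to illustrate the workflow, not the study's actual settings or results.

```python
# Illustrative 2^4 full factorial design for a batch dye-adsorption study.
# Factor levels and the simulated response are assumed placeholder values.
import itertools
import numpy as np

factors = {
    "stirring_speed_rpm": (200, 400),
    "temperature_C": (30, 50),
    "adsorbent_dose_gL": (0.5, 2.0),
    "initial_conc_mgL": (50, 150),
}

# full factorial: every combination of low/high levels (2^4 = 16 runs)
runs = list(itertools.product(*factors.values()))
coded = np.array(list(itertools.product((-1, 1), repeat=len(factors))), dtype=float)
print(f"{len(runs)} runs, e.g. first run: {dict(zip(factors, runs[0]))}")

# placeholder response: % dye removal, assumed to rise with temperature and dose
rng = np.random.default_rng(1)
removal = (75 + 2.0 * coded[:, 1] + 5.0 * coded[:, 2] - 3.0 * coded[:, 3]
           + rng.normal(0, 1, len(runs)))

# main effects from coded (-1/+1) levels: effect = mean(high) - mean(low)
for j, name in enumerate(factors):
    effect = removal[coded[:, j] > 0].mean() - removal[coded[:, j] < 0].mean()
    print(f"{name:22s} main effect: {effect:+.2f} % removal")
```

Interaction effects follow the same logic using products of coded columns, which is how the inter-dependence of the variables mentioned above would be quantified before validating the recommended settings experimentally.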

Keywords: acid orange 10, activated carbon, optimum adsorption conditions, statistical design

Procedia PDF Downloads 171
400 Environmental Catalysts for Refining Technology Application: Reduction of CO Emission and Gasoline Sulphur in Fluid Catalytic Cracking Unit

Authors: Loganathan Kumaresan, Velusamy Chidambaram, Arumugam Velayutham Karthikeyani, Alex Cheru Pulikottil, Madhusudan Sau, Gurpreet Singh Kapur, Sankara Sri Venkata Ramakumar

Abstract:

Environmentally driven regulations throughout the world stipulate dramatic improvements in the quality of transportation fuels and refining operations. Exhaust gases such as CO, NOx, and SOx from stationary sources (e.g., refineries) and motor vehicles contribute to a large extent to air pollution. The refining industry is under constant environmental pressure to achieve more rigorous standards on the sulphur content of transportation fuels and on other off-gas emissions. The fluid catalytic cracking unit (FCCU) is a major secondary process in the refinery for gasoline and diesel production. CO-combustion promoter additives and gasoline sulphur reduction (GSR) additives are catalytic systems used in the FCCU, along with the main FCC catalyst, to assist the combustion of CO to CO₂ in the regenerator and to regulate sulphur in the gasoline fraction, respectively. The effectiveness of these catalysts is governed by the active metal used, its dispersion, the type of base material employed, and the retention characteristics of the additive in the FCCU, such as attrition resistance and density. The challenge is to have a high-density microsphere catalyst support for retention and high activity of the active metals, as these catalyst additives are used in low concentration compared to the main FCC catalyst. The first part of the present paper discusses the development of high-density microspheres of nanocrystalline alumina by a hydrothermal method for the CO-combustion promoter application. Performance evaluation of the additive was conducted under simulated regenerator conditions and shows a CO combustion efficiency above 90%. The second part discusses the efficacy of a co-precipitation method for the generation of active crystalline spinels of Zn, Mg, and Cu with aluminium oxides as an additive. Characterization and micro-activity tests using a heavy combined hydrocarbon feedstock at FCC unit conditions were carried out to evaluate gasoline sulphur reduction activity. These additives were characterized by X-ray diffraction, NH₃-TPD, N₂ sorption analysis and TPR analysis to establish structure-activity relationships. Sulphur removal mechanisms involving hydrogen transfer, aromatization and alkylation functionalities are established to rank the GSR additives for their activity, selectivity, and gasoline sulphur removal efficiency. Sulphur shifting into other liquid products, such as heavy naphtha, light cycle oil, and clarified oil, was also studied. PIONA analysis of the liquid product reveals a 20-40% reduction of sulphur in gasoline without compromising the research octane number (RON) of the gasoline or its olefin content.

Keywords: hydrothermal, nanocrystalline, spinel, sulphur reduction

Procedia PDF Downloads 97