Search results for: storage mechanism

420 Litigating Innocence in the Era of Forensic Law: The Problem of Wrongful Convictions in the Absence of Effective Post-Conviction Remedies in South Africa

Authors: Tapiwa Shumba

Abstract:

The right to fairness and access to appeals and reviews enshrined under the South African Constitution seeks to ensure that justice is served. In essence, the constitution and the law have put in place mechanisms to ensure that a miscarriage of justice through wrongful conviction does not occur. However, once an accused has been convicted and the sentence confirmed on appeal, the procedural safeguards seem to fall away, as if to say the accused has met his fate. The challenge with this construction is that even within an ideally perfect legal system wrongful convictions would still occur. Therefore, it is not so much the failings of a legal system that demand attention as the mechanisms to redress the results of such failings where evidence becomes available that a wrongful conviction occurred. In this context, this paper looks at the South African criminal procedural mechanisms for litigating innocence post-conviction. The discussion focuses on the role of section 327 of the South African Criminal Procedure Act and its apparent shortcomings in providing an avenue for victims of miscarriages of justice to litigate their innocence by adducing new evidence at any stage during their wrongful incarceration. By looking at developments in other jurisdictions, such as the United Kingdom, from which South African criminal procedure draws much of its history, and the North Carolina example, itself inspired by the UK Criminal Cases Review Commission, this paper is able to make comparisons and draw invaluable lessons for the South African criminal justice system. Lessons from these foreign jurisdictions show that South African post-conviction criminal procedures need reform in line with the constitutional values of human dignity, equality before the law, openness and transparency. The paper proposes an independent review of the current post-conviction procedures under section 327. The review must look into the effectiveness of the current system and how it can be improved in line with new substantive legal provisions creating access to DNA evidence for post-conviction exonerations. Although the UK CCRC should not be slavishly followed, its operations and the process leading to its establishment certainly provide a good point of reference and invaluable lessons for the South African criminal justice system, seeing that South African law on this aspect has generally followed the English approach, except that the current provisions under section 327 mirror the discredited system of the UK's previous dispensation. A new independent mechanism that treats innocent victims of the criminal justice system with dignity, removed from the current political process, is proposed to enable the South African criminal justice system to benefit fully from recent and upcoming advances in science and technology.

Keywords: innocence, forensic law, post-conviction remedies, South African criminal justice system, wrongful conviction

Procedia PDF Downloads 236
419 Understanding the Reasons for Flooding in Chennai and Strategies for Making It Flood Resilient

Authors: Nivedhitha Venkatakrishnan

Abstract:

Flooding in urban areas in India has become a recurring phenomenon and a nightmare for most cities, a consequence of man-made disruption that ends in disaster. City planning in India falls short of withstanding hydrologically generated disasters. This has become a barrier and a challenge in the process of development: urbanization, high population density, expanding informal settlements, and environmental degradation from uncollected and untreated waste flowing into natural drains and water bodies have disrupted natural hazard-protection mechanisms such as drainage channels, wetlands and floodplains. The magnitude and impact of the mishap were high because of the failure of the development policies, strategies and plans that the city had adopted. In the current scenario, cities are becoming the home of the future, with economic diversification bringing more investment into cities, especially in the domains of urban infrastructure, planning and design. The uncertainty of urban futures in these low-elevation coastal zones faces unprecedented risk and threat. The study focuses on three major pillars of resilience: Recover, Resist and Restore. Getting ready to handle the situation bridges the gap between disaster response management and risk reduction, and requires a paradigm shift. The study involved qualitative research and a system design approach (framework). The initial stage involved mapping the urban water morphology with respect to spatial growth, which gave an insight into the water bodies that have gone missing over the years in the process of urbanization. The major finding of the study was that missing links in the traditional water harvesting network were a major reason behind the man-made disaster. The research conceptualized a sponge city framework that would guide growth through institutional frameworks at different levels. The next stage was understanding the implementation process at various stages to ensure the paradigm shift, and demonstrating the concepts at a neighborhood level: where, how and what the functions and benefits of each component are. Design decisions were quantified in terms of rainwater harvest and surface runoff: how much water is collected, and how it could be collected, stored and reused (a minimal worked example follows below). The study closes with recommendations for Water Mitigation Spaces that will revive the traditional harvesting network.
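
As a rough illustration of the quantification step, the sketch below computes harvestable rainwater per surface type for a hypothetical neighborhood; the rainfall depth, catchment areas and runoff coefficients are assumed values, not figures from the study.

```python
# A minimal sketch, assuming the standard relation: harvestable volume =
# rainfall depth x catchment area x runoff coefficient.
# All inputs below are illustrative, not values from the Chennai study.
annual_rainfall_m = 1.4          # Chennai-scale annual rainfall, in metres
catchments = {                   # area (m^2) and runoff coefficient per surface
    "rooftops": (12_000, 0.85),
    "paved_roads": (8_000, 0.75),
    "green_space": (15_000, 0.15),
}

for name, (area_m2, coeff) in catchments.items():
    volume_m3 = annual_rainfall_m * area_m2 * coeff
    print(f"{name}: {volume_m3:,.0f} m^3/year harvestable")
```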

Keywords: flooding, man made disaster, resilient city, traditional harvesting network, waterbodies

Procedia PDF Downloads 140
418 Incidence of Breast Cancer and Enterococcus Infection: A Retrospective Analysis

Authors: Matthew Cardeiro, Amalia D. Ardeljan, Lexi Frankel, Dianela Prado Escobar, Catalina Molnar, Omar M. Rashid

Abstract:

Introduction: Enterococci comprise the natural flora of nearly all animals and are ubiquitous in food manufacturing and probiotics. However, their role in the microbiome remains controversial. The gut microbiome has been shown to play an important role in immunology and cancer, and recent data suggest a relationship between the gut microbiota and breast cancer: studies have shown that the gut microbiome of patients with breast cancer differs from that of healthy patients. Research regarding enterococcus infection and its sequelae is limited, and further research is needed to understand the relationship between infection and cancer. Enterococcus may prevent the development of breast cancer (BC) through complex immunologic and microbiotic adaptations following an enterococcus infection. This study investigated the effect of enterococcus infection on the incidence of BC. Methods: A retrospective study (January 2010 - December 2019) was conducted using a Health Insurance Portability and Accountability Act (HIPAA) compliant national human health insurance database. International Classification of Disease (ICD) 9th and 10th revision codes, Current Procedural Terminology (CPT) codes, and National Drug Codes were used to identify BC diagnoses and enterococcus infection. Patients were matched for age, sex, Charlson Comorbidity Index (CCI), antibiotic treatment, and region of residence. Chi-squared tests, logistic regression, and odds ratios were used to assess significance and estimate relative risk. Results: 671 out of 28,518 (2.35%) patients with a prior enterococcus infection and 1,459 out of 28,518 (5.12%) patients without enterococcus infection subsequently developed BC, a statistically significant difference (p < 2.2x10⁻¹⁶). Logistic regression also indicated that enterococcus infection was associated with a decreased incidence of BC (RR = 0.60, 95% CI [0.57, 0.63]). Treatment for enterococcus infection was analyzed and controlled for in both the infected and noninfected populations: 398 out of 11,523 (3.34%) patients with a prior enterococcus infection who were treated with antibiotics were compared to 624 out of 11,523 (5.41%) patients with no history of enterococcus infection (control) who received antibiotic treatment; both groups subsequently developed BC. The results remained statistically significant (p < 2.2x10⁻¹⁶), with a relative risk of 0.57 (95% CI [0.54, 0.60]). Conclusion & Discussion: This study shows a statistically significant correlation between enterococcus infection and a decreased incidence of breast cancer. Further exploration is needed to identify and understand not only the role of enterococcus in the microbiome but also the protective mechanism(s) and impact enterococcus infection may have on breast cancer development. Ultimately, further research is needed to understand the complex and intricate relationship between the microbiome, immunology, bacterial infections, and carcinogenesis.
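
As an illustration of the unadjusted arithmetic behind these counts, the sketch below runs a chi-squared test and computes a crude relative risk from the quoted 2x2 table; the study's reported RR (0.60) comes from matched, adjusted models, so it need not equal this crude estimate.

```python
# Illustrative only: unadjusted 2x2 analysis of the counts quoted in the
# abstract. The study itself used matched cohorts and logistic regression.
from scipy.stats import chi2_contingency

exposed_cases, exposed_total = 671, 28518     # enterococcus -> breast cancer
control_cases, control_total = 1459, 28518    # no infection -> breast cancer

table = [
    [exposed_cases, exposed_total - exposed_cases],
    [control_cases, control_total - control_cases],
]
chi2, p, dof, _ = chi2_contingency(table)

crude_rr = (exposed_cases / exposed_total) / (control_cases / control_total)
print(f"chi2={chi2:.1f}, p={p:.2e}, crude RR={crude_rr:.2f}")
```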

Keywords: breast cancer, enterococcus, immunology, infection, microbiome

Procedia PDF Downloads 173
417 Knowledge, Attitude and Beliefs Towards Polypharmacy Amongst Older People Attending Family Medicine Clinic at the Aga Khan University Hospital, Nairobi, Kenya (AKUHN) Sub-Saharan Africa-Qualitative Study

Authors: Maureen Kamau, Gulnaz Mohamoud, Adelaide Lusambili, Njeri Nyanja

Abstract:

Life expectancy has increased over the last century, in particular among those aged 60 years and over. The World Health Organization estimates that the world's population of persons over 60 years will rise to 22 per cent by the year 2050. Ageing is associated with increasing disability, multiple chronic conditions, and an increase in the use of health services, and these multiple chronic conditions are managed with polypharmacy. Polypharmacy has numerous adverse effects, including non-adherence, poor compliance with the various medications, reduced appetite, and risk of falls. Studies on polypharmacy and ageing are few, and the topic is poorly understood, especially in low- and middle-income countries. The aim of this study was to explore the knowledge, attitudes and beliefs of older people towards polypharmacy. A qualitative study of 15 patients aged 60 years and above, each taking more than five medications per day, was conducted at the Aga Khan University using semi-structured in-depth interviews. Three interviews were pilot interviews, and data analysis was performed on the remaining 12 interviews. Data were analyzed using NVIVO 12 software, with a thematic qualitative analysis guided by the Braun and Clarke (2006) framework. The themes identified were: knowledge of their co-morbidities and of the medication that older persons take, sources of information about medicines, and storage of the medication; experiences and attitudes of older patients towards polypharmacy, both positive and negative; and older people's beliefs and their coping mechanisms for polypharmacy. The study participants had good knowledge of their multiple co-morbidities and of the medication they took. The patients had positive attitudes towards medication, as it enhanced their health and well-being and enabled them to perform their activities of daily living. There was a strong belief among older patients that the medications were necessary for their health. All these factors enhanced compliance with the multiple medications. However, some older patients had negative attitudes due to the pill burden, side effects of the medication, and the stigma associated with being ill. The cost of healthcare was a concern, with most of the patients interviewed relying on insurance to cover the cost of their medication. Older patients had accepted that the medications they were prescribed were necessary for their health, as they enabled them to complete their activities of daily living. Some concerns about the side effects of the medication arose, highlighting the need for patient education to ensure that patients are aware of the medications they take and their potential side effects. The effect of the COVID-19 pandemic on the healthcare of older patients was evident in the number of older patients who avoided coming to the hospital during the pandemic. The relationship between the primary care physician and the older patient is an important one, especially in LMICs such as Kenya, as many of the older patients trusted their doctors wholeheartedly to make the best decisions about their health and medication. Prescription review is important to avoid the use of potentially inappropriate medication.

Keywords: polypharmacy, older patients, multiple chronic conditions, Kenya, Africa, qualitative study, in-depth interviews, primary care

Procedia PDF Downloads 98
416 Ergonomic Assessment of Workplace Environment of Flour Mill Workers

Authors: Jayshree P. Zend, Ashatai B. Pawar

Abstract:

The study was carried out in the Parbhani district of Maharashtra state, India, with the objectives of studying the environmental problems faced by flour mill workers, the prevalence of work-related health hazards, and the physiological cost to workers performing flour mill work by the traditional method as well as by an improved method. The use of a flour presser, dust-controlling bag, and noise- and dust-controlling mask developed by the AICRP College of Home Science, VNMKV, Parbhani was considered the improved method. The investigation consisted of a survey and an experiment conducted at the respective flour mill locations. Thirty healthy, non-smoking flour mill workers aged 20-50 years (16 female, 14 male), working at flour mills for 4-8 hours per day, 6 days per week, with a minimum of five years' experience of flour mill work, were selected for the study. Pulmonary function tests of the flour mill workers were carried out by a trained technician at Dr. Shankarrao Chavan Government Medical College, Nanded, using an electronic spirometer. Data regarding heart rate (resting, working and recovery), energy expenditure, musculoskeletal problems, and occupational health hazards and accidents were recorded using a pretested questionnaire. The scientific equipment used in the experiment comprised a Polar sport heart rate monitor, hygrometer, goniometer, dialed thermometer, sound level meter, lux meter, ambient air sampler, and air quality monitor. The collected data were subjected to appropriate statistical analysis, such as the 't' test and the correlation coefficient test (a sketch of the paired comparison appears below). Results indicated that the improved method, i.e., use of the noise- and dust-controlling mask, flour presser, and dust-controlling bag, was effective in reducing the physiological cost of work of flour mill workers. Lung function tests of the flour mill workers showed decreased values for all parameters; hence, the results of the present study support paying attention to the use of personal protective noise- and dust-controlling masks by flour mill workers, and also to the working conditions in flour mills, where especially the ventilation and illumination levels need to be enhanced. The study also emphasizes the need to develop some mechanism for lifting loads of grain and unloading them into the hopper. It is also suggested that flour mill workers should use a flour presser suited to their height to avoid frequent bending, and should fit a dust-controlling bag to the flour outlet of the machine to reduce the inhalable flour dust level in the mill.
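
As an illustration of the statistical comparison described above, the sketch below runs a paired t-test on working heart rate under the traditional versus the improved method; the values are hypothetical stand-ins, not data from the study.

```python
# A minimal sketch, assuming paired working heart-rate readings (beats/min)
# for the same workers under both methods; all numbers are illustrative.
from scipy.stats import ttest_rel

traditional = [112, 118, 109, 121, 115, 119, 111, 117]
improved    = [104, 110, 103, 112, 108, 111, 105, 109]

t_stat, p_value = ttest_rel(traditional, improved)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```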

Keywords: physiological cost, energy expenditure, musculoskeletal problems

Procedia PDF Downloads 401
415 Freight Time and Cost Optimization in Complex Logistics Networks, Using a Dimensional Reduction Method and K-Means Algorithm

Authors: Egemen Sert, Leila Hedayatifar, Rachel A. Rigg, Amir Akhavan, Olha Buchel, Dominic Elias Saadi, Aabir Abubaker Kar, Alfredo J. Morales, Yaneer Bar-Yam

Abstract:

The complexity of providing timely and cost-effective distribution of finished goods from industrial facilities to customers makes effective operational coordination difficult, yet effectiveness is crucial for maintaining customer service levels and sustaining a business. Logistics planning becomes increasingly complex with growing numbers of customers, varied geographical locations, the uncertainty of future orders, and sometimes extreme competitive pressure to reduce inventory costs. Linear optimization methods become cumbersome or intractable due to the large number of variables and nonlinear dependencies involved. Here we develop a complex systems approach to optimizing logistics networks based upon dimensional reduction methods and apply our approach to a case study of a manufacturing company. In order to characterize the complexity in customer behavior, we define a "customer space" in which individual customer behavior is described by only the two most relevant dimensions: the distance to production facilities over current transportation routes and the customer's demand frequency. These dimensions provide essential insight into the domain of effective strategies for customers: direct and indirect strategies. In the direct strategy, goods are sent to the customer directly from a production facility using box or bulk trucks. In the indirect strategy, in advance of an order by the customer, goods are shipped to an external warehouse near the customer using trains and then "last-mile" shipped by trucks when orders are placed. Each strategy applies to an area of the customer space, with an indeterminate boundary between them; in general, specific company policies determine the location of the boundary. We then identify the optimal delivery strategy for each customer by constructing a detailed model of the costs of transportation and temporary storage in a set of specified external warehouses. Customer spaces help give an aggregate view of customer behaviors and characteristics. They allow policymakers to compare customers and develop strategies based on the aggregate behavior of the system as a whole. In addition to optimization over existing facilities, we propose additional warehouse locations using customer logistics and the k-means algorithm (a sketch of this clustering step follows below). We apply these methods to a medium-sized American manufacturing company with a particular logistics network, consisting of multiple production facilities, external warehouses, and customers, along with three types of shipment methods (box truck, bulk truck, and train). For the case study, our method forecasts 10.5% savings on yearly transportation costs and an additional 4.6% savings with three new warehouses.
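
As a sketch of the clustering step referenced above, the snippet below clusters synthetic customer coordinates and treats the centroids as candidate warehouse sites; the data and the choice of plane coordinates are illustrative assumptions.

```python
# A minimal sketch of the warehouse-siting step, assuming customer locations
# are available as (x, y) coordinates; cluster centroids are treated as
# candidate warehouse sites. The data here is synthetic for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
customers = rng.uniform(0, 500, size=(200, 2))   # hypothetical customer coords (km)

k = 3                                            # three new warehouses, as in the case study
model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(customers)
print("candidate warehouse sites:\n", model.cluster_centers_)
```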

Keywords: logistics network optimization, direct and indirect strategies, K-means algorithm, dimensional reduction

Procedia PDF Downloads 139
414 A 1T1R Nonvolatile Memory with Al/TiO₂/Au and Sol-Gel Processed Barium Zirconate Nickelate Gate in Pentacene Thin Film Transistor

Authors: Ke-Jing Lee, Cheng-Jung Lee, Yu-Chi Chang, Li-Wen Wang, Yeong-Her Wang

Abstract:

To avoid the cross-talk issue of a resistive random access memory (RRAM)-only cell, a one-transistor one-resistor (1T1R) architecture, with a TiO₂-based RRAM cell connected to a solution-processed barium zirconate nickelate (BZN) organic thin-film transistor (OTFT), is successfully demonstrated. The OTFT was fabricated on a glass substrate. Aluminum (Al), as the gate electrode, was deposited via a radio-frequency (RF) magnetron sputtering system. The BZN precursors, barium acetate, zirconium n-propoxide, and nickel(II) acetylacetonate, were synthesized using the sol-gel method. After the BZN solution was completely prepared by the sol-gel process, it was spin-coated onto the Al/glass substrate as the gate dielectric. The BZN layer was baked at 100 °C for 10 minutes under ambient air conditions. The pentacene thin film was thermally evaporated onto the BZN layer at a deposition rate of 0.08 to 0.15 nm/s. Finally, a gold (Au) electrode was deposited using the RF magnetron sputtering system and defined through shadow masks as both the source and the drain. The channel length and width of the transistors were 150 and 1500 μm, respectively. For the 1T1R configuration, the RRAM device was fabricated directly on the drain electrode of the TFT device as a simple metal/insulator/metal stack consisting of Al/TiO₂/Au. First, Au was deposited as the bottom electrode of the RRAM device by the RF magnetron sputtering system. Then, the TiO₂ layer was deposited on the Au electrode by sputtering. Finally, Al was deposited as the top electrode. The electrical performance of the BZN OTFT was studied, showing superior transfer characteristics with a low threshold voltage of −1.1 V, a good saturation mobility of 5 cm²/V·s, and a low subthreshold swing of 400 mV/decade. The integration of the BZN OTFT and TiO₂ RRAM devices was finally completed to form the 1T1R configuration, with a low power consumption of 1.3 μW, a low operation current of 0.5 μA, and reliable data retention. Based on the I-V characteristics, the different polarities of bipolar switching are found to be determined by the compliance current through the different distributions of the internal oxygen vacancies in the RRAM and 1T1R devices; this phenomenon is well explained by the proposed mechanism model. These results make the 1T1R promising for practical applications in low-power active-matrix flat-panel displays.

Keywords: one transistor and one resistor (1T1R), organic thin-film transistor (OTFT), resistive random access memory (RRAM), sol-gel

Procedia PDF Downloads 354
413 A Concept in Addressing the Singularity of the Emerging Universe

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. By extrapolating back from its current state, the universe at its early times has been studied, giving rise to the big bang theory. According to this theory, moments after creation the universe was an extremely hot and dense environment; its rapid expansion led to a reduction in its temperature and density. This is evidenced by the cosmic microwave background and the structure of the universe at large scales. However, extrapolating back further from this early state reaches a singularity which cannot be explained by modern physics, and there the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion, yet highly accurate measurements reveal an equal temperature mapping across the universe, which contradicts big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy, so that an equal maximum temperature could be achieved across the early universe. Also, the evidence of quantum fluctuations from this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims at addressing the singularity issue by introducing an energy conversion mechanism. This is accomplished by establishing a state of energy called a "neutral state", with an energy level referred to as "base energy", capable of converting into other states. Although it follows the same principles, the unique quantum state of the base energy allows it to be distinguishable from other states and to have a uniform distribution at the ground level. Although the concept of base energy can be utilized to address the singularity issue, to establish a complete picture the origin of the base energy should also be identified. This matter is the subject of the first study in the series, "A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing", where it is discussed in detail. The proposed concept in this research series thus provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy as one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation

Procedia PDF Downloads 89
412 Post Harvest Fungi Diversity and Level of Aflatoxin Contamination in Stored Maize: Cases of Kitui, Nakuru and Trans-Nzoia Counties in Kenya

Authors: Gachara Grace, Kebira Anthony, Harvey Jagger, Wainaina James

Abstract:

Aflatoxin contamination of maize in Africa poses a major threat to food security and the health of many African people. In Kenya, aflatoxin contamination of maize is high due to environmental, agricultural and socio-economic factors. Many studies have been conducted to understand the scope of the problem, especially at the pre-harvest level. This research was carried out to gather scientific information on fungal populations, diversity and aflatoxin levels during the post-harvest period. The study was conducted in three geographical locations: Kitui, Kitale and Nakuru. Samples were collected from farmers' storage structures and transported to the Biosciences eastern and central Africa (BecA) - International Livestock Research Institute (ILRI) Hub laboratories. Mycoflora were recovered using the direct plating method. A total of five fungal genera (Aspergillus, Penicillium, Fusarium, Rhizopus and Byssochlamys spp.) were isolated from the stored maize samples. The most common fungal species isolated from the three study sites were A. flavus at 82.03%, followed by A. niger and F. solani at 49% and 26%, respectively; the aflatoxin-producing A. flavus was thus recovered in 82.03% of the samples. Aflatoxin levels were analysed both in the maize samples and in vitro. Most of the A. flavus isolates recorded high aflatoxin levels when analysed for aflatoxin B1 using ELISA. In Kitui, all the samples (100%) had aflatoxin levels above 10 ppb, with a total aflatoxin mean of 219.2 ppb. In Kitale, only 3 samples (n=39) had aflatoxin levels below 10 ppb, while in Nakuru the total mean aflatoxin level was 239.7 ppb. When individual samples were analysed using the VICAM fluorometer method, aflatoxin analysis revealed that most of the samples (58.4%) were contaminated. The means were significantly different (p=0.00 < 0.05) across the three locations. Genetic relationships among the A. flavus isolates were determined using 13 Simple Sequence Repeat (SSR) markers, and the results were used to generate a phylogenetic tree with the DARwin5 software program. A total of 5 distinct clusters were revealed among the genotypes, and the isolates appeared to cluster separately according to geographical location. Principal Coordinates Analysis (PCoA) of the genetic distances among the 91 A. flavus isolates explained over 50.3% of the total variation when two coordinates were used to cluster the isolates (a sketch of this step appears below). Analysis of Molecular Variance (AMOVA) showed high variation of 87% within populations and 13% among populations. This research has shown that A. flavus is the main fungal species infecting maize grains in Kenya. The influence of aflatoxins on human populations in Kenya demonstrates a clear need for tools to manage contamination of locally produced maize. Food basket surveys for aflatoxin contamination should be conducted on a regular basis. This would assist in obtaining reliable data on aflatoxin incidence in different food crops and would go a long way in defining control strategies for this menace.
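
To illustrate the PCoA step, the sketch below runs classical principal coordinates analysis on a pairwise distance matrix; the random symmetric matrix stands in for the real SSR-derived genetic distances among the 91 isolates.

```python
# A minimal sketch of classical PCoA, assuming a symmetric pairwise
# distance matrix; the distances here are synthetic for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 91                                  # number of A. flavus isolates
d = rng.uniform(0.1, 1.0, size=(n, n))
D = (d + d.T) / 2                       # symmetric stand-in distance matrix
np.fill_diagonal(D, 0.0)

# Double-center the squared distances, then eigendecompose (Gower's method).
J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
B = -0.5 * J @ (D ** 2) @ J
eigval, eigvec = np.linalg.eigh(B)
order = np.argsort(eigval)[::-1]        # largest eigenvalues first
coords = eigvec[:, order[:2]] * np.sqrt(np.abs(eigval[order[:2]]))

explained = eigval[order[:2]].sum() / eigval[eigval > 0].sum()
print(f"variation explained by two coordinates: {explained:.1%}")
```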

Keywords: aflatoxin, Aspergillus flavus, genotyping, Kenya

Procedia PDF Downloads 277
411 Neuronal Mechanisms of Observational Motor Learning in Mice

Authors: Yi Li, Yinan Zheng, Ya Ke, Yungwing Ho

Abstract:

Motor learning is a process that frequently happens among humans and rodents and is defined as a change in the capability to perform a skill, confirmed by a relatively permanent improvement through practice or experience. There are many ways to learn a behavior, among which is observational learning. Observational learning is the process of learning by watching the behaviors of others, for example, a child imitating parents, learning a new sport by watching training videos, or solving puzzles by watching the solutions. Much research explores observational learning in humans and primates; however, its neuronal mechanism, especially for observational motor learning, remains uncertain. It is well accepted that mirror neurons are essential in the observational learning process. These neurons fire both when a primate performs a goal-directed action and when it sees someone else demonstrating the same action, which suggests they have high firing activity both when completing and when watching the behavior. Mirror neurons are assumed to mediate imitation or to play a critical and fundamental role in action understanding. They are distributed in many brain areas of primates, i.e., the posterior parietal cortex (PPC), premotor cortex (M2), and primary motor cortex (M1) of the macaque brain. However, few researchers have reported the existence of mirror neurons in rodents. To verify the existence of mirror neurons and their possible role in motor learning in rodents, we performed a customised string-pulling behavior task combined with multiple behavior analysis methods, photometry, electrophysiology recording, c-fos staining and optogenetics in healthy mice. After five days of training, the demonstrator (demo) mice showed a significantly quicker response and a shorter time to reach the string; fast, steady and accurate performance in pulling down the string; and more precise grasping of the beads. During three days of observation, the observer mice showed more facial motions when the demo mice performed the behaviors. On the first training day, the observers required fewer trials to find and pull the string. However, the time to find the beads and to pull down the string was unchanged in the successful attempts on the first and subsequent training days, which indicated successful action understanding but failed motor learning through observation in mice. After observation, post-hoc staining revealed that c-fos expression was increased in cognition-related brain areas (medial prefrontal cortex) and motor cortices (M1, M2). In conclusion, this project indicated that observation led to a better understanding of behaviors and activated the cognitive and motor-related brain areas, suggesting the possible existence of mirror neurons in these brain areas.

Keywords: observation, motor learning, string-pulling behavior, prefrontal cortex, motor cortex, cognitive

Procedia PDF Downloads 88
410 Prediction of Terrorist Activities in Nigeria using Bayesian Neural Network with Heterogeneous Transfer Functions

Authors: Tayo P. Ogundunmade, Adedayo A. Adepoju

Abstract:

Terrorist attacks in liberal democracies bring about several pessimistic results, for example, sabotaged public support for the governments they target, disturbance of the peace of a protected environment underwritten by the state, and a limitation on individuals adding to the advancement of the country, among others. Hence, seeking techniques to understand the different factors involved in terrorism, and how to deal with those factors in order to completely stop or reduce terrorist activities, is the topmost priority of the government in every country. This research aims to develop an efficient deep learning-based predictive model for the prediction of future terrorist activities in Nigeria, addressing the low prediction accuracy associated with existing solution methods. The proposed predictive AI-based model, as a counterterrorism tool, will be useful to governments and law enforcement agencies to protect the lives of individuals in society and to improve the quality of life in general. A Heterogeneous Bayesian Neural Network (HETBNN) model was derived with a Gaussian error normal distribution. Three primary transfer functions (HOTTFs) and two derived transfer functions (HETTFs) arising from the convolution of the HOTTFs were used, namely: the Symmetric Saturated Linear transfer function (SATLINS), the Hyperbolic Tangent transfer function (TANH), the Hyperbolic Tangent sigmoid transfer function (TANSIG), the Symmetric Saturated Linear and Hyperbolic Tangent transfer function (SATLINS-TANH), and the Symmetric Saturated Linear and Hyperbolic Tangent Sigmoid transfer function (SATLINS-TANSIG); a sketch of such composite activations appears below. Data on terrorist activities in Nigeria, gathered through questionnaires for the purpose of this study, were used. Mean Square Error (MSE), Mean Absolute Error (MAE) and test error were the forecast evaluation criteria. The results showed that the HETTFs performed better in terms of prediction, and the factors associated with terrorist activities in Nigeria were determined. The proposed predictive deep learning-based model will be useful to governments and law enforcement agencies as an effective counterterrorism mechanism for understanding the parameters of terrorism and designing strategies to deal with terrorism before an incident actually happens and potentially causes the loss of precious lives. The proposed predictive AI-based model will reduce the chances of terrorist activities and is particularly helpful for security agencies to predict future terrorist activities.
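
As a loose illustration of composite activations of this kind, the snippet below chains SATLINS with TANH in a small network; this is a sketch under assumptions, not the authors' HETBNN, which is Bayesian and derives its heterogeneous functions by convolution.

```python
# A minimal sketch: SATLINS is the symmetric saturating linear function
# (clip to [-1, 1]) and the SATLINS-TANH composite simply chains the two.
# Architecture and sizes are illustrative, not the paper's model.
import torch
import torch.nn as nn

def satlins(x):                 # symmetric saturated linear transfer function
    return torch.clamp(x, -1.0, 1.0)

class HetNet(nn.Module):
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, 1)

    def forward(self, x):
        h = torch.tanh(satlins(self.fc1(x)))   # SATLINS-TANH composite
        return self.fc2(h)

net = HetNet(d_in=10, d_hidden=16)
print(net(torch.randn(4, 10)).shape)           # torch.Size([4, 1])
```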

Keywords: activation functions, Bayesian neural network, mean square error, test error, terrorism

Procedia PDF Downloads 165
409 Aesthetics and Semiotics in Theatre Performance

Authors: Păcurar Diana Istina

Abstract:

Structured in three chapters, the article attempts an X-ray of theatrical aesthetics, correctly understood through the emotions generated in the intimate structure of the spectator, which precede the triggering of the viewer's perception, and not through the common but unfortunate conflation of the notion of aesthetics with the style in which a theater show is built. The first chapter contains a brief history of the appearance of the word aesthetic, the formulation of definitions for this new term, and its connections with the notions of semiotics, in particular with the perception of the transmitted message. Starting with Aristotle and Plato, and reaching Magritte, these interventions should not be interpreted in the sense that the two scientific concepts can merge into one discipline. Perception as the object of everyone's analysis, the understanding of meaning, the decoding of transmitted messages, and the triggering of feelings that culminate in pleasure, shaping the aesthetic vision, are some of the elements that keep semiotics and aesthetics distinct, even though they share many methods of analysis. The compositional processes of aesthetic representation and symbolic formation are analyzed in the second part of the paper from perspectives that may or may not include historical, cultural, social, and political processes. Aesthetics and the organization of its symbolic process are treated with expressive activity taken into account. The last part of the article explores the notion of aesthetics in applied theater, more specifically in the theater show. Taking the postmodern approach that aesthetics applies to both the creation of an artifact and the reception of that artifact, the intervention of these elements in the theatrical system must be emphasized, that is, the analysis of the problems arising in the stages of the creation, presentation, and reception, by the public, of the theater performance. The aesthetic process is triggered involuntarily, simultaneously with, or before, the moment when people perceive the meaning of the messages transmitted by the work of art. This finding makes the mental process of aesthetics similar or related to that of semiotics. However individually beauty is perceived, its mechanism of production can be reduced to two steps. The first step presents similarities to Peirce's model, but the process between signifier and signified additionally stimulates the related memory of the evaluation of beauty, adding to the meanings related to the signification itself. Then, in the second step, a process of comparison follows, in which one examines whether the object being looked at matches the accumulated memory of beauty. Therefore, even though aesthetics derives from the conceptual part, the judgment of beauty, and more than that, moral judgment, come to be so important to the social activities of human beings that they evolve as a visible process independent of other conceptual contents.

Keywords: aesthetics, semiotics, symbolic composition, subjective joints, signifying, signified

Procedia PDF Downloads 109
408 The Role of Supply Chain Agility in Improving Manufacturing Resilience

Authors: Maryam Ziaee

Abstract:

This research proposes a new approach that provides an opportunity for manufacturing companies to produce large amounts of products that meet their prospective customers' tastes, needs, and expectations, while simultaneously enabling manufacturers to increase their profit. Mass customization is the production of products or services that meet each individual customer's desires to the greatest possible extent, in high quantities and at reasonable prices. This process takes place at different levels, such as the customization of goods' design, assembly, sale, and delivery status, and is classified into several categories. The main focus of this study is on one class of mass customization, called optional customization, in which companies try to provide their customers with as many options as possible to customize their products. These options can range from the design phase to the manufacturing phase, or even to the method of delivery. Mass customization values customers' tastes, but that is only one side of client satisfaction; the other side is the company's fast, responsive delivery. This brings in the concept of agility: the ability of a company to respond rapidly to changes in volatile markets in terms of volume and variety. Indeed, mass customization is not effectively feasible without integrating the concept of agility. To gain customers' satisfaction, companies need to be quick in responding to their customers' demands, highlighting the significance of agility. This research offers a different method that successfully integrates mass customization and fast production in manufacturing industries. It is built upon the hypothesis that the key to success in being agile in mass customization is to forecast demand, cooperate with suppliers, and control inventory; the significance of the supply chain (SC) is therefore most pertinent at this stage. Since SC behavior is dynamic and changes constantly, companies have to apply a predicting technique to identify the changes associated with SC behavior, to be able to respond properly to any unwelcome event. System dynamics, utilized in this research, is a simulation approach that provides a mathematical model of the relations among different variables in order to understand, control, and forecast SC behavior (a stock-and-flow sketch appears below). The final element is delayed differentiation, the production strategy considered in this research: the main platform of products is produced and stocked, and when the company receives an order from a customer, a specific customized feature is assigned to this platform and the customized product is created. The main research question is to what extent applying system dynamics to the prediction of SC behavior improves the agility of mass customization. The research is built upon a qualitative approach to bring about richer, deeper, and more revealing results; the data are collected through interviews and analyzed with NVivo software. The proposed model offers numerous benefits, such as a reduction in the number of product inventories and their storage costs, improvement in the resilience of companies' responses to their clients' needs and tastes, an increase in profits, and the optimization of productivity with a minimum level of lost sales.
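
As a minimal illustration of the system-dynamics idea, the sketch below simulates a single platform-product stock under delayed differentiation, with stochastic demand, a replenishment flow, and an inventory-adjustment delay; all parameters are invented for illustration.

```python
# A minimal stock-and-flow sketch, assuming one platform-product inventory:
# demand draws units out, production pushes the stock back toward a target
# over an adjustment delay. Parameters are illustrative, not from the study.
import numpy as np

rng = np.random.default_rng(42)
steps, stock, target = 52, 100.0, 120.0    # weeks, initial stock, target stock
adjust_time = 4.0                          # weeks to close the inventory gap

history = []
for t in range(steps):
    demand = max(0.0, rng.normal(25, 5))       # stochastic weekly orders
    production = max(0.0, (target - stock) / adjust_time + demand)
    stock += production - demand               # stock-flow update
    history.append(stock)

print(f"mean stock: {np.mean(history):.1f}, min stock: {np.min(history):.1f}")
```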

Keywords: agility, manufacturing, resilience, supply chain

Procedia PDF Downloads 89
407 Assessment of On-Site Solar and Wind Energy at a Manufacturing Facility in Ireland

Authors: A. Sgobba, C. Meskell

Abstract:

The feasibility of on-site electricity production from solar and wind, and the resulting load management, is assessed for a specific manufacturing plant in Ireland. The industry sector accounts, directly and indirectly, for a high percentage of electricity consumption and global greenhouse gas emissions; therefore, it will play a key role in emission reduction and control. Manufacturing plants, in particular, are often located in non-residential areas, since they require open spaces for production machinery, parking facilities for employees, appropriate routes for supply and delivery, and special connections to the national grid, and have other environmental impacts. Since they have larger spaces than commercial sites in urban areas, they represent an appropriate case study for evaluating the technical and economic viability of energy system integration with low-power-density technologies, such as solar and wind, for on-site electricity generation. The available open space surrounding the analysed manufacturing plant can be used to produce a discrete quantity of energy, instantaneously and locally consumed, so transmission and distribution losses can be reduced. Storage is not required, due to the plant's high and almost constant electricity consumption profile. The energy load of the plant is identified through the analysis of gas and electricity consumption, both internally monitored and reported on the bills; these data are often not recorded or available to third parties, since manufacturing companies usually keep track only of overall energy expenditure. The solar potential is modelled over a period of 21 years based on global horizontal irradiation data; the hourly direct and diffuse radiation, and the energy produced by the system at the optimum pitch angle, are calculated. The model is validated using the PVWatts and SAM tools. Wind speed data are available for the same period at a one-hour step at a height of 10 m. Since the hub of a typical wind turbine reaches a higher altitude, complementary data for a different location at 50 m were compared, and a model for the estimation of wind speed at the required height at the right location was defined (a sketch of this step appears below). A Weibull statistical distribution is used to evaluate the wind energy potential of the site. The results show that solar and wind energy are, as expected, generally decoupled. Based on the real case study, the percentage of load covered every hour by on-site generation (Level of Autonomy, LA) and the resulting electricity bought from the grid (Expected Energy Not Supplied, EENS) are calculated. The economic viability of the project is assessed through Net Present Value (NPV), and the influence of the main technical and economic parameters on NPV is presented. Since the results show that the analysed renewable sources cannot provide enough electricity, integration with a cogeneration technology is studied. Finally, the benefit to energy system integration of wind, solar, and a cogeneration technology is evaluated and discussed.
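
As an illustration of the wind-resource steps, the sketch below extrapolates a 10 m wind-speed series to hub height with the standard power law and fits a Weibull distribution; the synthetic series, shear exponent, and hub height are assumptions, not the study's data.

```python
# A minimal sketch: (1) power-law height extrapolation of 10 m wind speeds,
# (2) Weibull fit for the wind resource. The shear exponent alpha is
# site-specific; 0.14 is a common open-terrain assumption.
from scipy.stats import weibull_min

v10 = weibull_min.rvs(2.0, scale=6.0, size=8760, random_state=7)  # synthetic 10 m series

alpha, hub = 0.14, 50.0
v_hub = v10 * (hub / 10.0) ** alpha          # power-law height extrapolation

k, loc, c = weibull_min.fit(v_hub, floc=0)   # Weibull shape k and scale c
print(f"Weibull fit at {hub:.0f} m: shape k={k:.2f}, scale c={c:.2f} m/s")
```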

Keywords: demand, energy system integration, load, manufacturing, national grid, renewable energy sources

Procedia PDF Downloads 129
406 The Use of Empirical Models to Estimate Soil Erosion in Arid Ecosystems and the Importance of Native Vegetation

Authors: Meshal M. Abdullah, Rusty A. Feagin, Layla Musawi

Abstract:

When humans mismanage arid landscapes, soil erosion can become a primary mechanism that leads to desertification. This study focuses on applying soil erosion models to a disturbed landscape in Umm Nigga, Kuwait, and identifying its predicted change under restoration plans. The northern portion of Umm Nigga, containing both coastal and desert ecosystems, falls within the boundaries of the Demilitarized Zone (DMZ) adjacent to Iraq and has been fenced off to restrict public access since 1994. The central objective of this project was to utilize GIS and remote sensing to compare the MPSIAC (Modified Pacific Southwest Inter-Agency Committee), EMP (Erosion Potential Method), and USLE (Universal Soil Loss Equation) soil erosion models and determine their applicability for arid regions such as Kuwait. Spatial analysis was used to develop the necessary datasets for factors such as soil characteristics, vegetation cover, runoff, climate, and topography. Results showed that the MPSIAC and EMP models produced a similar spatial distribution of erosion, though the MPSIAC had more variability. For the MPSIAC model, approximately 45% of the land surface ranged from moderate to high soil loss, while 35% ranged from moderate to high for the EMP model. The USLE model (see the worked formula below) had contrasting results and a different spatial distribution of soil loss, with 25% of the area ranging from moderate to high erosion and 75% ranging from low to very low. We concluded that MPSIAC and EMP were the most suitable models for arid regions in general, with the MPSIAC model the best. We then applied the MPSIAC model to identify the amount of soil loss between coastal and desert areas, and between fenced and unfenced sites. In the desert area, soil loss differed between fenced and unfenced sites: in the desert fenced sites, 88% of the surface was covered with vegetation and soil loss was very low, while at the desert unfenced sites vegetation cover was 3% and soil loss correspondingly higher. In the coastal areas, the amount of soil loss was nearly the same between fenced and unfenced sites. These results imply that vegetation cover plays an important role in reducing soil erosion, and that fencing is much more important in the desert ecosystems to protect against overgrazing. When applying the MPSIAC model predictively, we found that vegetation cover could be increased from 3% to 37% in unfenced areas, and soil erosion would then decrease by 39%. We conclude that the MPSIAC model is the best for predicting soil erosion in arid regions such as Kuwait.
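
For reference, the USLE estimate named above is the product A = R × K × LS × C × P; the sketch below evaluates it with illustrative scalar factors, whereas the study derived the factors per pixel from GIS layers.

```python
# A sketch of the USLE calculation, A = R * K * LS * C * P, with illustrative
# factor values for a sparsely vegetated arid site; not values from the study.
R = 90.0    # rainfall erosivity (MJ mm ha^-1 h^-1 yr^-1)
K = 0.30    # soil erodibility
LS = 1.2    # slope length-steepness factor
C = 0.45    # cover-management factor (sparse arid vegetation)
P = 1.0     # support practice factor (no conservation practice)

A = R * K * LS * C * P          # annual soil loss (t ha^-1 yr^-1)
print(f"predicted soil loss: {A:.1f} t/ha/yr")
```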

Keywords: soil erosion, GIS, Modified Pacific Southwest Inter-Agency Committee model (MPSIAC), Erosion Potential Method (EMP), Universal Soil Loss Equation (USLE)

Procedia PDF Downloads 297
405 Radiation Induced DNA Damage and Its Modification by Herbal Preparation of Hippophae rhamnoides L. (SBL-1): An in vitro and in vivo Study in Mice

Authors: Anuranjani Kumar, Madhu Bala

Abstract:

Ionising radiation exposure induces the generation of free radicals and oxidative DNA damage. SBL-1, a radioprotective extract prepared from the leaves of Hippophae rhamnoides L. (common name: Seabuckthorn), showed > 90% survival in mice treated with a lethal dose (10 Gy) of ⁶⁰Co gamma irradiation. In this study, the early effects of pre-treatment with or without SBL-1 in peripheral blood mononuclear cells (PBMCs) were investigated by cell viability assays (trypan blue and MTT). A quantitative in vitro study with Hoechst/PI staining was performed to check apoptosis/necrosis in PBMCs irradiated at 2 Gy, with or without pretreatment with SBL-1 at different concentrations, up to 24 and 48 h. The comet assay was performed in vivo to detect DNA strand breaks and their repair in peripheral blood lymphocytes at the lethal dose (10 Gy). For this study, male mice (wt. 28 ± 2 g) were administered a radioprotective dose (30 mg/kg body weight) of SBL-1, 30 min prior to irradiation. Animals were sacrificed at 24 h and 48 h, blood was drawn through cardiac puncture, and blood lymphocytes were separated using a Histopaque column. Both neutral and alkaline comet assays were performed using standardized techniques (the tail metrics are sketched below). In irradiated animals, the alkaline comet assay revealed single strand breaks (SSBs) with a significant (p < 0.05) increase in percent DNA in tail and Olive tail moment (OTM) at 24 h, while at 48 h the percent DNA in tail increased further significantly (p < 0.02). The double strand breaks (DSBs) increased significantly (p < 0.01) at 48 h in the neutral assay, in comparison to the untreated control. The animals pre-treated with SBL-1 before irradiation showed significantly (p < 0.05) fewer DSBs at 48 h in comparison to the irradiated group of animals. The SBL-1 alone treated group itself showed no toxicity. The antioxidant potential of SBL-1 was also investigated by in vitro biochemical assays such as DPPH (p < 0.05), ABTS, reducing ability (p < 0.09), hydroxyl radical scavenging (p < 0.05), ferric reducing antioxidant power (FRAP), superoxide radical scavenging activity (p < 0.05), and hydrogen peroxide scavenging activity (p < 0.05). SBL-1 showed strong free radical scavenging power, which plays an important role in studies of radiation-induced injuries. SBL-1-treated PBMCs showed significant (p < 0.02) viability in the trypan blue assay at 24 h of incubation.
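
As an illustration of the comet-assay tail metrics reported above, the sketch below computes percent DNA in tail and a conventional Olive tail moment from per-cell densitometry values; the numbers are hypothetical, and OTM definitions vary slightly between scoring systems.

```python
# A minimal sketch, assuming per-cell head/tail intensities from comet image
# analysis; OTM is conventionally (tail mean - head mean) * %DNA in tail / 100.
def percent_dna_in_tail(tail_intensity: float, head_intensity: float) -> float:
    return 100.0 * tail_intensity / (tail_intensity + head_intensity)

def olive_tail_moment(tail_mean: float, head_mean: float, pct_tail: float) -> float:
    return (tail_mean - head_mean) * pct_tail / 100.0

pct = percent_dna_in_tail(tail_intensity=350.0, head_intensity=650.0)  # hypothetical
print(f"%DNA in tail: {pct:.1f}, OTM: {olive_tail_moment(48.0, 20.0, pct):.2f}")
```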

Keywords: radiation, SBL-1, SSBs, DSBs, FRAP, PBMCs

Procedia PDF Downloads 154
404 High Throughput LC-MS/MS Studies on Sperm Proteome of Malnad Gidda (Bos Indicus) Cattle

Authors: Kerekoppa Puttaiah Bhatta Ramesha, Uday Kannegundla, Praseeda Mol, Lathika Gopalakrishnan, Jagish Kour Reen, Gourav Dey, Manish Kumar, Sakthivel Jeyakumar, Arumugam Kumaresan, Kiran Kumar M., Thottethodi Subrahmanya Keshava Prasad

Abstract:

Spermatozoa are highly specialized, transcriptionally and translationally inactive haploid male gametes. An understanding of the sperm proteome is indispensable for exploring the mechanisms of sperm motility and fertility. Though there are a large number of human sperm proteomic studies, in-depth proteomic information on Bos indicus spermatozoa is not yet well established. Therefore, we profiled the sperm proteome of the indigenous cattle breed Malnad Gidda (Bos indicus) using high-resolution mass spectrometry. In the current study, two semen ejaculates were collected from each of 3 breeding bulls using the artificial vagina method. Spermatozoa were isolated using 45% Percoll purification. Protein was extracted using a lysis buffer containing 2% Sodium Dodecyl Sulphate (SDS), and the protein concentration was estimated. Fifty micrograms of protein from each individual were pooled for downstream processing. The pooled sample was fractionated using SDS-Polyacrylamide Gel Electrophoresis, followed by in-gel digestion. The peptides were subjected to C18 StageTip clean-up and analyzed on an Orbitrap Fusion Tribrid mass spectrometer interfaced with a Proxeon Easy-nano LC II system (Thermo Scientific, Bremen, Germany). We identified a total of 6,773 peptides with 28,426 peptide spectral matches, belonging to 1,081 proteins. Gene ontology analysis was carried out to determine the biological processes, molecular functions and cellular components associated with the sperm proteins. The biological processes chiefly represented in our data were the oxidation-reduction process (5%), spermatogenesis (2.5%) and spermatid development (1.4%). The highlighted molecular functions were ATP and GTP binding (14%), and the most prominent cellular components observed in our data were the nuclear membrane (1.5%), acrosomal vesicle (1.4%), and motile cilium (1.3%). Seventeen percent of the sperm proteins identified in this study were involved in metabolic pathways. To the best of our knowledge, this dataset represents the first total sperm proteome from the indigenous cattle breed Malnad Gidda. We believe that our preliminary findings provide a strong base for the future understanding of bovine sperm proteomics.

Keywords: Bos indicus, Malnad Gidda, mass spectrometry, spermatozoa

Procedia PDF Downloads 196
403 Assessing of Social Comfort of the Russian Population with Big Data

Authors: Marina Shakleina, Konstantin Shaklein, Stanislav Yakiro

Abstract:

The digitalization of modern human life over the last decade has facilitated the acquisition, storage, and processing of data, which are used to detect changes in consumer preferences and to improve the internal efficiency of the production process. This emerging trend has attracted academic interest in the use of big data in research. The study focuses on modeling the social comfort of the Russian population for the period 2010-2021 using big data. Big data provides enormous opportunities for understanding human interactions at the scale of society, with plenty of space and time dynamics. One of the most popular big data sources is Google Trends. The methodology for assessing social comfort using big data involves several steps. 1. 574 words were selected based on the Harvard IV-4 Dictionary, adjusted to fit the reality of everyday Russian life; the set of keywords was further cleansed by excluding queries consisting of verbs and words with several lexical meanings. 2. Search queries were processed to ensure comparability of results: transformation of the data to a 10-point scale, elimination of popularity peaks, detrending, and deseasoning. The proposed methodology for keyword search and Google Trends processing was implemented as a script in the Python programming language. 3. Block and summary integral indicators of social comfort were constructed using the first modified principal component, whose loadings give the weighting coefficients of the block components (a sketch of this step appears below). According to the study, social comfort is described by 12 blocks: ‘health’, ‘education’, ‘social support’, ‘financial situation’, ‘employment’, ‘housing’, ‘ethical norms’, ‘security’, ‘political stability’, ‘leisure’, ‘environment’, ‘infrastructure’. According to the model, the summary integral indicator increased by 54% to 4.631 points; the average annual growth rate was 3.6%, which is 2.7 percentage points higher than the rate of economic growth. The value of the indicator describing social comfort in Russia is determined 26% by ‘social support’, 24% by ‘education’, 12% by ‘infrastructure’, 10% by ‘leisure’, and the remaining 28% by the other blocks. Among the 25% most popular searches, 85% are negative in nature and mainly related to the blocks ‘security’, ‘political stability’, and ‘health’, for example, ‘crime rate’ and ‘vulnerability’. Among the 25% least popular queries, 99% are positive and mostly related to the blocks ‘ethical norms’, ‘education’, and ‘employment’, for example, ‘social package’ and ‘recycling’. In conclusion, the introduction of the latent category ‘social comfort’ into the scientific vocabulary deepens the theory of the quality of life of the population by studying the involvement of the individual in society and expanding the subjective aspect of the measurement of various indicators. The integral assessment of social comfort demonstrates the overall picture of the development of the phenomenon over time and space and quantitatively evaluates ongoing socio-economic policy. The application of big data to the assessment of latent categories gives stable results, which opens up possibilities for their practical implementation.
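
As a rough illustration of step 3, the sketch below derives block weights from the first principal component of synthetic block series and combines them into a summary indicator; the data, sign alignment, and normalization are assumptions for illustration, not the authors' exact modified component.

```python
# A minimal sketch, assuming each column is a cleaned 10-point block series
# (e.g. 'health', 'education'); first-PC loadings serve as block weights.
# The uniform random data below is a synthetic stand-in.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
blocks = rng.uniform(1, 10, size=(144, 12))   # 12 blocks, monthly 2010-2021

pca = PCA(n_components=1).fit(blocks)
loadings = np.abs(pca.components_[0])          # sign-aligned first-PC loadings
weights = loadings / loadings.sum()            # block weights summing to 1
integral = blocks @ weights                    # summary integral indicator
print("weights:", np.round(weights, 3), "| latest value:", round(integral[-1], 3))
```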

Keywords: big data, Google trends, integral indicator, social comfort

Procedia PDF Downloads 200
402 In Support of Sustainable Water Resources Development in the Lower Mekong River Basin: Development of Guidelines for Transboundary Environmental Impact Assessment

Authors: Kongmeng Ly

Abstract:

The management of transboundary river basins across developing countries, such as the Lower Mekong River Basin (LMB), is frequently challenging, given the divergent development and conservation priorities of the basin countries. Driven by the need to sustain economic performance and reduce poverty, the LMB countries (Cambodia, Lao PDR, Thailand, Viet Nam) are embarking on significant land use changes in the form of hydropower dams to fulfill their energy requirements. This pathway could lead to irreversible changes to the ecosystem of the Mekong River if not properly managed. Given the uncertain trade-offs of hydropower development and operation, the LMB countries, through the technical support of the Mekong River Commission (MRC) Secretariat, embarked on the decade-long development of technical guidelines for transboundary environmental impact assessment. Through a series of workshops, seminars, national and regional consultations, and pilot studies, and further development following the recommendations generated by legal and institutional reviews undertaken over a two-decade period, the LMB countries jointly adopted the MRC Technical Guidelines for Transboundary Environmental Impact Assessment (TbEIA Guidelines). These guidelines were developed with particular regard to the experience gained from the MRC-supported consultations and technical reviews of the Xayaburi Dam Project, the Don Sahong Hydropower Project, and the Pak Beng Hydropower Project, and from lessons learned in the Srepok River and Se San River case studies commissioned by the MRC under the generous support of development partners around the globe. As adopted, the TbEIA Guidelines are designed as a mechanism supporting the national EIA legislation, processes and systems of each Member Country. In recognition of the already agreed mechanisms, the TbEIA Guidelines build on and supplement the agreements stipulated in the 1995 Agreement on the Cooperation for the Sustainable Development of the Mekong River Basin and its Procedural Rules, addressing the potential transboundary environmental impacts of development projects and ensuring mutual benefits from the Mekong River and its resources. Since their adoption in 2022, the TbEIA Guidelines have already been voluntarily implemented by Lao PDR on its under-development Sekong A Downstream Hydropower Project, located on the Sekong River, a major tributary of the Mekong River. While this implementation is ongoing, with results expected in early 2024, it has thus far strengthened cooperation among the concerned Member Countries, with multiple successful open dialogues organized at national and regional levels. It is hoped that lessons learnt from this application will lead to a wider application of the TbEIA Guidelines for future water resources development projects in the LMB.

Keywords: transboundary, EIA, Lower Mekong River Basin, Mekong River

Procedia PDF Downloads 37
401 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics

Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee

Abstract:

Recent advances in Generative Artificial Intelligence (GenAI), in particular Large Language Models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably so-called "hallucinations", the generation of outputs that are not grounded in the input data, which hinder adoption in production. A common practice to mitigate the hallucination problem is to use a Retrieval Augmented Generation (RAG) system to ground LLM responses in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts by vector similarity between the user's query and the documents, and then generates a response based not only on the model's pre-trained knowledge but also on the specific information from the retrieved documents. However, a RAG system is not well suited to tabular data and subsequent data analysis tasks for multiple reasons, such as information loss, data format, and the retrieval mechanism. In this study, we explored a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements, then convert them into executable segments of code; in the final step, it generates the complete response from the output of the executed code (a minimal sketch of this pattern follows below). When deployed as a beta version on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results: it provided market insights and served data visualization needs with high accuracy and extensive coverage, abstracting the complexities away from real-estate agents and developers from non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding in market understanding and enhancement without the need for programming skills. The implications extend beyond immediate analytics, paving the way for a new era in the real estate industry characterized by efficiency and advanced data utilization.
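A minimal sketch of the planning-and-execution pattern described above; call_llm is a placeholder for any completion API, and the prompts and helper names are illustrative assumptions, not the DataSense implementation:

```python
# Hedged sketch of a planning-and-execution + code-generation agent.
# call_llm is a stub for any LLM completion API; prompts and the sample
# DataFrame columns are illustrative, not the production system.
import pandas as pd

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; wire up a real client here."""
    raise NotImplementedError

def plan(task: str) -> list[str]:
    # Planning agent: dissect the analytical task into simpler sub-tasks.
    response = call_llm(f"Break this analysis task into numbered sub-tasks:\n{task}")
    return [line.split(". ", 1)[1] for line in response.splitlines() if ". " in line]

def execute(subtask: str, df: pd.DataFrame) -> object:
    # Code-generation agent: turn a sub-task into an executable pandas snippet.
    code = call_llm(
        f"Write Python using a DataFrame `df` with columns {list(df.columns)} "
        f"that stores its answer in a variable named `result`.\nSub-task: {subtask}"
    )
    scope = {"df": df, "pd": pd}
    exec(code, scope)          # in production this execution should be sandboxed
    return scope.get("result")

def analyze(task: str, df: pd.DataFrame) -> str:
    results = {s: execute(s, df) for s in plan(task)}
    # Final step: compose the complete response from the executed outputs.
    return call_llm(f"Summarize these findings for the user:\n{results}")
```

Grounding the final answer in the outputs of executed code, rather than in free-form generation, is what limits hallucination in this pattern.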

Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, DataSense, PropertyGuru

Procedia PDF Downloads 87
400 Numerical Investigation of Multiphase Flow Structure for the Flue Gas Desulfurization

Authors: Cheng-Jui Li, Chien-Chou Tseng

Abstract:

This study adopts the Computational Fluid Dynamics (CFD) technique to build a multiphase flow numerical model in which the interface between the flue gas and the desulfurization liquid is traced by an Eulerian-Eulerian model. Inside the tower, contact between the desulfurization liquid sprayed from the nozzles and the flue gas flow triggers chemical reactions that remove sulfur dioxide from the exhaust gas. Experimental observations of an industrial-scale plant show that the desulfurization mechanism depends on the degree of mixing between the flue gas and the desulfurization liquid. The mixing efficiency and the residence time can be increased by perforated sieve trays, significantly improving desulfurization efficiency. Hence, the purpose of this research is to investigate the flow structure around the sieve trays in flue gas desulfurization by numerical simulation. The FGD tower studied has an outlet at the top to discharge the clean gas and a deep tank at the bottom to collect the slurry liquid. In the major desulfurization zone, the desulfurization liquid and flue gas form a complex mixing flow. This zone contains four perforated plates spaced 0.4 m apart, with a spray array of 33 nozzles placed above the top sieve tray; each nozzle injects desulfurization liquid consisting of a Mg(OH)2 solution. For each sieve tray, the outside diameter, hole diameter, and porosity are 0.6 m, 20 mm, and 34.3%, respectively. The flue gas enters the FGD tower through the space between the major desulfurization zone and the deep tank and leaves cleaned at the top, while the desulfurization liquid and the liquid slurry fall to the bottom tank and are discharged as waste. When the desulfurization solution impacts a sieve tray, its downward momentum is dissipated at the upper surface of the tray. As a result, a thin liquid layer, the so-called slurry layer, develops above the sieve tray, in which the liquid volume fraction is around 0.3~0.7. The liquid phase therefore cannot be treated as a discrete phase under an Eulerian-Lagrangian framework. In addition, a liquid column forms through the sieve trays; this downward liquid column narrows as it interacts with the upward gas flow. After the flue gas enters the major desulfurization zone, it flows upward (+y) in the region between the liquid column and the solid boundary of the FGD tower. As a result, flue gas near the liquid column may be rolled down toward the slurry layer, developing a vortex or circulation zone between any two sieve trays. This vortex structure results in a sufficiently large two-phase contact area and increases the number of times the flue gas interacts with the desulfurization liquid. The sieve trays thus improve two-phase mixing, which may improve the SO2 removal efficiency.
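As a quick geometric check of the tray specification quoted above, assuming the stated porosity is the open-area fraction of a circular tray, the implied number of holes per tray can be back-calculated:

```python
# Back-of-the-envelope check of the sieve-tray geometry quoted above,
# assuming porosity means the open-area fraction of a circular tray.
import math

D_tray = 0.6      # tray outside diameter, m
d_hole = 0.020    # hole diameter, m
porosity = 0.343  # open-area fraction

tray_area = math.pi * (D_tray / 2) ** 2
hole_area = math.pi * (d_hole / 2) ** 2
n_holes = porosity * tray_area / hole_area
print(f"Implied number of holes per tray: {n_holes:.0f}")  # ~309
```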

Keywords: Computational Fluid Dynamics (CFD), Eulerian-Eulerian Model, Flue Gas Desulfurization (FGD), perforated sieve tray

Procedia PDF Downloads 284
399 Simulation and Thermal Evaluation of Containers Using PCM in Different Weather Conditions of Chile: Energy Savings in Lightweight Constructions

Authors: Paula Marín, Mohammad Saffari, Alvaro de Gracia, Luisa F. Cabeza, Svetlana Ushak

Abstract:

Climate control represents an important issue in the energy consumption of buildings and the associated expenses, both during installation and operation. The climate control of a building depends on several factors, among them location, orientation, architectural elements, and the energy sources used. To study the thermal behaviour of a building set-up, the present study uses the EnergyPlus energy simulation program. In recent years, energy simulation programs have become important tools for evaluating the thermal and energy performance of buildings and facilities, and the need for new forms of passive conditioning in buildings for energy saving is critical. The use of phase change materials (PCMs) for heat storage applications has grown in importance due to their high efficiency. The climatic conditions of northern Chile, with high solar radiation, extreme temperature fluctuations ranging from -10°C to 30°C (city of Calama), and few cloudy days during the year, are therefore appropriate for taking advantage of solar energy and using passive systems in buildings. Moreover, the extensive mining activities in northern Chile encourage the use of large numbers of containers to house workers during shifts. These containers are built with lightweight construction systems and require heating during the night and cooling during the day, increasing HVAC electricity consumption. The use of PCM can improve thermal comfort and reduce energy consumption. The objective of this study was to evaluate the thermal and energy performance of containers of 2.5×2.5×2.5 m3 located in four cities of Chile: Antofagasta, Calama, Santiago, and Concepción. Lightweight envelopes, typically used in these building prototypes, were evaluated considering a container without PCM as the reference building and a container with PCM-enhanced envelopes as the test case; both have a door and a window in the same wall, oriented in two directions, north and south. To capture the thermal response of these containers in different seasons, the simulations covered a period of one year. The results show that, for all four cities, higher energy savings are obtained when the door and window face north, because of the higher incidence of solar radiation. HVAC consumption and percentage energy savings for the north-facing configuration are summarised as follows: in Antofagasta 47% of the heating energy could be saved, while in Calama and Concepción the biggest savings are in cooling, since the PCM eliminates almost all of the cooling demand (a sketch of how such percentages can be derived from paired simulation outputs is given below). Based on the simulation results, four containers have now been constructed with the same structural characteristics used in the simulations, that is, containers with and without PCM, each with a door and window in one wall. Two of these containers will be placed in Antofagasta and two in a copper mine near Calama, and all of them will be monitored for a period of one year. The simulation results will be validated against the experimental measurements and reported in the future.
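A minimal sketch of how such savings percentages could be computed from paired simulation outputs; the file names and column labels are assumptions for illustration, not EnergyPlus defaults:

```python
# Hedged sketch: percentage HVAC savings from paired simulation outputs.
# File paths and column names are assumptions, not EnergyPlus defaults.
import pandas as pd

ref = pd.read_csv("container_reference.csv")   # container without PCM
pcm = pd.read_csv("container_pcm.csv")         # container with PCM envelope

for load in ["heating_energy_J", "cooling_energy_J"]:
    e_ref, e_pcm = ref[load].sum(), pcm[load].sum()
    savings = 100.0 * (e_ref - e_pcm) / e_ref
    print(f"{load}: {savings:.1f}% saved")  # e.g., ~47% heating in Antofagasta
```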

Keywords: energy saving, lightweight construction, PCM, simulation

Procedia PDF Downloads 284
398 Optimization of Heat Source Assisted Combustion on Solid Rocket Motors

Authors: Minal Jain, Vinayak Malhotra

Abstract:

Solid propellant ignition consists of rapid and complex events comprising heat generation and heat transfer, with flames spreading over the entire burning surface area. Proper combustion, and thus propulsion, depends heavily on the modes of heat transfer and the cavity volume. Fire safety is an integral component of a successful rocket flight, the failure of which leads to enormous loss of resources, viz. money, time, and labor. When the propellant is ignited, thrust is generated and the casing heats up. This heat adds to the propellant heat and, if the casing is not at the proper orientation, it starts burning as well, potentially destroying the whole rocket. This has necessitated active research emphasizing a comprehensive study of the inter-energy relations involved for effective utilization of solid rocket motors in space missions. The present work focuses on one of the major influences on this detrimental burning: the presence of an external heat source in addition to an already ignited pilot heat source. The study is motivated by the need to ensure better combustion and fire safety, and is presented experimentally as a simplified small-scale model of a rocket carrying a solid propellant inside a cavity. The experimental setup comprises a paraffin wax candle as the pilot fuel and an incense stick as the external heat source. The candle is fixed, and the position and location of the incense stick are varied to investigate the influence of the external heat source on the pilot. Different configurations of the external heat source at varying separation distances are tested. Regression rates of the pilot thin solid fuel are recorded to fundamentally understand the non-linear heat and mass transfer that governs the phenomenon (a sketch of the regression-rate estimation follows below). Results to date indicate non-linear heat transfer accompanied by flaming transition at selected critical distances; with increasing separation distance, the effect drops in a non-monotonic trend. The results of this parametric study are likely to provide useful physical insight into the governing physics and to aid proper testing, validation, material selection, and design of solid rocket motors with enhanced safety.
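A minimal sketch of the regression-rate estimation, assuming the rate is taken as the slope of burned length versus time; the measurements below are hypothetical:

```python
# Hedged sketch: estimating the regression rate of the pilot fuel as the
# slope of a linear fit of burned length versus time. Data are hypothetical.
import numpy as np

t = np.array([0.0, 30.0, 60.0, 90.0, 120.0])   # time, s
x = np.array([0.0, 2.1, 4.3, 6.2, 8.5])        # burned length, mm

rate, intercept = np.polyfit(t, x, 1)          # slope = regression rate
print(f"Regression rate: {rate:.3f} mm/s")
```

Comparing this slope across incense-stick positions and separation distances is what reveals the non-monotonic trend reported above.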

Keywords: combustion, propellant, regression, safety

Procedia PDF Downloads 161
397 Inertial Spreading of Drop on Porous Surfaces

Authors: Shilpa Sahoo, Michel Louge, Anthony Reeves, Olivier Desjardins, Susan Daniel, Sadik Omowunmi

Abstract:

The microgravity environment of the International Space Station (ISS) was exploited to study the imbibition of water into a network of hydrophilic cylindrical capillaries on time and length scales long enough to observe details hitherto inaccessible under Earth gravity. When a drop touches a porous medium, it spreads as if laid on a composite surface. The surface first behaves as a hydrophobic material, as the liquid must penetrate pores filled with air. Once contact is established, some of the liquid is drawn into the pores by capillarity, resisted by viscous forces that grow with the length of the imbibed region. This process always begins with an inertial regime that is complicated by possible contact pinning. To study imbibition on Earth, time and distance must be shrunk to mitigate gravity-induced distortion, and these small scales make it impossible to observe the inertial and pinning processes in detail. Instead, on the International Space Station, astronaut Luca Parmitano slowly extruded water spheres until they touched one of nine capillary plates. The 12 mm diameter droplets were large enough for high-speed GX1050C video cameras on the top and side to visualize details near individual capillaries, and the recordings were long enough to capture the dynamics of the entire imbibition process. To investigate the role of contact pinning, a test matrix was produced consisting of nine kinds of porous capillary plates made of gold-coated brass treated with Self-Assembled Monolayers (SAM) that fixed the advancing and receding contact angles to known values. On the ISS, long-term microgravity allowed unambiguous observations of the role of contact line pinning during the inertial phase of imbibition. The high-speed videos of spreading and imbibition on the porous plates were analyzed using computer vision software to calculate the radius of the droplet contact patch with the plate and the height of the droplet versus time (a sketch of such an analysis follows below). These observations are compared with numerical simulations and with data that we obtained at the ESA ZARM free-fall tower in Bremen using a unique mechanism producing relatively large water spheres; similar results were observed. The data obtained from the ISS can be used as a benchmark for further numerical simulations in the field.
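A minimal sketch of such a frame-by-frame measurement with OpenCV, assuming simple Otsu thresholding; the threshold choice and spatial calibration are illustrative assumptions, not the authors' pipeline:

```python
# Hedged sketch of the video analysis step: contact-patch radius and
# droplet height per frame via simple thresholding; the mm-per-pixel
# scale and segmentation rule are assumptions, not the authors' pipeline.
import cv2

MM_PER_PX = 0.05  # assumed spatial calibration

cap = cv2.VideoCapture("side_view.avi")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        continue
    drop = max(contours, key=cv2.contourArea)   # largest blob = droplet
    x, y, w, h = cv2.boundingRect(drop)
    height_mm = h * MM_PER_PX                   # droplet height
    radius_mm = 0.5 * w * MM_PER_PX             # contact-patch radius proxy
    print(height_mm, radius_mm)
cap.release()
```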

Keywords: droplet imbibition, hydrophilic surface, inertial phase, porous medium

Procedia PDF Downloads 139
396 Analysis of Epileptic Electroencephalogram Using Detrended Fluctuation and Recurrence Plots

Authors: Mrinalini Ranjan, Sudheesh Chethil

Abstract:

Epilepsy is a common neurological disorder characterised by the recurrence of seizures. Electroencephalogram (EEG) signals are complex biomedical signals which exhibit nonlinear and nonstationary behavior. We use two methods, 1) Detrended Fluctuation Analysis (DFA) and 2) Recurrence Plots (RP), to capture this complex behavior of EEG signals. DFA quantifies fluctuations around local linear trends, and the scale invariance of these signals is well captured in the multifractal characterisation it provides. Analysis of long-range correlations is vital for understanding the dynamics of EEG signals, and correlation properties in the EEG signal are quantified by the calculation of a scaling exponent. We report the existence of two scaling behaviours in epileptic EEG signals, quantifying short- and long-range correlations. To illustrate this, we perform DFA on ictal (seizure) and interictal (seizure-free) datasets of different patients in different channels (a minimal DFA sketch follows below). We compute the short-term and long-term scaling exponents and report a decrease in the short-range scaling exponent during seizure compared to the pre-seizure period, with a subsequent increase post-seizure, while the long-term scaling exponent increases during seizure activity. Our calculated long-term scaling exponent lies between 0.5 and 1, pointing to power-law behaviour of long-range temporal correlations (LRTC). We perform this analysis for multiple channels and report similar behaviour. We find an increase in the long-term scaling exponent during seizure in all channels, which we attribute to an increase in persistent LRTC during seizure. The magnitude of the scaling exponent and its distribution across channels can help to better identify the areas of the brain most affected during seizure activity. The nature of epileptic seizures varies from patient to patient. To illustrate this, we report an increase in the long-term scaling exponent for some patients, which is also complemented by the recurrence plots (RP). An RP is a graph that shows the time indices at which a dynamical state recurs. We perform Recurrence Quantification Analysis (RQA) and calculate RQA parameters such as diagonal length, entropy, recurrence rate, and determinism for the ictal and interictal datasets. We find that the RQA parameters increase during seizure activity, indicating a transition. We observe that RQA parameters are generally higher during the seizure period than post-seizure, whereas for some patients the post-seizure values exceeded those during seizure. We attribute this to the varying nature of seizures in different patients, indicating a different route or mechanism during the transition. Our results can help in a better understanding and characterisation of epileptic EEG signals through nonlinear analysis.
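A minimal numpy sketch of DFA as described above; the window sizes are illustrative, and the white-noise signal stands in for an EEG channel:

```python
# Minimal numpy sketch of Detrended Fluctuation Analysis: the scaling
# exponent alpha is the slope of log F(n) versus log n. Window sizes
# are illustrative; the signal here is a stand-in for an EEG channel.
import numpy as np

def dfa(signal, window_sizes):
    y = np.cumsum(signal - np.mean(signal))   # integrated profile
    fluctuations = []
    for n in window_sizes:
        f2 = []
        for i in range(len(y) // n):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear trend
            f2.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    # Slope of the log-log fit is the scaling exponent alpha.
    return np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)[0]

rng = np.random.default_rng(0)
eeg = rng.standard_normal(4096)               # white noise: expect alpha ~ 0.5
print(dfa(eeg, window_sizes=[16, 32, 64, 128, 256]))
```

Fitting the short and long window ranges separately yields the two scaling exponents discussed above.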

Keywords: detrended fluctuation, epilepsy, long range correlations, recurrence plots

Procedia PDF Downloads 176
395 Biofiltration Odour Removal at Wastewater Treatment Plant Using Natural Materials: Pilot Scale Studies

Authors: D. Lopes, I. I. R. Baptista, R. F. Vieira, J. Vaz, H. Varela, O. M. Freitas, V. F. Domingues, R. Jorge, C. Delerue-Matos, S. A. Figueiredo

Abstract:

Deodorization is nowadays a necessity in wastewater treatment plants. Nitrogen and sulphur compounds, volatile fatty acids, aldehydes, and ketones are responsible for the unpleasant odours, with ammonia, hydrogen sulphide, and mercaptans being the most common pollutants. Although chemical treatments of the extracted air are efficient, they are more expensive than biological treatments, mainly due to the use of chemical reagents (commonly sulphuric acid, sodium hypochlorite, and sodium hydroxide). Biofiltration offers the advantage of avoiding the use of reagents (nutrients are added only in some cases to increase treatment efficiency) and can be considered a sustainable process when the packing medium used is of natural origin. In this work, the application of locally available natural materials was studied both at laboratory scale and at pilot scale in a real wastewater treatment plant. The materials selected for this study were indigenous Portuguese forest materials derived from eucalyptus and pinewood, such as woodchips and bark; coconut fiber was also used for comparison purposes. Their physico-chemical characterization was performed: density, moisture, pH, buffering capacity, and water retention capacity. Laboratory studies involved batch adsorption experiments for ammonia and hydrogen sulphide removal and evaluation of microbiological activity. Four pilot-scale biofilters (1 cubic meter each) were installed at a local wastewater treatment plant treating odours from the effluent-receiving chamber. Each biofilter contained a different packing material consisting of mixtures of eucalyptus bark, pine woodchips, and coconut fiber, with added buffering agents and nutrients. The odour treatment efficiency was monitored over time, as were other operating parameters. The pilot-scale operation suggested that, among the processes involved in biofiltration (adsorption, absorption, and biodegradation), the first dominates at the beginning, while the biofilm is developing. When the biofilm is fully established and the adsorption capacity of the material is reached, biodegradation becomes the most relevant odour removal mechanism. High odour and hydrogen sulphide removal efficiencies were achieved throughout the testing period (over 6 months), confirming the suitability of the selected materials, and of the mixtures prepared from them, for biofiltration applications.

Keywords: ammonia and hydrogen sulphide removal, biofiltration, natural materials, odour control in wastewater treatment plants

Procedia PDF Downloads 302
394 Investigation of Processing Conditions on Rheological Features of Emulsion Gels and Oleogels Stabilized by Biopolymers

Authors: M. Sarraf, J. E. Moros, M. C. Sánchez

Abstract:

Oleogels are self-standing systems able to trap edible liquid oil in a three-dimensional network, helping to reduce fat content through the crystallization of oleogelators. There are different routes to oleogelation and oil structuring, including direct dispersion, structured biphasic systems, oil sorption, and the indirect (emulsion-template) method. The selection of processing conditions, as well as the composition of the oleogel, is essential to obtain a stable oleogel with characteristics suited to its purpose. In this sense, polysaccharides are among the ingredients widely used in food products to produce oleogels and emulsions. Basil seed gum (BSG), from Ocimum basilicum, is a novel native polysaccharide for the food industry, with high viscosity and pseudoplastic behavior owing to its high molecular weight. Proteins can also stabilize oil in water due to the presence of amino and carboxyl moieties that confer surface activity. Whey proteins are widely used in the food industry because they are cheap, readily available ingredients with valuable nutritional and functional characteristics, serving as emulsifying, gelling, thickening, and water-binding agents. In general, protein-polysaccharide interactions have a significant effect on food structures and their stability, such as the texture of dairy products, by controlling the interactions in macromolecular systems. Edible oleogels used for oil structuring also enable the targeted delivery of components trapped in the structural network; the development of efficient oleogels is therefore important to the food industry. A complete understanding of the key factors that affect the formation and stability of the emulsion, such as the oil-phase ratio, processing conditions, and biopolymer concentrations, can provide crucial information for producing a suitable oleogel. In this research, the effects of oil concentration and of the pressure used to prepare the emulsion prior to obtaining the oleogel were evaluated through analysis of droplet size and of the rheological properties of the resulting emulsions and oleogels. The results show that emulsions prepared in the high-pressure homogenizer (HPH) at higher pressures have smaller droplet sizes and greater uniformity in the size distribution curve. Regarding the rheological characteristics of the emulsions and oleogels obtained, the predominantly elastic character of the systems is notable: they present storage modulus values higher than the loss modulus and show an important plateau zone, typical of structured systems. Likewise, steady-state viscous flow tests on both emulsions and oleogels confirm that the homogenization pressure is an important factor in obtaining emulsions with adequate droplet size and, subsequently, a suitable oleogel. Thus, various routes for trapping oil inside a biopolymer matrix with adjustable mechanical properties could be applied to create the three-dimensional network needed to absorb oil and form an oleogel.

Keywords: basil seed gum, particle size, viscoelastic properties, whey protein

Procedia PDF Downloads 66
393 Systematic Identification of Noncoding Cancer Driver Somatic Mutations

Authors: Zohar Manber, Ran Elkon

Abstract:

Accumulation of somatic mutations (SMs) in the genome is a major driving force of cancer development. Most SMs in the tumor's genome are functionally neutral; however, some damage critical processes and provide the tumor with a selective growth advantage (termed cancer driver mutations). Current research on the functional significance of SMs is mainly focused on finding alterations in protein-coding sequences. However, the exome comprises only 3% of the human genome, and thus SMs in the noncoding genome significantly outnumber those that map to protein-coding regions. Although our understanding of noncoding driver SMs is very rudimentary, it is likely that disruption of regulatory elements in the genome is an important, yet largely underexplored, mechanism by which somatic mutations contribute to cancer development. The expression of most human genes is controlled by multiple enhancers, and therefore it is conceivable that regulatory SMs are distributed across different enhancers of the same target gene. Yet, to date, most statistical searches for regulatory SMs have considered each regulatory element individually, which may reduce statistical power. The first challenge in considering the cumulative activity of all the enhancers of a gene as a single unit is to map enhancers to their target promoters. Such mapping defines for each gene its set of regulating enhancers (termed its "set of regulatory elements" (SRE)). Considering the multiple enhancers of each gene as one unit holds great promise for enhancing the identification of driver regulatory SMs; however, the success of this approach depends greatly on the availability of comprehensive and accurate enhancer-promoter (E-P) maps. To date, the discovery of driver regulatory SMs has also been hindered by insufficient sample sizes. In this study, we analyzed more than 2,500 whole-genome sequencing (WGS) samples provided by The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) in order to identify such driver regulatory SMs. Our analyses took the combinatorial aspect of gene regulation into account by considering all the enhancers that control the same target gene as one unit, based on E-P maps from three genomics resources. The identification of candidate driver noncoding SMs is based on their recurrence: we searched for SREs of genes that are "hotspots" for SMs, that is, that accumulate SMs at a significantly elevated rate. To test the statistical significance of the recurrence of SMs within a gene's SRE, we used both global and local background mutation rates (a sketch of such a test follows below). Using this approach, we detected numerous SM "hotspots" in seven different cancer types. To support the functional significance of these recurrent noncoding SMs, we further examined their association with the expression level of their target gene, using gene expression data provided by the ICGC and TCGA for samples that were also analyzed by WGS.
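A minimal sketch of such a recurrence test for a single gene's SRE, assuming a Poisson background model; all counts and rates below are hypothetical, not values from the TCGA/ICGC analysis:

```python
# Hedged sketch of a recurrence test for one gene's SRE: under a
# background mutation-rate model, the p-value is the Poisson tail
# probability of observing at least the counted number of SMs.
# All numbers are hypothetical placeholders.
from scipy.stats import poisson

observed_sms = 17          # SMs falling in the gene's pooled enhancers
sre_length_bp = 12_000     # total length of the gene's SRE
n_samples = 2_500          # WGS samples analyzed
bg_rate = 2e-7             # background mutations per bp per sample

expected = bg_rate * sre_length_bp * n_samples
p_value = poisson.sf(observed_sms - 1, expected)  # P(X >= observed)
print(f"expected {expected:.1f}, observed {observed_sms}, p = {p_value:.2e}")
```

Repeating the test with a locally estimated bg_rate, as the abstract describes, guards against regional variation in mutation rate along the genome.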

Keywords: cancer genomics, enhancers, noncoding genome, regulatory elements

Procedia PDF Downloads 104
392 Increase in the Shelf Life of Anchovy (Engraulis ringens) from Flaying then Bleeding in a Sodium Citrate Solution

Authors: Santos Maza, Enzo Aldoradin, Carlos Pariona, Eliud Arpi, Maria Rosales

Abstract:

The objective of this study was to investigate the effect of flaying and then bleeding anchovy (Engraulis ringens) immersed in a sodium citrate solution. Anchovy is a pelagic fish that deteriorates readily due to its high content of polyunsaturated fatty acids. Within the Peruvian food industry, the shelf life of frozen anchovy is only 6 months; this short duration is a barrier to its use for direct human consumption, so almost all anchovy captured by the fishing industry ends up in fishmeal production. We offer an alternative to the typical production process in order to increase shelf life. In the present study, 100 kg of anchovies were captured and immediately mixed with ice on the ship, maintaining high sensory quality (e.g., blue coloration on the back) and arriving for processing less than 2 h after capture. Anchovies with a fat content of 3% were immediately flayed (reducing subcutaneous fat), beheaded, gutted, and bled (removing hemoglobin) by immersion in water (control) or in a 2.5% sodium citrate solution (treatment), then frozen at -30 °C for 8 h in 2 kg batches. Subsequent glazing and storage at -25 °C for 14 months completed the experimental protocol. The peroxide value (PV), acidity (A), fatty acid profile (FAP), thiobarbituric acid reactive substances (TBARS), heme iron (HI), pH, and sensory attributes of the samples were evaluated monthly. The PV, TBARS, A, pH, and sensory analyses displayed significant differences (p<0.05) between treatment and control samples, with the sodium citrate-treated samples showing better preservation. Specifically, at the beginning of the study, the flayed, beheaded, gutted, and bled anchovies displayed a low fat content (1.5%) with moderate PV, A, and TBARS values, and were not rejected by sensory analysis. HI values and the FAP displayed varying behavior; however, the HI results did not reveal a decreasing trend, indicating that iron was maintained as heme iron and did not convert to non-heme iron, which is known to be the primary catalyst of lipid oxidation in fish. According to the FAP results, polyunsaturated fatty acids (PUFA) were the most abundant, followed by saturated fatty acids (SFA) and then monounsaturated fatty acids (MUFA). According to the sensory analysis, the shelf life of the flayed, beheaded, and gutted anchovy (control and treatment) was 14 months. This shelf life was reached at laboratory level because high-quality anchovies were used and immediately flayed, beheaded, gutted, bled, and frozen; it is therefore possible to maintain the quality of anchovies for a long time. Overall, this method displayed a large increase in shelf life relative to that commonly seen for anchovies in this industry. However, these results should be extrapolated to industrial scales to propose better processing conditions and improve the quality of anchovy for direct human consumption.

Keywords: sodium citrate solution, heme iron, polyunsaturated fatty acids, shelf life of frozen anchovy

Procedia PDF Downloads 294
391 Assessment of Environmental Mercury Contamination from an Old Mercury Processing Plant 'Thor Chemicals' in Cato Ridge, KwaZulu-Natal, South Africa

Authors: Yohana Fessehazion

Abstract:

Mercury is a prominent example of a heavy metal contaminant in the environment, and it has been extensively investigated for its potential health risk to humans and other organisms. In South Africa, massive mercury contamination occurred in the 1980s, when an England-based mercury reclamation plant relocated to Cato Ridge, KwaZulu-Natal Province, and discharged mercury waste into the Mngceweni River. This discharge resulted in mercury concentrations exceeding acceptable levels in the Mngceweni River, the Umgeni River, and the hair of nearby villagers. The issue raised the alarm, and over the years several environmental assessments reported the dire environmental crisis caused by Thor Chemicals (now known as Metallica Chemicals) and urged the immediate removal of the roughly 3,000 tons of mercury waste stored in the factory's storage facility for over two decades. Recently, the theft of containers of toxic material from the Thor Chemicals warehouse, and a subsequent fire that ravaged the facility, have put the factory further in the spotlight and escalated the urgency of removing the remaining mercury waste. This project aims to investigate the mercury contamination leaking from the old Thor Chemicals mercury processing plant. The focus will be on sediments, water, terrestrial plants, and aquatic weeds, notably water hyacinth, in the nearby water systems of the Mngceweni River, the Umgeni River, and the Inanda Dam, as bio-indicators and phytoremediators of mercury pollution. Samples will be collected in spring, around October, when conditions favour the microbial methylation of mercury incorporated in sediments and when some aquatic weeds, particularly water hyacinth, are in bloom. Samples of soil, sediment, water, terrestrial plants, and aquatic weeds will be collected per sampling site from the point of source (Thor Chemicals), the Mngceweni River, the Umgeni River, and the Inanda Dam. One-way analysis of variance (ANOVA) tests will be conducted to determine any significant differences in Hg concentration among the sampling sites, followed by a Least Significant Difference post hoc test to determine whether mercury contamination varies with gradient distance from the source of pollution (a sketch of this comparison follows below). Flow injection atomic spectrometry (FIAS) analysis will also be used to compare mercury sequestration between different plant tissues (roots and stems). Principal component analysis is envisaged to determine the relationship between the source of mercury pollution and each of the sampling points (the Umgeni and Mngceweni Rivers and the Inanda Dam). All Hg values will be expressed in µg/L or µg/g in order to compare the results with previous studies and regulatory standards. Sediments are expected to have relatively higher levels of Hg than soils, and aquatic macrophytes such as water hyacinth are expected to accumulate higher concentrations of mercury than terrestrial plants and crops.
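A minimal sketch of the planned site comparison using a one-way ANOVA with scipy; the concentration values below are hypothetical, not measured data:

```python
# Hedged sketch of the planned site comparison: one-way ANOVA on Hg
# concentrations (ug/g) across sampling sites; values are hypothetical.
from scipy.stats import f_oneway

thor_point = [310.2, 295.7, 322.4, 301.1]   # point of source
mngceweni  = [45.3, 51.8, 39.6, 48.2]
umgeni     = [12.1, 9.7, 14.3, 11.0]
inanda_dam = [3.2, 2.8, 4.1, 3.5]

stat, p = f_oneway(thor_point, mngceweni, umgeni, inanda_dam)
print(f"F = {stat:.1f}, p = {p:.2e}")  # p < 0.05 -> proceed to LSD post hoc
```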

Keywords: mercury, phytoremediation, Thor Chemicals, water hyacinth

Procedia PDF Downloads 222