Search results for: power loss cost
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14041


481 Challenges in Environmental Governance: A Case Study of Risk Perceptions of Environmental Agencies Involved in Flood Management in the Hawkesbury-Nepean Region, Australia

Authors: S. Masud, J. Merson, D. F. Robinson

Abstract:

The management of environmental resources requires the engagement of a range of stakeholders, including public/private agencies and different community groups, to implement sustainable conservation practices. A challenge that is often ignored is the analysis of the agencies involved and their power relations. One of the barriers identified is the difference in risk perceptions among the agencies involved, which leads to disjointed efforts in assessing and managing risks. Wood et al. (2012) explain that it is important to have an integrated approach to risk management in which decision makers address stakeholder perspectives. This is critical for an effective risk management policy. This abstract is part of a PhD research project that looks into barriers to flood management under a changing climate and intends to identify bottlenecks that create maladaptation. Experiences are drawn from international practices in the UK and examined in the context of Australia by exploring flood governance in a highly flood-prone region: the Hawkesbury-Nepean catchment as a case study. In this research study several aspects of governance and management are explored: (i) the complexities created by the way different agencies are involved in assessing flood risks; (ii) different perceptions of the acceptable flood risk level; (iii) perceptions of community engagement in defining the acceptable flood risk level; (iv) views on a holistic flood risk management approach; and (v) challenges of a centralised information system. The study concludes that the complexity of managing a large catchment is exacerbated by differences in the way professionals perceive the problem. 
This has led to: (a) different standards for acceptable risks; (b) inconsistent attempts to set up a regional-scale flood management plan beyond jurisdictional boundaries; (c) the absence of a regional-scale agency with a licence to share and update information; and (d) a lack of forums for dialogue with insurance companies to ensure an integrated approach to flood management. The research takes the Hawkesbury-Nepean catchment as a case example and draws on literary evidence from around the world. In addition, conclusions were extrapolated from eighteen semi-structured interviews with agencies involved in flood risk management in the Hawkesbury-Nepean catchment of NSW, Australia. The outcome of this research is a better understanding of the complexity of assessing risks against a rapidly changing climate, contributing towards the development of effective risk communication strategies, thus enabling better management of floods and an increased level of support from insurance companies, real-estate agencies, state and regional risk managers, and the affected communities.

Keywords: adaptive governance, flood management, flood risk communication, stakeholder risk perceptions

Procedia PDF Downloads 265
480 Decentralized Forest Policy for Natural Sal (Shorea robusta) Forests Management in the Terai Region of Nepal

Authors: Medani Prasad Rijal

Abstract:

The study outlines the impacts of decentralized forest policy on natural Sal (Shorea robusta) forests in the Terai region of Nepal. The government has implemented a community forestry program to manage forest resources and improve the livelihood of local people collectively. Forest management authorities, such as the rights to conserve, manage, develop, and use forest resources, were shifted to the local communities; however, ownership of the forestland was retained by the government. Local communities took decisions on the harvesting, distribution, and sale of forest products, fixing the prices independently. The local communities set low prices for forest products and distributed them among user households in the name of collective decision-making. This low valuation devalues the worth of the forest products. Therefore, the study hypothesized that decision-making capacities are equally prominent next to decentralized policy and program formulation. To accomplish the study, individual and group-level discussions and questionnaire survey methods were applied with executive committee members and user households. The study revealed that the local institution, the Community Forest User Group (CFUG) committee, normally took decisions on a consensus basis. Considering the access and purchasing capacity of user households with poor economic backgrounds, a low pricing mechanism for forest products has been practised, even though Sal timber is far more expensive in the local market. The local communities believed that the low pricing mechanism made products accessible to all user households, from poor to better-off. However, the analysis of forest product distribution contradicts this assumption, as most of the Sal timber, the most valuable product of the community forest, was purchased only by a limited number of households in better economic conditions. 
Since the Terai region is socio-economically heterogeneous, better-off households always have higher purchasing capacity and a greater possibility of capturing timber benefits because of the low price mechanism. On the other hand, the minimum price rate of forest products contributes little to community fund collection. Consequently, it provides poor support for carrying out poverty alleviation activities for poor people. The local communities have fixed the Sal timber price at around one-third of the normal market price, which is itself strong evidence of forest product devaluation. Finally, the study concluded that capacity building of local executives as the decision-makers of natural Sal forests is equally indispensable next to policy and program formulation for effective decentralized forest management. A unilateral decentralized forest policy may devalue forest products rather than devolving power to the local communities and empowering them.

Keywords: community forestry program, decentralized forest policy, Nepal, Sal forests, Terai

Procedia PDF Downloads 319
479 Characteristics-Based LQ-Control of Cracking Reactor by Integral Reinforcement

Authors: Jana Abu Ahmada, Zaineb Mohamed, Ilyasse Aksikas

Abstract:

The linear quadratic control of systems of hyperbolic first-order partial differential equations (PDEs) is presented. The aim of this research is to control chemical reactions. This is achieved by converting the PDE system to ordinary differential equations (ODEs) using the method of characteristics, reducing the system so that it can be controlled by integral reinforcement learning. The designed controller is applied to a catalytic cracking reactor. Background: Transport-reaction systems cover a large class of chemical and biochemical processes. They are best described by nonlinear PDEs derived from mass and energy balances. The main application considered in this work is the catalytic cracking reactor. Indeed, the cracking reactor is widely used to convert high-boiling, high-molecular-weight hydrocarbon fractions of petroleum crude oils into more valuable gasoline, olefinic gases, and other products. On the other hand, control of PDE systems is an important and rich area of research. One of the main control techniques is feedback control. This type of control utilizes information coming from the system to correct its trajectories and drive it to a desired state. Moreover, feedback control rejects disturbances and reduces the effects of variation in the plant parameters. Linear-quadratic control is a feedback control since the developed optimal input is expressed as feedback on the system state to exponentially stabilize and drive a linear plant to the steady state while minimizing a cost criterion. The integral reinforcement learning policy iteration technique is a strong method that solves the linear quadratic regulator problem for continuous-time systems online in real time, using only partial information about the system dynamics (i.e., the drift dynamics A of the system need not be known), and without requiring measurements of the state derivative. This is, in effect, a direct (i.e., no system identification procedure is employed) adaptive control scheme for partially unknown linear systems that converges to the optimal control solution. Contribution: The goal of this research is to develop a characteristics-based optimal controller for a class of hyperbolic PDEs and apply the developed controller to a catalytic cracking reactor model. In the first part, an algorithm to control a class of hyperbolic PDE systems is developed. The method of characteristics is employed to convert the PDE system into a system of ODEs; the control problem is then solved along the characteristic curves, and the reinforcement technique is implemented to find the state-feedback matrix. In the second part, the developed algorithm is applied to the important application of a catalytic cracking reactor. The main objective is to use the inlet fraction of gas oil as a manipulated variable to drive the process state towards desired trajectories. The outcome of this challenging research could provide a significant technological innovation for the gas industries, since the catalytic cracking reactor is one of the most important conversion processes in petroleum refineries.
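The core of the design described above is the standard continuous-time linear quadratic regulator, whose optimal feedback comes from the algebraic Riccati equation. A minimal scalar sketch (the plant numbers are illustrative, not the reactor model from this work):

```python
import math

def lqr_scalar(a: float, b: float, q: float, r: float):
    """Solve the scalar continuous-time LQR problem for dx/dt = a*x + b*u
    with cost J = integral of (q*x^2 + r*u^2) dt.

    The algebraic Riccati equation 2*a*p - (b**2 / r)*p**2 + q = 0 has the
    stabilizing root taken below; the optimal feedback is u = -k*x, k = b*p/r.
    """
    p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    k = b * p / r
    return p, k

# Illustrative unstable plant: a = 1, b = 1, with unit weights q = r = 1.
p, k = lqr_scalar(1.0, 1.0, 1.0, 1.0)
closed_loop = 1.0 - 1.0 * k  # a - b*k; negative, so the loop is stable
```

In the integral reinforcement learning version, the same Riccati solution is approached iteratively from measured cost data along trajectories, without knowing the drift term `a`.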

Keywords: PDEs, reinforcement iteration, method of characteristics, Riccati equation, cracking reactor

Procedia PDF Downloads 70
478 Changes in Attitudes of State Towards Orthodox Church: Greek Case after Eurozone Crisis in Alexis Tsipras Era

Authors: Zeynep Selin Balci, Altug Gunal

Abstract:

Religion has always had an effect on the policies of states. Where religion has a central role in defining identity, especially at the moment of becoming an independent state, the bond between religious authority and state cannot easily be broken. As Greece acquired independence from the Ottoman Empire at the same time as it created its own church, the Church of Greece, which declared its independence from the Greek Orthodox Patriarchate in Istanbul, the new church became an important part of Greek national identity. As the Church has the ability to influence Greeks, its rituals, public appearances, and practices are used to provide support to the state. Although there have sometimes been controversies between church and state, the church has always been an integral part of the state, as evidenced by the fact that priests' salaries are paid from the state payroll and that priests are, in effect, civil servants. European Union membership, on the other hand, has had a changing impact on this relationship. This impact started to become more visible in 2000, when the then government decided to exclude the religion section from identity cards. The Church's reaction was to gather people by recalling their religious identity, followed by a redefinition of the content of nationality, which inspired nationalist fronts. After 2015, when the leftist coalition Syriza and its self-described atheist leader came to power, the situation for nationalists and the Church became more complicated, in addition to the economic crisis that started in 2010 and evolved into the Eurozone crisis, affecting not only Greece but also other members. Although the church did not have direct confrontations with the government, the fact that Tsipras refused to take the oath on the Bible created tensions, because it was not acceptable for a state whose Constitution starts ‘in the name of the Holy, Consubstantial and Indivisible Trinity’. 
Moreover, the austerity measures adopted to overcome the economic crisis, which affected the everyday life of citizens in terms of both prices and salaries, did not much harm the church's economic situation. Considering that the church is the second biggest landowner after the state and pays no taxes, the fact that it was exempt from austerity measures showed the government the necessity of finding a way to make the church contribute to a solution for the crisis. In 2018, when the government agreed with the head of the church on cutting priests off from the government payroll, which automatically meant ending priests' civil servant status, it created tensions both for the church and in society. As a result of the elections held in July 2019, Tsipras did not have the chance to apply the decision, as he left office. In light of these events, this study aims to analyze the position of the church in the economic crisis and its effects during the Tsipras term. To understand this sufficiently, it is necessary to look at the historical turning points of the Church's influence in Greeks' eyes.

Keywords: Eurozone crisis, Greece, Orthodox Church, Tsipras

Procedia PDF Downloads 109
477 An Exploration of the Experiences of Women in Polygamous Marriages: A Case Study of Matizha Village, Masvingo, Zimbabwe

Authors: Flora Takayindisa, Tsoaledi Thobejane, Thizwilondi Mudau

Abstract:

This study highlights what people in polygamous marriages face on a daily basis. It argues that there are more disadvantages for women in polygamous marriages than for their counterparts in monogamous relationships. The study further suggests that the patriarchal power structure seems to take a powerful and effective role in polygamous marriages in our societies, particularly in Zimbabwe, where this study took place. The study explored the intricacies of polygamous marriages and how these dominances can be resolved. The research is therefore presented through the ‘lived realities’ of the affected women in polygamous marriages in Gutu District, located in Masvingo Province of Zimbabwe. Polygamous marriages are practised in different societies. Some women living a polygamous lifestyle are emotionally and physically abused in their relationships. Evidence also suggests that children from polygamous marriages suffer psychologically when their fathers take other wives. Relationships within the family are very difficult because of the husband's seeming favouritism for one wife. Children are most affected by disputes between co-wives, and they often lack quality time with their fathers. There are mixed feelings about polygamous marriages. Some people condemn the practice as inhumane. However, consideration must be given to what it might mean to other people who have no choice of any other form of marriage. The other factor to note is that polygamous marriages are not always negative; some positive things do result from them. The study was conducted in a village called Matizha. A qualitative research approach was employed to stimulate awareness of the social, cultural, religious, and economic factors in polygamous marriages. This approach facilitates a unique understanding of the experiences of women in polygamous marriages, experiences that are both negative and positive. 
The qualitative research method enabled the respondents to answer openly when asked questions. The researcher utilised feminist theory in the study and employed guided interviews to acquire information from the participants. The study describes the participants who took part, how they were selected, ethical considerations, data collection, the interview process, and the research instruments. The data were obtained using a guided interview with all the respondents, of all ages, who are in polygamous marriages. The researcher presented the demographic information of the participants, and thereafter other aspects of the data, such as social factors, economic factors, and religious affiliation. The conclusions and recommendations are drawn from the four main themes that emerged from the discussions. Recommendations were made for women, for the policies and laws affecting women, and finally for future research. It is believed that the overall objectives of the study have been met and that the research questions have been answered based on the findings discussed.

Keywords: co-wives, egalitarianism, experiences, polyandry, polygamy, woman

Procedia PDF Downloads 240
476 Implementing a Comprehensive Emergency Care and Life Support Course in a Low- and Middle-Income Country Setting: A Survey of Learners in India

Authors: Vijayabhaskar Reddy Kandula, Peter Provost Taillac, Balasubramanya M. A., Ram Krishnan Nair, Gokul Toshnival, Vibhu Dhawan, Vijaya Karanam, Buffy Cramer

Abstract:

Introduction: The lack of Emergency Care Services (ECS) causes extensive and serious public health problems in low- and middle-income countries (LMICs). Many LMICs have ambulance services that allow the timely transfer of ill patients, but due to poor care during the ‘Golden Hour’, many otherwise preventable deaths occur. A lack of adequate training, as evidenced by a study in India, is a major reason for poor care during the ‘Golden Hour’. Adopting developed-country models, which include staffing specialty-trained doctors in emergency care, is neither feasible nor a guarantee of cost-effective ECS. Methods: Based on our assessment and the needs expressed by first-line doctors providing emergency care in 2014, Rajiv Gandhi Health Sciences University's JeevaRaksha Trust, in partnership with the University of Utah, USA, designed, piloted, and successfully implemented a 4-day Comprehensive Emergency Care and Life Support course (C-ECLS) for allopathic doctors. 1730 doctors completed the 4-day course between June 2014 and December 2020. Subsequently, we conducted a survey to investigate the utilization rates and usefulness of the training. 1662 doctors were contacted, but only 309 completed the survey. The respondents had the following designations: senior faculty (33%), junior faculty (25%), resident (16%), private practitioner (8%), medical officer (16%), and not working (11%). 51% were generalists and the rest were specialists (>30 specialties). Results: 97% (271/280) felt they were better doctors because of C-ECLS. 79% (244/309) reported that the training had helped to save a life, with specialists more likely to report this than generalists (91% vs. 68%, p<0.05). 64% agreed that they were more confident in managing COVID-19 symptomatic patients because of C-ECLS; 27% (77) were neutral and 9% (24) disagreed. 66% agreed that the training helped them to be confident in managing critically ill COVID-19 patients; 26% (72) were neutral and 8% (23) disagreed. 
Frequency of use of C-ECLS skills: hemorrhage control (70%), airway (67%), circulation skills (62%), safe transport and communication (60%), managing critically ill patients (58%), cardiac arrest (51%), trauma (49%), poisoning/animal bites/stings (44%), neonatal resuscitation (39%), breathing (36%), and post-partum hemorrhage and eclampsia (35%). Among those who used the skills, the majority (ranging from 88% to 94%) reported that they were able to apply the skill more effectively because of C-ECLS training. Conclusion: JeevaRaksha's C-ECLS is the world's first comprehensive training of this kind. It improves the confidence of front-line doctors and enables them to provide quality care during the ‘Golden Hour’ of an emergency. It also prepares doctors to manage unknown emergencies (e.g., COVID-19). C-ECLS was piloted in Morocco and Uzbekistan and implemented countrywide in Bhutan. C-ECLS is relevant to most settings and offers a replicable model across LMICs.

Keywords: comprehensive emergency care and life support, training, capacity building, low- and middle-income countries, developing countries

Procedia PDF Downloads 49
475 Metalorganic Chemical Vapor Deposition Overgrowth on the Bragg Grating for Gallium Nitride Based Distributed Feedback Laser

Authors: Junze Li, M. Li

Abstract:

Laser diodes fabricated from the III-nitride material system are emerging solutions for next-generation telecommunication systems and optical clocks based on Ca at 397 nm, Rb at 420.2 nm, and Yb at 398.9 nm combined with 556 nm. Most of the applications, such as communication systems and laser cooling, require single-longitudinal-mode lasers with very narrow linewidth and compact size. In this case, the GaN-based distributed feedback (DFB) laser diode is one of the most effective candidates, since lasers with gratings are known to operate with narrow spectra as well as high power and efficiency. Given the wavelength range, the period of a first-order diffraction grating is under 100 nm, and the realization of such gratings is technically difficult due to the narrow linewidth and the need for high-quality nitride overgrowth on the Bragg grating. Some groups have reported GaN DFB lasers with high-order distributed feedback surface gratings, which avoid the overgrowth. However, the coupling strength is generally lower than that obtained with a Bragg grating embedded into the waveguide within the GaN laser structure by two-step epitaxy. Therefore, the overgrowth-on-grating technology needs to be studied and optimized. Here we propose to fabricate the fine step-shaped structure of a first-order grating by nanoimprint lithography combined with inductively coupled plasma (ICP) dry etching, and then to overgrow a high-quality AlGaN film by metalorganic chemical vapor deposition (MOCVD). A series of gratings with different periods, depths, and duty ratios are designed and fabricated to study the influence of the grating structure on the nano-heteroepitaxy. Moreover, we observe the nucleation and growth process step by step to study the growth mode of nitride overgrowth on the grating, under the condition that the grating period is larger than the metal migration length on the surface. 
The AFM images demonstrate that a smooth AlGaN film surface is achieved, with an average roughness of 0.20 nm over 3 × 3 μm². The full width at half maximum (FWHM) of the (002) reflection in the XRD rocking curve is 278 arcsec for the AlGaN film, and the Al content of the film is 8% according to the XRD mapping measurement, in accordance with the design values. By observing samples with growth times of 200 s, 400 s, and 600 s, the growth model is summarized in the following steps: initially, nucleation is evenly distributed on the grating structure, as the migration length of Al atoms is low; then, AlGaN grows along the grating top surface; finally, the AlGaN film is formed by lateral growth. This work contributes to the realization of GaN DFB lasers by fabricating gratings and performing overgrowth on nano-grating-patterned substrates at wafer scale; the growth dynamics have been analyzed as well.

Keywords: DFB laser, MOCVD, nanoepitaxy, III-nitride

Procedia PDF Downloads 162
474 Armed Forces Special Powers Act and Human Rights in Nagaland

Authors: Khrukulu Khusoh

Abstract:

The strategies and tactics used by governments throughout the world to counter terrorism and insurgency over the past few decades include the declaration of states of siege or martial law, the enactment of anti-terrorist legislation, and the strengthening of judicial powers. Some of these measures have been more successful than others, but some have proved counterproductive, alienating the public from the authorities and further polarizing an already fractured political environment. Such cases of alienation and polarization can be seen in the northeastern states of India. The Armed Forces (Special Powers) Act, which was introduced to curb insurgency in the remote jungles of far-flung areas, has remained a telling tale of agony in northeast India. Grievous trauma to humans through encounter killings, custodial deaths, unwarranted torture, and the exploitation of women and children in several ways has been reported in Nagaland, Manipur, and other northeastern states where the Indian army has been exercising powers under the Armed Forces (Special Powers) Act. While terrorism and insurgency are destructive of human rights, counter-terrorism does not necessarily restore and safeguard them. This special law has not proven effective, particularly in dealing with terrorism and insurgency. The insurgency has persisted in the state of Nagaland even after sixty years, notwithstanding the presence of a good number of special laws. There is a need to fight elements that threaten the security of a nation, but the methods chosen should be measured; otherwise, the fight is lost. There has been no review of the effectiveness or failure of the act in realizing its intended purpose, nor any attempt on the part of the state to look critically at the violation of the rights of innocent citizens by state agencies. The Indian state keeps enacting laws, but none of these could be applied effectively in the absence of clarity of purpose. 
Therefore, every new law enacted time and again to deal with security threats has failed to bring any solution over the last six decades. The Indian state resorts to measures that yield nothing in terms of strategic benefits but are short-term victories that might result in long-term tragedies. Therefore, right-thinking citizens and human rights activists across the country feel that the introduction of the Armed Forces (Special Powers) Act was itself a violation of human rights and that its continuation is undesirable. What worries everyone is the arbitrary use, or rather misuse, of power by the Indian armed forces, particularly against the weaker sections of society, including women. After having been subjected to indiscriminate abuse of that law, the people of northeast India have been demanding its revocation for a long time. The present paper attempts to critically examine the violation of human rights under the Armed Forces (Special Powers) Act. It also attempts to bring out the impact of the Act on the Naga people.

Keywords: armed forces, insurgency, special laws, violence

Procedia PDF Downloads 478
473 Organizational Resilience in the Perspective of Supply Chain Risk Management: A Scholarly Network Analysis

Authors: William Ho, Agus Wicaksana

Abstract:

Anecdotal evidence from the last decade shows that the occurrence of disruptive events and uncertainties in the supply chain is increasing. The coupling of these events with the nature of an increasingly complex and interdependent business environment leads to devastating impacts that quickly propagate within and across organizations. For example, the recent COVID-19 pandemic increased the global supply chain disruption frequency by at least 20% in 2020 and is projected to have an accumulated cost of $13.8 trillion by 2024. This crisis has drawn attention to organizational resilience as a way to weather business uncertainty. However, the concept has been criticized for being vague and lacking a consistent definition, reducing its significance for practice and research. This study is intended to address that issue by providing a comprehensive review of the conceptualization, measurement, and antecedents of operational resilience discussed in the supply chain risk management (SCRM) literature. We performed a hybrid scholarly network analysis, combining citation-based and text-based approaches, on 252 articles published from 2000 to 2021 in top-tier journals selected on three parameters: AJG/ABS ranking, the UT Dallas and FT50 lists, and editorial board review. Specifically, we employed bibliographic coupling analysis in the research cluster formation stage and co-word analysis in the research cluster interpretation and analysis stage. Our analysis reveals three major research clusters of resilience research in the SCRM literature, namely (1) supply chain network design and optimization, (2) organizational capabilities, and (3) digital technologies. 
We portray the research process of the last two decades in terms of the exemplar studies, problems studied, commonly used approaches and theories, and solutions provided in each cluster. We then provide a conceptual framework on the conceptualization and antecedents of resilience based on studies in these clusters and highlight potential areas that need further study. Finally, we leverage the concept of abnormal operating performance to propose a new measurement strategy for resilience. This measurement overcomes the limitation of most current measurements, which are event-dependent and focus on the resistance or recovery stage without capturing the growth stage. In conclusion, this study provides a robust literature review through a scholarly network analysis that increases the completeness and accuracy of research cluster identification and analysis in understanding the conceptualization, antecedents, and measurement of resilience. It also enables a comprehensive review of resilience research in the SCRM literature by including research articles published during the pandemic and connecting this development with the plethora of articles published over the previous two decades. From the managerial perspective, this study provides practitioners with clarity on the conceptualization and critical success factors of firm resilience from the SCRM perspective.
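Bibliographic coupling, the measure used in the cluster formation stage, scores a pair of papers by the number of references they share. A minimal sketch with toy reference lists (the paper and reference IDs are illustrative, not from the reviewed corpus):

```python
from itertools import combinations

# Toy data: paper ID -> set of works it cites.
references = {
    "P1": {"R1", "R2", "R3"},
    "P2": {"R2", "R3", "R4"},
    "P3": {"R5"},
}

# Coupling strength for each pair = size of the intersection of reference sets.
coupling = {
    (a, b): len(references[a] & references[b])
    for a, b in combinations(sorted(references), 2)
}

# Pairs with nonzero coupling become weighted edges of the scholarly network,
# which is then partitioned into research clusters.
edges = {pair: weight for pair, weight in coupling.items() if weight > 0}
```

In practice, the resulting weighted network is fed to a community detection algorithm to form the clusters that co-word analysis then interprets.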

Keywords: supply chain risk management, organizational resilience, scholarly network analysis, systematic literature review

Procedia PDF Downloads 54
472 Chemical and Biomolecular Detection at a Polarizable Electrical Interface

Authors: Nicholas Mavrogiannis, Francesca Crivellari, Zachary Gagnon

Abstract:

The development of low-cost, rapid, sensitive, and portable biosensing systems is important for the detection and prevention of disease in developing countries, biowarfare/antiterrorism applications, environmental monitoring, point-of-care diagnostic testing, and basic biological research. Currently, the most established, commercially available, and widespread assays for portable point-of-care detection and disease testing are paper-based dipstick and lateral flow test strips. These paper-based devices are often small, cheap, and simple to operate. The last three decades in particular have seen an emergence of these assays in diagnostic settings for the detection of pregnancy, HIV/AIDS, blood glucose, influenza, urinary protein, cardiovascular disease, respiratory infections, and blood chemistries. Such assays are widely available largely because they are inexpensive, lightweight, portable, and simple to operate, and a few platforms are capable of multiplexed detection for a small number of sample targets. However, there is a critical need for sensitive, quantitative, and multiplexed detection capabilities for point-of-care diagnostics and for the detection and prevention of disease in the developing world that cannot be satisfied by current state-of-the-art paper-based assays. For example, applications including the detection of cardiac and cancer biomarkers and biothreat applications require sensitive multiplexed detection of analytes in the nM and pM range, and cannot currently be served by inexpensive portable platforms because of their lack of sensitivity, lack of quantitative capability, and often unreliable performance. In this talk, inexpensive label-free biomolecular detection at liquid interfaces using a newly discovered electrokinetic phenomenon known as fluidic dielectrophoresis (fDEP) is demonstrated. 
The electrokinetic approach exploits the electrical mismatches between two aqueous liquid streams forced to flow side by side in a microfluidic T-channel. In this system, one fluid stream is engineered to have a higher conductivity relative to its neighbor, which has a higher permittivity. When a “low” frequency (<1 MHz) alternating current (AC) electric field is applied normal to this fluidic electrical interface, the fluid stream with high conductivity displaces into the low-conductivity stream. Conversely, when a “high” frequency (20 MHz) AC electric field is applied, the high-permittivity stream deflects across the microfluidic channel. There is, however, a critical frequency sensitive to the electrical differences between the two fluid phases, the fDEP crossover frequency, between these two regimes, at which no fluid deflection is observed and the interface remains fixed when exposed to an external field. To perform biomolecular detection, two streams flow side by side in a microfluidic T-channel: one fluid stream carries an analyte of choice and the adjacent stream carries a specific receptor for the chosen target. The two fluid streams merge, and the fDEP crossover frequency is measured at different axial positions down the resulting liquid interface.

Keywords: biodetection, fluidic dielectrophoresis, interfacial polarization, liquid interface

Procedia PDF Downloads 430
471 Determination of Slope of Hilly Terrain by Using Proposed Method of Resolution of Forces

Authors: Reshma Raskar-Phule, Makarand Landge, Saurabh Singh, Vijay Singh, Jash Saparia, Shivam Tripathi

Abstract:

For any construction project, slope calculations are necessary in order to evaluate constructability on the site, such as the slope of parking lots, sidewalks, and ramps, the slope of sanitary sewer lines, and the slope of roads and highways. When slopes and grades are to be determined, designers are concerned with establishing proper slopes and grades for their projects in order to assess cut and fill volume calculations and determine inverts of pipes. There are several established instruments commonly used to determine slopes, such as the Dumpy level, Abney level or hand level, inclinometer, tacheometer, and the Henry method, and surveyors are very familiar with the use of these instruments to calculate slopes. However, these instruments have drawbacks that cannot be neglected in major surveying works. Firstly, they require expert surveyors and skilled staff. Accessibility, visibility, and accommodation in remote hilly terrain are difficult for these instruments and surveying teams. Determination of gentle slopes for road and sewer drainage construction in congested urban places with these instruments is also not easy. This paper aims to develop a method that requires minimum field work, minimum instruments, no high-end technology or software, and low cost. Using only basic, handy surveying accessories (a plane table with a fixed weighing machine, standard weights, an alidade, a tripod, and ranging rods), it should be able to determine the terrain slope in congested areas as well as in remote hilly terrain. Being simple and easy to understand and perform, people of the local rural area can easily be trained in the proposed method. The idea for the proposed method is based on the principle of resolution of weight components: when an object of standard weight ‘W’ is placed on an inclined surface with a weighing machine below it, the weighing machine measures the cosine component of its weight.
The slope can be determined from the relation between the true or actual weight and the apparent weight. A proper procedure is followed, which includes site location, centering and sighting work, fixing the whole set at the identified station, and finally taking the readings. A set of experiments for slope determination, covering mild and moderate slopes, was carried out by the proposed method and by the theodolite, both in a controlled environment (the college campus) and in an uncontrolled environment (an actual site). The slopes determined by the proposed method were compared with those determined by the established instrument. For example, it was observed that, for the same distances on a mild slope in the uncontrolled environment, the difference between the slope obtained by the proposed method and that obtained by the established method ranged from 4′ for a distance of 8 m to 2°15′20″ for a distance of 16 m. Thus, for mild slopes, the proposed method is suitable for distances of 8 m to 10 m. The proposed method correlates well with the established method (0.91 to 0.99) for the various combinations of mild and moderate slopes with controlled and uncontrolled environments.
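The weight-resolution principle above reduces to a one-line calculation: on a slope of angle θ, a weighing machine under a standard weight W reads the normal component W·cos θ, so θ = arccos(W_apparent / W). A minimal sketch (the weights and readings are illustrative, not data from the study):

```python
import math

def slope_angle_deg(true_weight, apparent_weight):
    """Slope angle (degrees) from the cosine weight component.

    On an incline of angle theta, a weighing machine under a body of
    true weight W reads the normal component W*cos(theta), so
    theta = arccos(W_apparent / W).
    """
    if not 0 < apparent_weight <= true_weight:
        raise ValueError("apparent weight must be in (0, W]")
    return math.degrees(math.acos(apparent_weight / true_weight))

# Illustrative reading: a 10 kg standard weight showing 9.80 kg on the incline
theta = slope_angle_deg(10.0, 9.80)  # about 11.5 degrees
```

Because the reading enters through a ratio, any consistent weight units work; accuracy is limited by the resolution of the weighing machine, which matches the paper's observation that the method suits mild slopes over short distances.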

Keywords: surveying, plane table, weight component, slope determination, hilly terrain, construction

Procedia PDF Downloads 71
470 Blue Economy and Marine Mining

Authors: Fani Sakellariadou

Abstract:

The Blue Economy includes all marine-based and marine-related activities. They correspond to established, emerging as well as unborn ocean-based industries. Seabed mining is an emerging marine-based activity; its operations depend particularly on cutting-edge science and technology. The 21st century will face a crisis in resources as a consequence of the world’s population growth and the rising standard of living. The natural capital stored in the global ocean is decisive for it to provide a wide range of sustainable ecosystem services. Seabed mineral deposits were identified as having a high potential for critical elements and base metals. They have a crucial role in the fast evolution of green technologies. The major categories of marine mineral deposits are deep-sea deposits, including cobalt-rich ferromanganese crusts, polymetallic nodules, phosphorites, and deep-sea muds, as well as shallow-water deposits including marine placers. Seabed mining operations may take place within continental shelf areas of nation-states. In international waters, the International Seabed Authority (ISA) has entered into 15-year contracts for deep-seabed exploration with 21 contractors. These contracts are for polymetallic nodules (18 contracts), polymetallic sulfides (7 contracts), and cobalt-rich ferromanganese crusts (5 contracts). Exploration areas are located in the Clarion-Clipperton Zone, the Indian Ocean, the Mid Atlantic Ridge, the South Atlantic Ocean, and the Pacific Ocean. Potential environmental impacts of deep-sea mining include habitat alteration, sediment disturbance, plume discharge, toxic compounds release, light and noise generation, and air emissions. 
They could cause burial and smothering of benthic species, health problems for marine species, biodiversity loss, reduced photosynthesis, behavioral changes and masking of acoustic communication for mammals and fish, bioaccumulation of heavy metals up the food web, a decrease in dissolved oxygen content, and climate change. An important concern related to deep-sea mining is our knowledge gap regarding deep-sea bio-communities. The ecological consequences for the remote, unique, fragile, and little-understood deep-sea ecosystems and their inhabitants are still largely unknown. The blue economy conceptualizes oceans as developing spaces supplying socio-economic benefits for current and future generations while also protecting, supporting, and restoring biodiversity and ecological productivity. In that sense, people should apply holistic management and assess the impacts of marine mining on ecosystem services, including the categories of provisioning, regulating, supporting, and cultural services. The variety in environmental parameters, the range in sea depth, the diversity in the characteristics of marine species, and the possible proximity to other existing maritime industries mean that the impacts of marine mining on the ability of ecosystems to support people and nature vary widely. In conclusion, the use of the untapped potential of the global ocean demands a responsible and sustainable attitude. Moreover, there is a need to change our lifestyle and move beyond the philosophy of single use. Living in a throw-away society based on a linear approach to resource consumption, humans are putting too much pressure on the natural environment. By applying modern, sustainable and eco-friendly approaches according to the principles of the circular economy, substantial savings of natural resources can be achieved. Acknowledgement: This work is part of the MAREE project, financially supported by Division VI of IUPAC.
This work has been partly supported by the University of Piraeus Research Center.

Keywords: blue economy, deep-sea mining, ecosystem services, environmental impacts

Procedia PDF Downloads 67
469 Positioning Mama Mkubwa Indigenous Model into Social Work Practice through Alternative Child Care in Tanzania: Ubuntu Perspective

Authors: Johnas Buhori, Meinrad Haule Lembuka

Abstract:

Introduction: Social work expands its boundary to accommodate indigenous knowledge and practice for better competence and services. In Tanzania, Mama Mkubwa Mkubwa (MMM, mother's elder sister) is an indigenous practice of alternative child care that represents other traditional practices across African societies known as Ubuntu practice. Ubuntu is African humanism, with values and approaches that are connected to social work. MMM focuses on using the elder sister of a deceased mother or father, or a trusted elder woman from the extended family or indigenous community, to provide alternative care to an orphan or vulnerable child. In the Ubuntu perspective, it takes a whole village or community to raise a child, meaning that every person in the community is responsible for child care. Methodology: A desk review method guided by Ubuntu theory was applied to enrich the study. Findings: MMM resembles the Ubuntu ideal of traditional protection of children in need as part of alternative child care throughout Tanzanian history. Social work practice, along with other formal alternative child care, was introduced in Tanzania during the colonial era in the 1940s. The socio-economic problems of the 1980s affected the country's formal social welfare system, and the HIV/AIDS pandemic then triggered the vulnerability of children and hampered the capacity of the formal sector to provide social welfare services, including alternative child care. For decades, AIDS has contributed to an influx of orphans and vulnerable children, which facilitated the re-emergence of traditional alternative child care at the community level, including MMM. MMM was strongly practiced in regions where the AIDS pandemic affected the community, such as Njombe, the Coastal region, and Kagera. Despite existing challenges, MMM has remained a remarkable alternative child care practice in both rural and urban communities, integrated with social welfare services.
Tanzania envisions a traditional mechanism of family or community environment for alternative child care, with the notion that institutional care sometimes fails to offer children all they need to become productive members of society, and it later becomes difficult for them to reintegrate into society. Implications for social work: MMM is compatible with social work through its use of strengths perspectives; MMM reflects the Ubuntu perspective on the ground of humane social work, using humane methods to achieve human goals. MMM further demonstrates the connectedness of those who care and those cared for, and the inextricable link between them, as an Ubuntu-inspired model of social work that views children from family, community, environmental, and spiritual perspectives. Conclusion: Social work and MMM are compatible at the micro and mezzo levels; thus, MMM can be applied in social work practice beyond Tanzania when properly designed and integrated into other systems. When MMM is applied in social work, alternative care has the potential not only to support children but also to empower families and communities. Since MMM is community-owned and voluntary-based, it can relieve the government, social workers, and other formal sectors of the annual cost burden of providing institutionalized alternative child care.

Keywords: ubuntu, indigenous social work, african social work, ubuntu social work, child protection, child alternative care

Procedia PDF Downloads 50
468 Company's Orientation and Human Resource Management Evolution in Technological Startup Companies

Authors: Yael Livneh, Shay Tzafrir, Ilan Meshoulam

Abstract:

Technological startup companies have been recognized as bearing tremendous potential for business and economic success. However, many entrepreneurs who produce promising innovative ideas fail to implement them as successful businesses. A key argument for such failure is the entrepreneurs' lack of competence in adapting the relevant level of formality of human resource management (HRM). The purpose of the present research was to examine multiple antecedents and consequences of HRM formality in growing startup companies. A review of the research literature identified two central components of HRM formality: HR control and professionalism. The effect of three contextual predictors was examined. The first was an intra-organizational factor: the development level of the organization. We drew on the differentiation between knowledge exploration and knowledge exploitation. At a given time, the organization chooses to focus on a specific mix of these orientations, a choice which requires an appropriate level of HRM formality in order to efficiently overcome the challenges. It was hypothesized that the mix of orientations of knowledge exploration and knowledge exploitation would predict HRM formality. The second predictor was the personal characteristics of the organization's leader. According to the idea of the blueprint effect of CEOs on HRM, it was hypothesized that the CEO's cognitive style would predict HRM formality. The third contextual predictor was an external organizational factor: the level of investor involvement. Drawing on agency theory and transaction cost economics, it was hypothesized that the level of investor involvement in general management and HRM would be positively related to HRM formality. The effect of formality on trust was examined directly, and indirectly through the mediating role of procedural justice. The research method included a time-lagged field study.
In the first study, data were obtained using three questionnaires, each directed to a different source: the CEO, the HR position-holder, and employees. 43 companies participated in this study. The second study was conducted approximately a year later. Data were collected again from the same sample using three questionnaires. 41 companies participated in the second study. The samples comprised technological startup companies. Both studies together included 884 respondents. The results indicated consistency between the two studies. HRM formality was predicted by the intra-organizational factor as well as by the personal characteristics of the CEO, but not at all by the external organizational context. Specifically, the organizational orientations were the greatest contributor to both components of HRM formality. The cognitive style predicted formality to a lesser extent. The investor's involvement was found not to have any predictive effect on HRM formality. The results indicated a positive contribution of formality to trust in HRM, mainly via the mediation of procedural justice. This study contributed a new concept for technological startup company development as a mixture of organizational orientations. Practical implications indicate that the level of HRM formality should be matched to that of the company's development. This match should be challenged and adjusted periodically by referring to the organization's orientation, relevant HR practices, and HR function characteristics. A relevant matching could further enhance trust and business success.

Keywords: control, formality, human resource management, organizational development, professionalism, technological startup company

Procedia PDF Downloads 244
467 The Background of Ornamental Design Practice: Theory and Practice Based Research on Ornamental Traditions

Authors: Jenna Pyorala

Abstract:

This research looks at the principles and purposes ornamental design has served in the field of textile design. Ornamental designs are characterized by richness of detail, abundance of elements, vegetative motifs and organic forms that flow harmoniously in complex compositions. Research on ornamental design is significant because ornaments have been overlooked and considered less meaningful and less aesthetically pleasing than minimalistic, modern designs. This is despite the fact that in many parts of the world ornaments have been an important part of cultural identification and expression for centuries. Ornament has been claimed to be superficial and merely a decorative way to hide the faults of designs. Such a generalization is an incorrect interpretation of the real purposes of ornament. Many ornamental patterns tell stories, present mythological scenes or convey symbolic meanings. Historically, ornamental decorations have represented ideas and characteristics such as abundance, wealth, power and personal magnificence. The production of fine ornaments required refined skill, an eye for intricate detail and perseverance in compiling complex elements into harmonious compositions. For this reason, ornaments have played an important role in the advancement of craftsmanship. Even though it has been claimed that people in the Western design world have lost their relationship to ornament, the relation has merely changed from the practice of a craftsman to the conceptualisation of a designer. With the help of new technological tools, the production of ornaments has become faster and more efficient, demanding less manual labour. Designers who commit to this style of organic forms and vegetative motifs embrace and respect nature by representing its organically growing forms and by following its principles.
The complexity of the designs is used as a way to evoke a sense of extraordinary beauty and to stimulate the intellect by freeing the mind from predetermined interpretations. Through the study of these purposes it can be demonstrated that complex and richer design styles are as valuable a part of the world of design as more modern design approaches. The study highlights the meaning of ornaments by presenting visual examples and literature research findings. The practice-based part of the project is the visual analysis of historical and cultural ornamental traditions, such as Indian chikan embroidery, Persian carpets, Art Nouveau and Rococo, according to a rubric created for the purpose. The next step is the creation of ornamental designs based on the key elements of the different styles. Theoretical and practical parts are woven together in this study, which respects the long traditions of ornament and highlights the importance of these design approaches to the field, in contrast to the more commonly preferred styles.

Keywords: cultural design traditions, ornamental design, organic forms from nature, textile design

Procedia PDF Downloads 211
466 The Development of Assessment Criteria Framework for Sustainable Healthcare Buildings in China

Authors: Chenyao Shen, Jie Shen

Abstract:

The rating system provides an effective framework for assessing building environmental performance and integrating sustainable development into building and construction processes, as it can be used as a design tool for developing appropriate sustainable design strategies and determining performance measures to guide sustainable design and decision-making processes. Healthcare buildings are resource (water, energy, etc.) intensive. To maintain high-cost operations and complex medical facilities, they require a great deal of hazardous and non-hazardous materials and stringent control of environmental parameters, and they are responsible for producing polluting emissions. Compared with other building types, the environmental impact of healthcare buildings over their full life cycle is particularly large. With broad recognition among designers and operators that energy use can be reduced substantially, many countries have set up their own green rating systems for healthcare buildings. Four green healthcare building evaluation systems are widely acknowledged in the world: the Green Guide for Health Care (GGHC), jointly organized by the United States HCWH and CMPBS in 2003; BREEAM Healthcare, issued by the British Building Research Establishment (BRE) in 2008; the Green Star-Healthcare v1 tool, released by the Green Building Council of Australia (GBCA) in 2009; and LEED Healthcare 2009, released by the United States Green Building Council (USGBC) in 2011. In addition, the German Sustainable Building Council (DGNB) has been developing the German Sustainable Building Evaluation Criteria (DGNB HC). In China, more and more scholars and policy makers have recognized the importance of sustainability assessment and have adapted some of these tools and frameworks.
China’s first comprehensive assessment standard for green building (the GBTs) was issued in 2006 (last updated in 2014), promoting sustainability in the built environment and raising awareness of environmental issues among architects, engineers, contractors and the public. However, healthcare buildings were not covered by the GBTs because of their complex medical procedures, strict requirements for the indoor/outdoor environment, and the energy consumption of various functional rooms. Learning from the experience of GGHC, BREEAM, and LEED HC above, China’s first assessment criteria for green hospital/healthcare buildings were finally released in December 2015. Combining both quantitative and qualitative assessment criteria, the standard highlights the differences between healthcare and other public buildings in meeting the functional needs of medical facilities and special groups. This paper focuses on the assessment criteria framework for sustainable healthcare buildings, for which the comparison of different rating systems is essential. Descriptive analysis is conducted together with cross-matrix analysis to reveal rich information on green assessment criteria in a coherent manner. The research intends to determine whether the green elements for healthcare buildings in China differ from those used in other countries, and how to improve the Chinese assessment criteria framework.

Keywords: assessment criteria framework, green building design, healthcare building, building performance rating tool

Procedia PDF Downloads 131
465 Non-Invasive Evaluation of Patients After Percutaneous Coronary Revascularization. The Role of Cardiac Imaging

Authors: Abdou Elhendy

Abstract:

Numerous studies have shown the efficacy of percutaneous coronary intervention (PCI) and coronary stenting in improving left ventricular function and relieving exertional angina. Furthermore, PCI remains the main line of therapy in acute myocardial infarction. Improvements in procedural techniques and new devices have resulted in an increased number of PCIs in patients with difficult and extensive lesions, multivessel disease, and total occlusions. Immediate and late outcomes may be compromised by acute thrombosis or the development of fibro-intimal hyperplasia. In addition, progression of coronary artery disease proximal or distal to the stent, as well as in non-stented arteries, is not uncommon. As a result, complications can occur, such as acute myocardial infarction, worsened heart failure, or recurrence of angina. In-stent restenosis can occur without symptoms or with atypical complaints, rendering the clinical diagnosis difficult. Routine invasive angiography is not appropriate as a follow-up tool due to the associated risk and cost and the limited functional assessment it provides. Exercise and pharmacologic stress testing are increasingly used to evaluate myocardial function, perfusion, and the adequacy of revascularization. The information obtained by these techniques provides important clues regarding the presence and severity of compromised myocardial blood flow. Stress echocardiography can be performed in conjunction with exercise or dobutamine infusion. Its diagnostic accuracy has been moderate, but the results provide excellent prognostic stratification. Adding myocardial contrast agents can improve imaging quality and allows assessment of both function and perfusion. Stress radionuclide myocardial perfusion imaging is an alternative means of evaluating these patients. The extent and severity of wall motion and perfusion abnormalities observed during exercise or pharmacologic stress are predictors of survival and of the risk of cardiac events.
According to current guidelines, stress echocardiography and radionuclide imaging are considered to have appropriate indications among patients after PCI who have cardiac symptoms and those who underwent incomplete revascularization. Stress testing is not recommended in asymptomatic patients, particularly early after revascularization. Coronary CT angiography is increasingly used and provides high sensitivity for the diagnosis of coronary artery stenosis. Average sensitivity and specificity for the diagnosis of in-stent stenosis in pooled data are 79% and 81%, respectively. Limitations include blooming artifacts and low feasibility in patients with small stents or thick struts. Anatomical and functional cardiac imaging modalities are the cornerstone of the assessment of patients after PCI and provide salient diagnostic and prognostic information. Current imaging techniques can serve as gatekeepers for coronary angiography, thus limiting the risk of invasive procedures to those who are likely to benefit from subsequent revascularization. Determining which modality to apply requires careful identification of the merits and limitations of each technique as well as the unique characteristics of each individual patient.

Keywords: coronary artery disease, stress testing, cardiac imaging, restenosis

Procedia PDF Downloads 143
464 The Effect of Ionic Liquid Anion Type on the Properties of TiO2 Particles

Authors: Marta Paszkiewicz, Justyna Łuczak, Martyna Marchelek, Adriana Zaleska-Medynska

Abstract:

In recent years, photocatalytic processes have been intensively investigated for the destruction of pollutants, hydrogen evolution, disinfection of water, air and surfaces, and the construction of self-cleaning materials (tiles, glass, fibres, etc.). Titanium dioxide (TiO2) is the most popular material used in heterogeneous photocatalysis due to its excellent properties, such as high stability, chemical inertness, non-toxicity and low cost. It is well known that the morphology and microstructure of TiO2 significantly influence its photocatalytic activity. These characteristics, as well as other physical and structural properties of photocatalysts, e.g., specific surface area or density of crystalline defects, can be controlled by the preparation route. In this regard, TiO2 particles can be obtained by sol-gel, hydrothermal and sonochemical methods, chemical vapour deposition and, alternatively, by ionothermal synthesis using ionic liquids (ILs). In TiO2 particle synthesis, ILs may play the role of a solvent, soft template, reagent, agent promoting reduction of the precursor, or particle stabilizer during the synthesis of inorganic materials. In this work, the effect of the IL anion type on the morphology and photoactivity of TiO2 is presented. The preparation of TiO2 microparticles with a spherical structure was successfully achieved by a solvothermal method using tetra-tert-butyl orthotitanate (TBOT) as the precursor. The reaction process was assisted by the ionic liquids 1-butyl-3-methylimidazolium bromide [BMIM][Br], 1-butyl-3-methylimidazolium tetrafluoroborate [BMIM][BF4] and 1-butyl-3-methylimidazolium hexafluorophosphate [BMIM][PF6]. Various molar ratios of each IL to TBOT (IL:TBOT) were chosen. For comparison, reference TiO2 was prepared by the same method without IL addition.
Scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), Brunauer-Emmett-Teller (BET) surface area analysis, NCHS elemental analysis, and FTIR spectroscopy were used to characterize the surface properties of the samples. The photocatalytic activity was investigated by means of phenol photodegradation in the aqueous phase as a model pollutant, as well as the formation of hydroxyl radicals based on detection of the fluorescent product of coumarin hydroxylation. The analysis results showed that the TiO2 microspheres had a spherical structure with diameters ranging from 1 to 6 µm. The TEM micrographs clearly showed that the particles comprised inter-aggregated crystals. It could also be observed that the IL-assisted TiO2 microspheres are not hollow, which provides additional information about the possible formation mechanism. Application of the ILs results in a rise in the photocatalytic activity as well as the BET surface area of TiO2 compared to pure TiO2. The results of the formation of 7-hydroxycoumarin indicated that the increased amount of ·OH produced at the surface of excited TiO2 for the TiO2_IL samples correlated well with the more efficient degradation of phenol. NCHS analysis showed that the ionic liquids remained on the TiO2 surface, confirming the structure-directing role of these compounds.

Keywords: heterogeneous photocatalysis, IL-assisted synthesis, ionic liquids, TiO2

Procedia PDF Downloads 257
463 Production of Medicinal Bio-active Amino Acid Gamma-Aminobutyric Acid In Dairy Sludge Medium

Authors: Farideh Tabatabaee Yazdi, Fereshteh Falah, Alireza Vasiee

Abstract:

Introduction: Gamma-aminobutyric acid (GABA) is a non-protein amino acid that is widely present in organisms. GABA is a pharmacologically and biologically active compound with wide and useful applications. Several important physiological functions of GABA have been characterized, such as neurotransmission and induction of hypotension. GABA is also a strong secretagogue of insulin from the pancreas, effectively inhibits small airway-derived lung adenocarcinoma, and acts as a tranquilizer. Many microorganisms can produce GABA, and lactic acid bacteria (LAB) have been a focus of research in recent years because they possess special physiological activities and are generally regarded as safe. Among them, Lb. brevis produces the highest amounts of GABA. The major factors affecting GABA production have been characterized, including carbon sources and glutamate concentration. The use of food industry waste to produce valuable products such as amino acids appears to be a good way to reduce production costs and prevent the waste of food resources. In a dairy factory, a high volume of sludge is produced from the separator; it contains useful compounds such as growth factors, carbon, nitrogen, and organic matter that can be used by microorganisms such as Lb. brevis as carbon and nitrogen sources. Therefore, it is a good source for GABA production. GABA is primarily formed by the irreversible α-decarboxylation of L-glutamic acid or its salts, catalysed by the glutamate decarboxylase (GAD) enzyme. In the present study, this aim was achieved for the fast growth of Lb. brevis and the production of GABA, using dairy industry sludge as the growth medium. Lactobacillus brevis strains obtained from the Microbial Type Culture Collection (MTCC) were used as model strains. To prepare the dairy sludge as a medium, it was sterilized at 121 °C for 15 minutes. Lb. brevis was inoculated into the sludge medium at pH 6 and incubated for 120 hours at 30 °C.
After fermentation, the broth was centrifuged and the supernatant was analyzed for GABA, qualitatively by the thin-layer chromatography (TLC) method and quantitatively by the high-performance liquid chromatography (HPLC) method. By increasing the percentage of dairy sludge in the culture medium, the amount of GABA increased. Evaluation of bacterial growth in this medium also showed the positive effect of dairy sludge on the growth of Lb. brevis, which resulted in the production of more GABA. GABA-producing LAB offer the opportunity to develop naturally fermented, health-oriented products. Although some GABA-producing LAB have been isolated to find strains suitable for different fermentations, further screening of various GABA-producing strains of LAB, especially high-yielding strains, is necessary. The production of gamma-aminobutyric acid by lactic acid bacteria is safe and eco-friendly. The use of dairy industry waste enhances environmental safety and provides the possibility of producing valuable compounds such as GABA. In general, dairy sludge is a suitable medium for the growth of lactic acid bacteria and the production of this amino acid, and it can reduce the final cost by providing carbon and nitrogen sources.

Keywords: GABA, Lactobacillus, HPLC, dairy sludge

Procedia PDF Downloads 121
462 Modelling of Air-Cooled Adiabatic Membrane-Based Absorber for Absorption Chillers Using Low Temperature Solar Heat

Authors: M. Venegas, M. De Vega, N. García-Hernando

Abstract:

Absorption cooling chillers have received growing attention over the past few decades because they allow the use of low-grade heat to produce a cooling effect. Combining this technology with solar thermal energy in summer can reduce the peak electricity consumption due to air-conditioning. One of the main components, the absorber, is designed for simultaneous heat and mass transfer. Usually, shell-and-tube heat exchangers are used, which are large and heavy. Cooling water from a cooling tower is conventionally used to extract the heat released during the absorption and condensation processes. These are clear drawbacks for the general adoption of absorption technology, limiting its contribution to the reduction of CO2 emissions, particularly for the H2O-LiBr solution, which can work with low-temperature heat sources such as solar panels. The present work studies a promising new technology: the use of membrane contactors in adiabatic microchannel mass exchangers. The configuration proposed here consists of one or several modules (depending on the cooling capacity of the chiller) that contain two vapour channels, separated from the solution by adjacent microporous membranes. The solution is confined in rectangular microchannels. A plastic or synthetic wall separates the solution channels from each other. The solution entering the absorber is previously subcooled using ambient air; in this way, the need for a cooling tower is avoided. A model of the proposed configuration is developed based on mass and energy balances, and correlations were selected to predict the heat and mass transfer coefficients. The concentrations and temperatures along the channels cannot be determined explicitly from the resulting set of equations. For this reason, the equations were implemented in a computer code using the Engineering Equation Solver software, EES™.
With the aim of minimizing the absorber volume to reduce the size of absorption cooling chillers, the ratio between the cooling power of the chiller and the absorber volume (R) is calculated. Its variation is shown along the solution channels, allowing its optimization for selected operating conditions. For the case considered, a solution channel length below 3 cm is recommended. The maximum values of R obtained in this work are higher than those found in optimized horizontal falling-film absorbers using the same solution. Results also show the variation of R and of the chiller efficiency (COP) for different ambient temperatures and for desorption temperatures typically obtained with flat-plate solar collectors. The proposed configuration of an adiabatic membrane-based absorber using ambient air to subcool the solution is a good technology to reduce the size of absorption chillers, allowing the use of low-temperature solar heat and avoiding the need for cooling towers.

Keywords: adiabatic absorption, air-cooled, membrane, solar thermal energy

Procedia PDF Downloads 265
461 The Effectiveness of Congressional Redistricting Commissions: A Comparative Approach Investigating the Ability of Commissions to Reduce Gerrymandering with the Wilcoxon Signed-Rank Test

Authors: Arvind Salem

Abstract:

Voters across the country are transferring the power of redistricting from state legislatures to commissions to secure "fairer" districts by curbing the influence of gerrymandering. Gerrymandering, intentionally drawing distorted districts to achieve political advantage, has become extremely prevalent, generating widespread voter dissatisfaction and leading states to adopt commissions for redistricting. However, the efficacy of these commissions is disputed: some argue that they constitute a panacea for gerrymandering, while others contend that they have relatively little effect. A result showing that commissions are effective would allay these doubts, supplying ammunition for activists across the country to advocate for commissions in their states and reducing the influence of gerrymandering nationwide. A result against commissions, however, may reaffirm doubts and pressure lawmakers to improve commissions or even abandon the commission system entirely. Additionally, these commissions are publicly funded, so voters have a financial interest in, and a responsibility to know, whether they are effective. Currently, nine states place commissions in charge of redistricting: Arizona, California, Colorado, Michigan, Idaho, Montana, Washington, and New Jersey (Hawaii also has a commission but is excluded for reasons discussed below). This study compares the degree of gerrymandering in the 2022 election ("after") to that in the election in which voters decided to adopt commissions ("before"). The before-election provides a valuable benchmark for assessing the efficacy of commissions, since voters in those elections clearly found the districts unfair; comparing the current election to that one is therefore a good way to determine whether commissions have improved the situation.
At the time Hawaii adopted its commission, the state was a single at-large district, so its "before" metrics could not be calculated and it was excluded. This study uses three metrics to quantify the degree of gerrymandering: the efficiency gap, the difference between the percentage of seats and the percentage of votes, and the mean-median difference. Each of these metrics has unique advantages and disadvantages, but together they form a balanced approach to quantifying gerrymandering. The study uses a Wilcoxon signed-rank test at the 0.05 significance level with an expected difference of 0, with the null hypothesis that each metric's value after the election is greater than or equal to its value before, and the alternative hypothesis that the metric's value is greater before the election than after. Accepting the alternative hypothesis would constitute evidence that commissions reduce gerrymandering to a statistically significant degree. However, this study could not conclude that commissions are effective. The p-values obtained for all three metrics (p=0.42 for the efficiency gap, p=0.94 for the seats-votes difference, and p=0.47 for the mean-median difference) were far from the threshold needed to conclude that commissions are effective. These results halt optimism about commissions and should spur serious discussion about their effectiveness and about ways to change them moving forward so that they can accomplish their goal of generating fairer districts.
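The before/after comparison described above can be sketched in code. The following is a minimal, self-contained illustration of a one-sided Wilcoxon signed-rank test; the metric values are made up for illustration and are not the study's data.

```python
from itertools import product

def _avg_ranks(values):
    # Rank the values from smallest to largest, averaging ranks over ties.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def wilcoxon_one_sided(before, after):
    """Exact one-sided Wilcoxon signed-rank test.

    Alternative hypothesis: 'before' values tend to be greater than
    'after' values (here: gerrymandering decreased under commissions).
    Returns (W+, p-value). Suitable only for small samples, since it
    enumerates all 2^n sign assignments of the nonzero differences.
    """
    diffs = [b - a for b, a in zip(before, after) if b != a]
    ranks = _avg_ranks([abs(d) for d in diffs])
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    n = len(diffs)
    # Under H0, each difference is equally likely to be + or -.
    hits = sum(1 for signs in product((0, 1), repeat=n)
               if sum(r for s, r in zip(signs, ranks) if s) >= w_plus)
    return w_plus, hits / 2 ** n

# Hypothetical efficiency-gap values (percentage points) for 8 states.
before = [12, 8, 15, 5, 10, 7, 9, 11]
after = [10, 9, 12, 6, 8, 8, 7, 13]
w, p = wilcoxon_one_sided(before, after)
```

In practice one would likely use `scipy.stats.wilcoxon(before, after, alternative='greater')`, which also handles zero differences and large-sample approximations; the enumeration above is only meant to make the null distribution explicit.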

Keywords: commissions, elections, gerrymandering, redistricting

Procedia PDF Downloads 60
460 Inherent Difficulties in Countering Islamophobia

Authors: Imbesat Daudi

Abstract:

Islamophobia, which is a billion-dollar industry, is widespread, especially in the United States, Europe, India, Israel, and countries whose Muslim minorities are at odds with governmental policies. Hatred of Islam in the West did not evolve spontaneously; it was methodically created. Islamophobia's current format has been designed to spread on its own, find a space in the Western psyche, and resist eradication. The hatred has been sustained by neoconservative ideologues and their allies, supported by the mainstream media. Social scientists have evaluated how ideas spread, why an idea can go viral, and where new ideas find space in our brains, aided by advances in the computational power of software and computers. The spread of ideas, including Islamophobia, follows an S-shaped curve with three phases: an initial exploratory phase with a long lag period, an explosive phase if the ideas go viral, and a final phase in which the ideas find space in the human psyche. In the initial phase, an idea is quickly examined in a center in the prefrontal lobe. If deemed relevant, it is sent for evaluation to another center of the prefrontal lobe, where it is critically examined. Once it takes its final shape, the idea is sent as a final product to a center in the occipital lobe. This center cannot critically evaluate ideas; it can only defend them from critics. Counterarguments, no matter how scientific, are automatically rejected. Therefore, arguments that could be highly effective in the early phases are counterproductive once ideas are stored in the occipital lobe. Anti-Islamophobic intellectuals have done a very good job of countering Islamophobic arguments. However, they have not been as effective as the neoconservative ideologues who have promoted anti-Muslim rhetoric based on half-truths, misinformation, or outright lies.
The failure is partly due to the support pro-war activists receive from the mainstream media, state institutions, mega-corporations engaged in violent conflicts, and think tanks that supply Islamophobic arguments. However, there are also scientific reasons why anti-Islamophobic thinkers have been less effective: the dynamics of spreading ideas change once they are stored in the occipital lobe. The human brain is incapable of further evaluating ideas once it accepts them as its own; therefore, a different strategy is required to be effective. This paper examines 1) why anti-Islamophobic intellectuals have failed to change the minds of non-Muslims and 2) the steps for countering hatred. Simply put, a new strategy is needed that can effectively counteract hatred of Islam and Muslims. Islamophobia is a disease that requires strong measures. Fighting hatred is always a challenge, but if we understand why Islamophobia is taking root in the twenty-first century, we can succeed in challenging Islamophobic arguments. That will need a coordinated effort of intellectuals, writers, and the media.

Keywords: islamophobia, Islam and violence, anti-islamophobia, demonization of Islam

Procedia PDF Downloads 31
459 The Role of Social Media in the Rise of Islamic State in India: An Analytical Overview

Authors: Yasmeen Cheema, Parvinder Singh

Abstract:

The evolution of the Islamic State (acronym IS) has the ultimate goal of restoring the caliphate. The IS threat to global security is a main concern of the international community, but it has also raised a factual concern for India about the steady radicalization of Indian youth by IS ideology. The case of Arif Ejaz Majeed, an Indian who joined IS as a 'jihadist', set off a strident alarm in law enforcement agencies. On 07.03.2017, many people were injured in an Improvised Explosive Device (IED) blast on board the Bhopal-Ujjain Express. One perpetrator of this incident was killed in an encounter with police. The bigger shock is that the conspiracy was pre-planned and that the assailants who carried out the blast were influenced by the ideology perpetrated by the Islamic State. This is the first time the name of IS has cropped up in a terror attack in India. It is a red indicator of the violent presence of IS in India, which is spreading through social media. IS has the capacity to influence the younger Muslim generation in India through its brutal and aggressive propaganda videos, social media apps, and hate speeches. It is a well-known fact that India is on the radar of IS, as well as on its 'Caliphate Map'. IS uses Twitter, Facebook, and other social media platforms constantly, posting enticing videos, graphics, and articles to persuade people in India and globally that its jihad is worthy. According to arrested IS perpetrators in different cases in India, most of the recruited Indian youths fell victim to the daydreams fondly shown by IS: dreams that the Muslim empire as it existed before 1920 can return with all its power, and that the Caliph and his caliphate can be re-established. Indian Muslim youth are attracted to these euphemistic ideologies. The Islamic State has used social media to disseminate its poisonous ideology, for recruitment, for operational activities, and to direct future attacks.
Through social media, IS has inspired its recruits and lone wolves to rely on local networks to identify targets and to access weaponry and explosives. Recently, a pro-IS media group on its Telegram platform showed the Taj Mahal as a target and suggested a Vehicle-Borne Improvised Explosive Device (VBIED) as the mode of attack. The Islamic State definitely has the potential to damage Indian national security and peace if timely steps are not taken. Without doubt, IS has used social media as a critical mechanism for the recruitment, planning, and execution of terror attacks. This paper will therefore examine the specific characteristics of social media that have made it such a successful weapon for the Islamic State. The rise of IS in India should be viewed as a national crisis and handled at the central level with efficient use of modern technology.

Keywords: ideology, India, Islamic State, national security, recruitment, social media, terror attack

Procedia PDF Downloads 209
458 Biomimicked Nano-Structured Coating Elaboration by Soft Chemistry Route for Self-Cleaning and Antibacterial Uses

Authors: Elodie Niemiec, Philippe Champagne, Jean-Francois Blach, Philippe Moreau, Anthony Thuault, Arnaud Tricoteaux

Abstract:

Hygiene of equipment in contact with users is an important issue in the railroad industry. The numerous cleanings required to eliminate bacteria and dirt are costly, and mechanical solicitations of contact parts are observed daily. It would therefore be of interest to develop a self-cleaning and antibacterial coating with sufficient adhesion and good resistance to mechanical and chemical solicitations. To this end, a Ph.D. thesis co-financed by the Hauts-de-France region and the Maubeuge Val-de-Sambre conurbation authority has been under way since October 2017, based on earlier studies carried out by the Laboratory of Ceramic Materials and Processing. A soft chemical route has been implemented to impart a lotus effect to metallic substrates. It involves nanometric zinc oxide synthesis in the liquid phase below 100 °C. The originality here lies in varying the surface texturing by modifying the synthesis time of the species in solution, which helps to adjust wettability. Nanostructured zinc oxide was chosen because of its inherent photocatalytic effect, which can activate the degradation of organic substances. Two heating methods have been compared: conventional and microwave-assisted. The tested substrates are made of stainless steel, in line with transport uses. Substrate preparation was the first step of the protocol: a meticulous cleaning of the samples is applied. The main goal of the elaboration protocol is to fix enough zinc-based seeds so that they grow as desired (nanorod-shaped) during the next step. To improve adhesion, a silica gel has been formulated and optimized to ensure chemical bonding between the substrate and the zinc seeds. The last step consists of depositing a long-chain carbonated organosilane to improve the superhydrophobicity of the coating. The quasi-proportionality between the reaction time and the nanorod length will be demonstrated, and water contact angles (above 150°) and roll-off angles at different steps of the process will be presented.
The antibacterial effect has been proved with Escherichia coli, Staphylococcus aureus, and Bacillus subtilis: the bacterial mortality rate is four times higher than on a non-treated substrate. Photocatalytic experiments were carried out on different dyed solutions in contact with treated samples under UV irradiation, and spectroscopic measurements allowed determination of the degradation times as a function of the zinc quantity available on the surface. The final coating obtained is therefore not a monolayer but rather a stack of amorphous/crystalline/amorphous layers, which has been characterized by spectroscopic ellipsometry. We will show that the thickness of the nanostructured oxide layer depends essentially on the synthesis time set in the hydrothermal growth step. A green, easy-to-process and easy-to-control coating with self-cleaning and antibacterial properties has been synthesized, with a satisfying surface structuration.

Keywords: antibacterial, biomimetism, soft-chemistry, zinc oxide

Procedia PDF Downloads 127
457 Phospholipid Cationic and Zwitterionic Compounds as Potential Non-Toxic Antifouling Agents: A Study of Biofilm Formation Assessed by Micro-titer Assays with Marine Bacteria and Eco-toxicological Effect on Marine Microalgae

Authors: D. Malouch, M. Berchel, C. Dreanno, S. Stachowski-Haberkorn, P-A. Jaffres

Abstract:

Biofouling is a complex natural phenomenon involving biological, physical, and chemical properties related to the environment, the submerged surface, and the living organisms involved. Bio-colonization of artificial structures can cause various economic and environmental impacts. The increase in costs associated with the over-consumption of fuel by biocolonized vessels has been widely studied; measurement drift in submerged sensors, obstructions in heat exchangers, and deterioration of offshore structures are further major difficulties that industries face. Therefore, surfaces that inhibit biocolonization are required in different areas (water treatment, marine paints, etc.), and many efforts have been devoted to producing efficient and eco-compatible antifouling agents. The different steps of surface fouling are widely described in the literature, and studying the biofilm and its stages provides a better understanding of how to elaborate more efficient antifouling strategies. Several approaches are currently applied, such as biocidal antifouling paints (mainly with copper derivatives) and super-hydrophobic coatings. While these two approaches are proving the most effective, they are not entirely satisfactory, especially in a context of changing legislation. Nowadays, the challenge is to prevent biofouling with non-biocidal compounds, offering a cost-effective solution with no toxic effects on marine organisms. Since the micro-fouling phase plays an important role in regulating the subsequent steps of biofilm formation, it is desirable to reduce or delay the biofouling of a given surface by inhibiting micro-fouling at its early stages. In our recent work, we reported that some amphiphilic compounds exhibited bacteriostatic or bactericidal properties at concentrations that did not affect mammalian eukaryotic cells.
These remarkable properties invited us to assess this type of bio-inspired phospholipid for preventing the colonization of surfaces by marine bacteria. Of note, other studies have reported that amphiphilic compounds interact with bacteria, leading to a reduction in their development. An amphiphilic compound is a molecule consisting of a hydrophobic domain and a polar head (ionic or non-ionic). These compounds appear to have interesting antifouling properties: some ionic compounds have shown antimicrobial activity, and zwitterions can reduce the nonspecific adsorption of proteins. Herein, we investigate the potential of amphiphilic compounds as inhibitors of bacterial growth and marine biofilm formation. The aim of this study is to compare the efficacy of four synthetic phospholipids featuring a cationic or a zwitterionic polar head group in preventing microfouling by marine bacteria. The toxicity of these compounds was also studied in order to identify the most promising candidates that inhibit biofilm development while showing low cytotoxicity toward two representative links of coastal marine food webs: phytoplankton and oyster larvae.

Keywords: amphiphilic phospholipids, biofilm, marine fouling, non-toxic assays

Procedia PDF Downloads 121
456 The SHIFT of Consumer Behavior from Fast Fashion to Slow Fashion: A Review and Research Agenda

Authors: Priya Nangia, Sanchita Bansal

Abstract:

As fashion cycles become more rapid, some segments of the fashion industry have adopted increasingly unsustainable production processes to keep up with demand and enhance profit margins. The growing threat to environmental and social wellbeing posed by unethical fast fashion practices, and the need to integrate the targets of the SDGs into this industry, necessitate a shift away from the fashion industry's unsustainable nature, which can only be accomplished in the long run if consumers support sustainable fashion by purchasing it. Fast fashion is defined as low-cost, trendy apparel that takes inspiration from the catwalk or celebrity culture and rapidly transforms it into garments at high-street stores to meet consumer demand. Given the importance of identity formation to many consumers, the desire to be "fashionable" often outweighs the desire to be ethical or sustainable. This paradox exemplifies the tension between the human drive to consume and the will to do so in moderation. Previous research suggests that there is an attitude-behavior gap in consumer purchasing behavior, but to the best of our knowledge, no study has analysed how to encourage customers to shift from fast to slow fashion. Against this backdrop, the aim of this study is twofold: first, to identify and examine the factors that influence consumers' decisions to engage in sustainable fashion; and second, to develop a comprehensive framework for conceptualizing, and for encouraging researchers and practitioners to foster, sustainable consumer behavior. This study used a systematic approach to collect data and analyse the literature, with three key steps: review planning, review execution, and findings reporting. The authors identified the keywords "sustainable consumption" and "sustainable fashion" and retrieved studies from the Web of Science (WoS) (126 records) and the Scopus database (449 records).
To make the study more specific, the authors refined the subject area to management, business, and economics in the second step, retrieving 265 records. In the third step, the authors removed duplicate records and manually reviewed the articles to examine their relevance to the research issue. The final 96 research articles were used to develop this study's systematic scheme. The findings indicate that societal norms, demographics, positive emotions, self-efficacy, and awareness all affect customers' decisions to purchase sustainable apparel. The authors propose a framework, denoted by the acronym SHIFT, in which consumers are more likely to engage in sustainable behaviors when the message or context leverages the following factors: (S) social influence, (H) habit formation, (I) individual self, (F) feelings, emotions, and cognition, and (T) tangibility. Furthermore, the authors identify five broad challenges to encouraging sustainable consumer behavior and use them to develop novel propositions. Finally, the authors discuss how the SHIFT framework can be used in practice to drive sustainable consumer behaviors. This research sought to define the boundaries of existing research while also providing new perspectives for future research, with the goal of supporting the development and discovery of new fields of study, thereby expanding knowledge.
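The three-step screening pipeline described above (retrieve, refine by subject area, deduplicate) can be sketched as follows; the record structure and field names are hypothetical stand-ins for a database export, not the authors' actual data or tooling.

```python
def _normalize(title):
    # Case- and punctuation-insensitive key for duplicate detection.
    return "".join(ch for ch in title.lower() if ch.isalnum())

def screen(records, subject_areas):
    # Step 2: refine to the chosen subject areas.
    refined = [r for r in records if r["subject"] in subject_areas]
    # Step 3: drop duplicate records (the same article indexed in
    # both WoS and Scopus), keeping the first occurrence.
    seen, unique = set(), []
    for r in refined:
        key = _normalize(r["title"])
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

# Toy records standing in for the WoS (126) and Scopus (449) exports.
wos = [{"title": "Sustainable Fashion and the Consumer", "subject": "business"}]
scopus = [
    {"title": "Sustainable fashion and the consumer.", "subject": "business"},
    {"title": "Polymer Chemistry of Dyes", "subject": "chemistry"},
]
kept = screen(wos + scopus, {"management", "business", "economics"})
```

The final step of the authors' process, manually reviewing the surviving records for relevance, is of course not automatable and is omitted here.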

Keywords: consumer behavior, fast fashion, sustainable consumption, sustainable fashion, systematic literature review

Procedia PDF Downloads 76
455 A Therapeutic Approach for Bromhidrosis with Glycopyrrolate 2% Cream: Clinical Study of 20 Patients

Authors: Vasiliki Markantoni, Eftychia Platsidaki, Georgios Chaidemenos, Georgios Kontochristopoulos

Abstract:

Introduction: Bromhidrosis, also known as osmidrosis, is a common, distressing condition with a significant negative effect on patients' quality of life. Its etiology is multifactorial. It usually affects the axillae, genital skin, breasts, and soles, the areas where apocrine glands are mostly distributed. Treatments include topical antibacterial agents, antiperspirants, and neuromuscular blocking agents (toxins). In this study, we aimed to evaluate the efficacy and possible complications of topical glycopyrrolate, an anticholinergic agent, for the treatment of bromhidrosis. Glycopyrrolate, applied topically as a cream, solution, or spray at concentrations between 0.5% and 4%, has been successfully used to treat different forms of focal hyperhidrosis. Materials and Methods: Twenty patients (six males, fourteen females; average age 36) meeting the criteria for bromhidrosis were treated with topical glycopyrrolate for two months. Eleven patients had bromhidrosis located in the axillae, four in the soles, four in both axillae and soles, and one in the genital folds. Glycopyrrolate was applied topically as a 2% cream formulated in Fitalite. During the first month, patients used the cream every night, and thereafter twice daily. The degree of malodor was assessed subjectively by patients and graded as 'none', 'mild', 'moderate', or 'severe', with corresponding scores of 0, 1, 2, and 3, respectively. The modified Dermatology Life Quality Index (DLQI) was used to assess quality of life. Clinical efficacy was graded by the patients on a scale of excellent, good, fair, or poor. At the end of the study, patients were asked to state whether they were totally satisfied, partially satisfied, or unsatisfied, and possible side effects during the treatment were recorded. Results: All patients were satisfied at the end of the treatment, and no patient rated the response as showing no improvement.
The subjectively assessed bromhidrosis score was remarkably improved after the first month of treatment and improved slightly further after the second month. The DLQI score also improved in all patients. Adverse effects were reported in 2 patients. In the first case, mild topical irritation (erythema and desquamation) appeared during the second month of treatment and was treated with low-potency topical corticosteroids. In the second case, mydriasis was reported; it resolved without specific treatment once we emphasized the importance of careful hygiene after cream application so as not to contaminate the periocular skin or ocular surface. Conclusions: Dermatologists often encounter patients with bromhidrosis and should therefore be aware of treatment options. To the best of our knowledge, this is the first study to evaluate topical glycopyrrolate as a therapeutic approach for bromhidrosis. Our findings suggest that topical glycopyrrolate has an excellent safety profile and show encouraging results for the management of this distressing condition.

Keywords: bromhidrosis, glycopyrrolate, topical treatment, osmidrosis

Procedia PDF Downloads 150
454 The Conflict of Grammaticality and Meaningfulness of the Corrupt Words: A Cross-lingual Sociolinguistic Study

Authors: Jayashree Aanand Gajjam

Abstract:

The grammatical tradition in Sanskrit literature emphasizes the importance of the correct use of Sanskrit words or linguistic units (sādhu śabda), which brings meritorious values, and denies the same religious merit to the incorrect use of Sanskrit words (asādhu śabda) or to vernacular or corrupt forms (apa-śabda or apabhraṁśa), even though these may serve communication. The current research, the culmination of doctoral research on sentence definition, studies the difference in comprehension between correct and incorrect word forms in the Sanskrit and Marathi languages of India. Based on a total of 19 experiments (both web-based and classroom-controlled) with approximately 900 Indian readers, it is found that while incorrect forms in Sanskrit are comprehended with lower accuracy than correct word forms, no such difference is seen for the Marathi language. The interpretation is that incorrect word forms in the native, daily-spoken language (such as Marathi) pose a smaller cognitive load than those in a language that is not spoken daily but used only for reading (such as Sanskrit). The theoretical basis for the research problem is as follows: among the three main schools of language science in ancient India, the Vaiyākaraṇas (Grammarians) hold that corrupt word forms have their own expressive power since they convey meaning, whereas the Mimāṁsakas (the Exegesists) and the Naiyāyikas (the Logicians) believe that corrupt forms can convey meaning only indirectly, by recalling their association and similarity with the correct forms. The grammarians argue that the vernaculars born of a speaker's inability to speak proper Sanskrit are degenerate versions or fallen forms of the 'divine' Sanskrit language, while speakers who could use proper Sanskrit, the standard language, were considered Śiṣṭa ('elite').
The different ideas of the different schools strictly adhere to their textual dispositions. For the last few years, sociolinguists have agreed that no variety of language is inherently better than any other; all are equal as long as they serve the needs of the people who use them. Although the standard form of a language may offer speakers some advantages, the non-standard variety is considered the most natural style of speaking. This is visible in the results. If incorrect word forms triggered recall of the correct word forms in the reader, as the theory suggests, this would add one extra step to the process of sentential cognition, leading to greater cognitive load and lower accuracy. This has not been the case for the Marathi language. Although speaking and listening to the vernaculars is common practice while reading them is not, Marathi readers readily and accurately comprehended the incorrect word forms in the sentences, in contrast to the Sanskrit readers. The primary reason is that Sanskrit is spoken and read only in the standard form, and vernacular forms of Sanskrit are not found in conversational data.

Keywords: experimental sociolinguistics, grammaticality and meaningfulness, Marathi, Sanskrit

Procedia PDF Downloads 110
453 Non-Time and Non-Sense: Temporalities of Addiction for Heroin Users in Scotland

Authors: Laura Roe

Abstract:

This study draws on twelve months of ethnographic fieldwork conducted in 2017 with heroin and poly-substance users in Scotland and explores experiences of time and temporality as factors in continuing drug use. The research largely took place over the year in which drug-related deaths in Scotland reached a record high, and were statistically recorded as the highest in Europe. This qualitative research is therefore significant in understanding both evolving patterns of drug use and the experiential lifeworlds of those who use heroin and other substances in high doses. Methodologies included participant observation, structured and semi-structured interviews, and unstructured conversations with twenty-two regular participants. The fieldwork was conducted in two needle exchanges, a community recovery group and in the community. The initial aim of the study was to assess evolving patterns of drug preferences in order to explore a clinical and user-reported rise in the use of novel psychoactive substances (NPS), which are typically considered to be highly potent, synthetic substances, often available at a low cost. It was found, however, that while most research participants had experimented with NPS with varying intensity, those who used every day regularly consumed heroin, methadone, and alcohol with benzodiazepines such as diazepam or anticonvulsants such as gabapentin. The research found that many participants deliberately pursued the non-fatal effects of overdose, aiming to induce states of dissociation, detachment and uneven consciousness, and did so by both mixing substances and experimenting with novel modes of consumption. Temporality was significant in the decision to consume cocktails of substances, as users described wishing to sever themselves from time; entering into states of ‘non-time’ and insensibility through specific modes of intoxication. Time and temporality similarly impacted other aspects of addicted life. 
Periods of attempted abstinence witnessed a slowing of time’s passage that was tied to affective states of boredom and melancholy, in addition to a disruptive return of distressing and difficult memories. Abject past memories frequently dominated and disrupted the present, which could otherwise be highly immersive due to the time- and energy-consuming nature of seeking drugs while in financial difficulty. There was, furthermore, a discordance between individual user temporalities and the strict time-based regimes of recovery services and institutional bodies, and the study aims to highlight the impact of such a disjuncture on the efficacy of treatment programs. Many participants had difficulty adhering to set appointments or temporal frameworks due to their specific temporal situatedness. Overall, in exploring the increasing tendency of heroin users in Scotland towards poly-substance use, this study draws on experiences and perceptions of time, analysing how temporality comes to bear on the ways drugs are sought and consumed, and how recovery is imagined and enacted. The study attempts to outline the experiential, intimate and subjective worlds of heroin and poly-substance users while explicating the structural and historical factors that shape them.

Keywords: addiction, poly-substance use, temporality, timelessness

Procedia PDF Downloads 101
452 Biocultural Biographies and Molecular Memories: A Study of Neuroepigenetics and How Trauma Gets under the Skull

Authors: Elsher Lawson-Boyd

Abstract:

In the wake of the Human Genome Project, the life sciences have undergone some fascinating changes. In particular, conventional beliefs relating to gene expression are being challenged by advances in postgenomic sciences, especially by the field of epigenetics. Epigenetics is the study of changes in gene expression that occur without alterations to the underlying DNA sequence. In other words, gene expression, the process by which the instructions in DNA are converted into products such as proteins, is not controlled by DNA alone. Unlike the gene-centric theories of heredity that characterized much of the 20th century (in which genes were accorded almost god-like power to create life), epigenetics insists that gene expression responds to environmental ‘signals’ or ‘exposures’, a point that radically deviates from gene-centric thinking. Science and Technology Studies (STS) scholars have shown that epigenetic research is having vast implications for the ways in which chronic, non-communicable diseases are conceptualized, treated, and governed. However, to the author’s knowledge, there have not yet been any in-depth sociological engagements with neuroepigenetics that examine how the field is affecting mental health and trauma discourse. In this paper, the author discusses preliminary findings from a doctoral ethnographic study on neuroepigenetics, trauma, and embodiment. Specifically, this study investigates the kinds of causal relations neuroepigenetic researchers are drawing between experiences of trauma and the development of mental illnesses such as complex post-traumatic stress disorder (PTSD), both across a human lifetime and across generations. Using qualitative interviews and nonparticipant observation, the author focuses on two public-facing research centers based in Melbourne: the Florey Institute of Neuroscience and Mental Health (FNMH) and the Murdoch Children’s Research Institute (MCRI). 
Preliminary findings indicate that a great deal of ambiguity characterizes this nascent field, particularly when animal-model experiments are employed and the results are translated into human frameworks. Nevertheless, researchers at the FNMH and MCRI strongly suggest that adverse and traumatic life events have a significant effect on gene expression, especially when experienced during early development. Furthermore, they predict that neuroepigenetic research will have substantial implications for the ways in which mental illnesses such as complex PTSD are diagnosed and treated. These preliminary findings shed light on why medical and health sociologists have good reason to engage with and de-black-box ideas emerging from the postgenomic sciences, as these may indeed have significant effects for vulnerable populations not only in Australia but also in countries of the Global South.

Keywords: genetics, mental illness, neuroepigenetics, trauma

Procedia PDF Downloads 109