Search results for: risk perception
260 Conditionality of Aid as a Counterproductive Factor in Peacebuilding in the Afghan Context
Authors: Karimova Sitora Yuldashevna
Abstract:
The August 2021 resurgence of the Taliban as a ruling force in Afghanistan once again challenged the global community to deal with an unprecedentedly unlike-minded government. To express their disapproval of the new regime, Western governments and intergovernmental institutions have suspended their infrastructural projects and other forms of support. Moreover, the Afghan offshore reserves were frozen, and Afghanistan was disconnected from the international financial system, which impeded even independent aid agencies' work. The already poor provision of aid was then further complicated by political conditionality. The purpose of this paper is to investigate the efficacy of conditional aid policy in Afghan peacebuilding under Taliban rule and to provide recommendations to international donors on a further course of action. Arguing that conditionality of aid is a counterproductive factor in the peacebuilding process, this paper employs scholarly literature on peacebuilding alongside reports from international non-governmental organizations (INGOs) that operate directly in Afghanistan. The existing debate on peacebuilding in Afghanistan revolves around aid as a means of building a democratic foundation for achieving peace on communal and national levels and why the previous attempts to do so were unsuccessful. This paper focuses on how to recalibrate the approach to aid provision and peacebuilding in the new reality. In the early 2000s, amid weak post-Cold War international will for a profound engagement in the conflict, humanitarian and development aid became the new means of achieving peace. Aid agencies provided resources directly to communities, minimizing the risk of local disputes. Through subsidizing education, governance reforms, and infrastructural projects, international aid accelerated school enrollment, introduced peace education, funded provincial council and parliamentary elections, and helped rebuild a conflict-torn country. When the Taliban seized power, the international community called on them to build an inclusive government based on respect for human rights, particularly girls' and women's schooling and work, as a condition to retain the aid flow. As the Taliban clearly failed to meet the demands, development aid was withdrawn. Some key United Nations agencies also refrained from collaborating with the de facto authorities. However, contrary to the intended change in the Taliban's behavior, such a move has only led to further deprivation of those whom the donors strived to protect. This is because concern for civilians has always been a secondary priority for the warring parties. This paper consists of four parts. First, it describes the scope of the humanitarian crisis that began in Afghanistan in 2001. Second, it examines the previous peacebuilding attempts undertaken by the international community and the contribution of international aid to the peacebuilding process. Third, the paper describes the current regime and its relationship with international donors. Finally, the paper concludes with recommendations for donors, who would have to be more realistic and reconsider their priorities. While it is certainly not suggested that the Taliban regime be legitimized internationally, the crisis calls upon donors to be more flexible in collaborating with the de facto authorities for the sake of the civilians.
Keywords: Afghanistan, international aid, donors, peacebuilding
259 Developing and Standardizing Individual Care Plan for Children in Conflict with Law in the State of Kerala
Authors: Kavitha Puthanveedu, Kasi Sekar, Preeti Jacob, Kavita Jangam
Abstract:
In India, the Juvenile Justice (Care and Protection of Children) Act, 2015, the law relating to children alleged and found to be in conflict with the law, proposes to address the rehabilitation of children in conflict with the law by catering to their basic rights through care and protection, development, treatment, and social re-integration. The major concerns identified in addressing the issues of children in conflict with the law in Kerala, the southernmost state of India, were: 1. lack of psychological assessment for children in conflict with the law; 2. poor psychosocial intervention for children in conflict with the law on bail; 3. lack of psychosocial intervention and of proper care and protection for CCL residing at observation and special homes; 4. lack of convergence with systems related to mental health care. Aim: To develop an individual care plan for children in conflict with the law. Methodology: NIMHANS, a premier institute of mental health and neurosciences, collaborated with the Social Justice Department, Govt. of Kerala, to address this issue by developing a participatory methodology to implement psychosocial care in the existing services, integrating the activities through a multidisciplinary and multisectoral approach as per Sec. 18 of the JJ Act 2015. Developing the individual care plan: Key informant interviews and focus group discussions were conducted with multiple stakeholders consisting of legal officers, police, child protection officials, counselors, and home staff. Case studies were conducted among children in conflict with the law. A checklist of 80 psychosocial problems among children in conflict with the law was prepared, with eight major issues identified through the quantitative process, such as family and parental characteristics, family interactions and relationships, stressful life events, social and environmental factors, the child's individual characteristics, education, child labour, and high-risk behavior. Standardised scales were used to identify anxiety, caseness, suicidality, and substance use among the children. This provided background data to understand the psychosocial problems experienced by children in conflict with the law. In the second stage, a detailed plan of action was developed involving multiple stakeholders, including the Special Juvenile Police Unit, DCPO, JJB, and NGOs. The individual care plan was reviewed by a panel of 4 experts working in the area of child welfare, followed by a review by multiple stakeholders in the juvenile justice system, such as magistrates, JJB members, legal cum probation officers, district child protection officers, social workers, and counselors. Necessary changes were made to the individual care plan at each stage, and the plan was pilot tested with 45 children for a period of one month and standardized for administering among children in conflict with the law. Result: The individual care plan developed through this scientific process was standardized and is currently administered among children in conflict with the law in 3 districts of the state of Kerala, and it will be further implemented in the other 14 districts. The program was successful in developing a systematic approach to the psychosocial intervention of children in conflict with the law that can be a forerunner for other states in India.
Keywords: psychosocial care, individual care plan, multidisciplinary, multisectoral
258 Determination of Gross Alpha and Gross Beta Activity in Water Samples by iSolo Alpha/Beta Counting System
Authors: Thiwanka Weerakkody, Lakmali Handagiripathira, Poshitha Dabare, Thisari Guruge
Abstract:
The determination of gross alpha and beta activity in water is important in a wide array of environmental studies, and these parameters are considered in international legislation on the quality of water. This technique is commonly applied as a screening method in radioecology, environmental monitoring, industrial applications, etc. Measuring gross alpha and beta emitters using the iSolo alpha/beta counting system is an adequate nuclear technique for assessing radioactivity levels in natural and waste water samples due to its simplicity and low cost compared with other methods. Twelve water samples (six samples of commercially available bottled drinking water and six samples of industrial waste water) were measured by standard method EPA 900.0 using the gas-less, firmware-based, single-sample, manual iSolo alpha/beta counter (Model: SOLO300G) with a solid state silicon PIPS detector. Am-241 and Sr-90/Y-90 calibration standards were used to calibrate the detector. The minimum detectable activities are 2.32 mBq/L and 406 mBq/L for alpha and beta activity, respectively. Each of the 2 L water samples was evaporated (at low heat) to a small volume, transferred evenly (for homogenization) into a 50 mm stainless steel counting planchet, and heated by an IR lamp until a constant-weight residue was obtained. The samples were then counted for gross alpha and beta activity. Sample density on the planchet area was maintained below 5 mg/cm². Large quantities of solid waste, sludge, and waste water are generated every year by various industries. This water can be reused for different applications. Therefore, implementing water treatment plants and measuring water quality parameters in industrial waste water discharge is very important before releasing it into the environment. This waste may contain different types of pollutants, including radioactive substances. All the measured waste water samples had gross alpha and beta activities lower than the maximum tolerance limits for discharge of industrial waste into inland surface water, that is, 10⁻⁹ µCi/mL and 10⁻⁸ µCi/mL for gross alpha and beta, respectively (National Environmental Act, No. 47 of 1980), according to the Extraordinary Gazette of the Democratic Socialist Republic of Sri Lanka of February 2008. The measured water samples were below the recommended radioactivity levels and do not pose any radiological hazard when released into the environment. Drinking water is an essential requirement of life. All the drinking water samples were below the permissible levels of 0.5 Bq/L for gross alpha activity and 1 Bq/L for gross beta activity, values proposed by the World Health Organization in 2011; therefore, the water is acceptable for human consumption without any further investigation with respect to its radioactivity. As these screening levels are very low, the individual dose criterion (IDC) would usually not be exceeded (0.1 mSv y⁻¹). The IDC is a criterion for evaluating health risks from long-term exposure to radionuclides in drinking water, and the recommended level of 0.1 mSv/y expresses a very low level of health risk. This monitoring work will be continued further for environmental protection purposes.
Keywords: drinking water, gross alpha, gross beta, waste water
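As an illustration of the screening calculation behind such measurements, the sketch below converts net count rates into activity concentrations; the counter efficiencies, background rates, and count values are hypothetical placeholders, not values from the study.

```python
# Minimal illustrative sketch (not from the study): converting net count rates
# from a gross alpha/beta counter into activity concentrations (Bq/L).
# Efficiency, background and count values below are hypothetical placeholders.

def activity_concentration(gross_cpm, background_cpm, efficiency, volume_l):
    """Return activity concentration in Bq/L from count rates in counts/minute."""
    net_cps = max(gross_cpm - background_cpm, 0.0) / 60.0  # counts per second
    return net_cps / (efficiency * volume_l)               # Bq/L

# Example: 2 L sample evaporated onto a planchet, hypothetical counter data
alpha_bq_per_l = activity_concentration(gross_cpm=0.35, background_cpm=0.05,
                                         efficiency=0.25, volume_l=2.0)
beta_bq_per_l = activity_concentration(gross_cpm=4.2, background_cpm=1.1,
                                        efficiency=0.40, volume_l=2.0)
print(f"Gross alpha: {alpha_bq_per_l:.4f} Bq/L, gross beta: {beta_bq_per_l:.4f} Bq/L")
# The results would then be compared against screening levels (e.g. 0.5 Bq/L
# alpha and 1 Bq/L beta for drinking water, WHO 2011).
```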
257 Optimizing Stormwater Sampling Design for Estimation of Pollutant Loads
Authors: Raja Umer Sajjad, Chang Hee Lee
Abstract:
Stormwater runoff is the leading contributor to pollution of receiving waters. In response, an efficient stormwater monitoring program is required to quantify and eventually reduce stormwater pollution. The overall goals of stormwater monitoring programs primarily include the identification of high-risk dischargers and the development of total maximum daily loads (TMDLs). The challenge in developing a better monitoring program is to reduce the variability in flux estimates due to sampling errors; the success of a monitoring program mainly depends on the accuracy of these estimates. Apart from sampling errors, manpower and budgetary constraints also influence the quality of the estimates. This study attempted to develop an optimum stormwater monitoring design considering both cost and the quality of the estimated pollutant flux. Three years of stormwater monitoring data (2012-2014) from a mixed land use site located within the Geumhak watershed, South Korea, were evaluated. The regional climate is humid, and precipitation is usually well distributed throughout the year. The investigation of a large number of water quality parameters is time-consuming and resource intensive. In order to identify a suite of easy-to-measure parameters to act as surrogates, Principal Component Analysis (PCA) was applied. Means, standard deviations, coefficients of variation (CV), and other simple statistics were computed using the statistical analysis software SPSS 22.0. The implications of sampling time for monitoring results, the number of samples required during a storm event, and the impact of the seasonal first flush were also identified. Based on the observations derived from the PCA biplot and the correlation matrix, total suspended solids (TSS) was identified as a potential surrogate for turbidity, total phosphorus, and heavy metals like lead, chromium, and copper, whereas chemical oxygen demand (COD) was identified as a surrogate for organic matter. The CVs among the monitored water quality parameters were found to be high (ranging from 3.8 to 15.5). This suggests that using a grab sampling design to estimate the mass emission rates in the study area can lead to errors due to large variability. The TSS discharge load calculation error was found to be only 2% between two different sample size approaches, i.e., 17 samples per storm event and 6 equally distributed samples per storm event. Both seasonal first flush and event first flush phenomena were observed for most water quality parameters in the study area. Samples taken at the initial stage of a storm event generally overestimate the mass emissions; however, it was found that collecting a grab sample after the initial hour of a storm event more closely approximates the mean concentration of the event. It was concluded that site- and regional-climate-specific interventions can be made to optimize the stormwater monitoring program in order to make it more effective and economical.
Keywords: first flush, pollutant load, stormwater monitoring, surrogate parameters
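The surrogate-screening step described above can be sketched as follows; the data here are synthetic and the parameter list is only illustrative, so the loadings do not reflect the study's results.

```python
# Minimal sketch (synthetic data, not from the study): using PCA to screen
# water quality parameters for potential surrogates, similar in spirit to the
# approach described above.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
params = ["TSS", "Turbidity", "TP", "Pb", "Cr", "Cu", "COD"]
X = rng.lognormal(mean=1.0, sigma=0.5, size=(40, len(params)))  # 40 storm samples

# Standardize, then project onto the first two principal components
Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(Z)

# Loadings show which parameters co-vary; parameters clustering with TSS on
# the biplot would be candidates for using TSS as a surrogate.
for name, load in zip(params, pca.components_.T):
    print(f"{name:10s} PC1={load[0]:+.2f} PC2={load[1]:+.2f}")
print("Explained variance ratio:", pca.explained_variance_ratio_)
```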
256 Identity and Mental Adaptation of Deaf and Hard-of-Hearing Students
Authors: N. F. Mikhailova, M. E. Fattakhova, M. A. Mironova, E. V. Vyacheslavova
Abstract:
For the mental and social adaptation of deaf and hard-of-hearing people, cultural and social aspects, namely the formation of identity (acculturation) and educational conditions, are highly significant. We studied 137 deaf and hard-of-hearing students in different educational situations. We used the following methods: Big Five (Costa & McCrae, 1997), TRF (Becker, 1989), WCQ (Lazarus & Folkman, 1988), self-esteem and coping strategies (Jambor & Elliott, 2005), and the self-stigma scale (Mikhailov, 2008). The type of self-identification of students depended on the degree of deafness, type of education, and method of communication in the family: large hearing loss, education in schools for the deaf, and gesture communication increased the likelihood of a 'deaf' acculturation. Lesser hearing loss, inclusive education in a public school or a school for the hearing-impaired, and mixed communication in the family contributed to the formation of a 'hearing' acculturation. The choice of specific coping depended on the degree of deafness: large hearing loss increased 'withdrawal into the deaf world' coping and decreased 'bicultural skills' coping. People with mild hearing loss tended to cover it up. In the context of the ongoing discussion, we researched personality characteristics in deaf and hard-of-hearing students, coping, and other deafness-associated factors depending on their acculturation type. Students who identified themselves with the 'hearing world' had high self-esteem, a higher level of extraversion, self-awareness, personal resources, willingness to cooperate, better psychological health, emotional stability, a higher capacity for empathy, a life richer in feelings and meaning, and a high sense of self-worth. They also actively used the strategies of problem-solving, acceptance of responsibility, and positive reappraisal. Students who limited themselves to the culture of deaf people had more severe hearing loss and accordingly faced more communication barriers. The lack or seldom use of coping strategies by these students points to a decreased level of stress in their lives. Their self-esteem had not been challenged in the specific social environment of students with the same severity of hearing loss, and thus this environment provided a sense of comfort (as can be assumed from the high scores on psychological health, personality resources, and emotional stability). Students with bicultural acculturation had a higher level of psychological resources: they used positive reappraisal coping more often and had a higher level of psychological health. Lack of belonging to a certain culture (marginality) leads to personality disintegration and social and psychological maladaptation: deaf and hard-of-hearing students with marginal identification had lower self-esteem, worse psychological health and personal resources, and lower levels of extroversion, self-confidence, and life satisfaction. They, in fact, become a 'risk group' (many of them dropped out of universities, divorced, and one even ended up in the ranks of ISIS). All these data argue for the importance of a cultural 'anchor' for people with hearing deprivation. Supported by the RFBR, No. 19-013-00406.
Keywords: acculturation, coping, deafness, marginality
255 Barriers to Tuberculosis Detection in Portuguese Prisons
Authors: M. F. Abreu, A. I. Aguiar, R. Gaio, R. Duarte
Abstract:
Background: Prison establishments constitute high-risk environments for the transmission and spread of tuberculosis (TB), given their epidemiological context and the difficulty of implementing preventive and control measures. Guidelines for the control and prevention of tuberculosis in prisons have been described internationally as incomplete and heterogeneous, due to several identified obstacles, for example, the scarcity of human resources and of funding for prison health services. In Portugal, a protocol was created in 2014 with the aim of defining and standardizing procedures for the detection and prevention of tuberculosis within prisons. Objective: The main objective of this study was to identify and describe barriers to tuberculosis detection in prisons of the Porto and Lisbon districts in Portugal. Methods: A cross-sectional study was conducted from 2 January 2018 to 30 June 2018. Semi-structured questionnaires were applied to health care professionals working in the prisons of the districts of Porto (n=6) and Lisbon (n=8). As an inclusion criterion, we considered having work experience in the area of tuberculosis (either in diagnosis, treatment, or follow-up). The questionnaires were self-administered, in paper format. Descriptive analyses of the questionnaire variables were made using frequencies and medians. Afterwards, a hierarchical agglomerative cluster analysis was performed. After obtaining the clusters, the chi-square test was applied to study the association between the variables collected and the clusters. The level of significance considered was 0.05. Results: From the total of 186 health professionals, 139 met the inclusion criteria, and 82 health professionals were interviewed (62.2% participation). Most were female nurses, with a median age of 34 years, on fixed-term employment contracts. From the cluster analysis, two groups with different characteristics and behaviors regarding the procedures of this protocol were identified. Statistically significant results were found: elements of cluster 1 (78% of the total participants) had worked in prisons for a longer time (p=0.003), with 45.3% working > 4 years, while 50% of the elements of cluster 2 had worked there for less than a year; cluster 1 also more frequently answered that they know and apply the procedures of the protocol (p=0.000). Both clusters frequently reported the need for theoretical-practical training on TB (p=0.000), especially in the areas of diagnosis, treatment, and prevention, and that there is a scarcity of funding for prison health services (p=0.000). Regarding procedures for TB screening (periodic and contact screening) and procedures for transferring a prisoner with this disease, cluster 1 also more frequently reported performing them (p=0.000). They also reported that the material/equipment for TB screening is accessible and available (p=0.000). From these clusters, we identified as barriers the scarcity of human resources, the need for theoretical-practical training on tuberculosis, inexperience in working in prison health services, and limited knowledge of the protocol procedures. Conclusions: The barriers found in this study are the same as those described internationally. The protocol is mostly being applied in Portuguese prisons. The study also showed the need to invest in human and material resources. This investigation bridged gaps in knowledge that could help prison health services optimize the care provided for early detection and for prisoners' adherence to tuberculosis treatment.
Keywords: barriers, health care professionals, prisons, protocol, tuberculosis
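The clustering-plus-chi-square workflow described in the methods can be sketched as follows; the questionnaire data here are synthetic and the item names hypothetical, so the output does not reproduce the study's figures.

```python
# Illustrative sketch (synthetic data, not the study's dataset): hierarchical
# agglomerative clustering of questionnaire responses followed by a chi-square
# test of association between cluster membership and an individual variable.
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
# 82 respondents, binary answers to 10 protocol-related items (hypothetical)
answers = pd.DataFrame(rng.integers(0, 2, size=(82, 10)),
                       columns=[f"item_{i}" for i in range(10)])

# Agglomerative clustering (Ward linkage) into two clusters
Z = linkage(answers.values, method="ward")
answers["cluster"] = fcluster(Z, t=2, criterion="maxclust")

# Chi-square test of association between cluster and one questionnaire item
table = pd.crosstab(answers["cluster"], answers["item_0"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f} (significance threshold 0.05)")
```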
254 Epidemiology of Healthcare-Associated Infections among Hematology/Oncology Patients: Results of a Prospective Incidence Survey in a Tunisian University Hospital
Authors: Ezzi Olfa, Bouafia Nabiha, Ammar Asma, Ben Cheikh Asma, Mahjoub Mohamed, Bannour Wadiaa, Achour Bechir, Khelif Abderrahim, Njah Mansour
Abstract:
Background: In hematology/oncology, improvements in health care have allowed increasingly aggressive management in diagnostic and therapeutic procedures. Nevertheless, these intensified procedures have been associated with a higher risk of healthcare-associated infections (HAIs). We undertook this study to estimate the burden of HAIs in cancer patients in an onco-hematology unit of a Tunisian university hospital. Materials/Methods: A prospective, observational study, based on active surveillance over a period of six months from March through September 2016, was undertaken in the department of onco-hematology of a university hospital in Tunisia. Patients who stayed in the unit for ≥ 48 h were followed until hospital discharge. The Centers for Disease Control and Prevention (CDC) criteria for site-specific infections were used as standard definitions for HAIs. Results: One hundred fifty patients were included in the study. The gender distribution was 33.3% girls and 66.6% boys. They had a mean age of 23.12 years (SD = 18.36 years). The main diagnosis was acute lymphoblastic leukemia (ALL): 48.7% (n=73). The mean length of stay was 21 days +/- 18 days. Almost 8% of patients had an implantable port (n=12), 34.9% (n=52) had a lumbar puncture, and 42.7% (n=64) had a medullary puncture. Chemotherapy was instituted in 88% of patients (n=132). Eighty (53.3%) patients had neutropenia at admission. The incidence rate of HAIs was 32.66% per patient; the incidence density was 15.73 per 1000 patient-days in the unit. The mortality rate was 9.3% (n=14), and 50% of the deaths were caused by HAIs. The most frequent episodes of infection were: infection of skin and superficial mucosa (5.3%), pulmonary aspergillosis (4.6%), healthcare-associated pneumonia (HAP) (4%), central venous catheter-associated infection (4%), digestive infection (5%), and primary bloodstream infection (2.6%). Finally, the incidence rate of fever of unknown origin (FUO) was 14%. In the case of skin and superficial infections (n=8), 4 episodes were documented, and the organisms implicated were Escherichia coli, Geotrichum capitatum, and Proteus mirabilis. For pulmonary aspergillosis, 6 cases were diagnosed clinically and radiologically, and one was proved by a positive Aspergillus antigen in bronchial aspiration. Only one patient died due to this infection. In HAP (6 cases), four episodes were diagnosed clinically and radiologically. No bacterial etiology was established in these cases. Two patients died due to HAP. For primary bloodstream infections (4 cases), the implicated organisms were Enterobacter cloacae, Geotrichum capitatum, Klebsiella pneumoniae, and Streptococcus pneumoniae. Conclusion: This type of prospective study is an indispensable tool for internal quality control. It is necessary to evaluate preventive measures and to design control guides and strategies aimed at reducing the HAI rate and the morbidity and mortality associated with infection in a hematology/oncology unit.
Keywords: cohort prospective studies, healthcare associated infections, hematology oncology department, incidence
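For clarity, the two incidence measures quoted above can be reproduced with simple arithmetic; the case count and patient-day total below are back-calculated assumptions consistent with the reported figures, not numbers stated in the abstract.

```python
# Quick arithmetic sketch showing how the reported incidence measures are
# typically derived. The HAI case count and patient-day total are hypothetical
# values back-calculated to match the reported rates.
n_patients = 150
n_patients_with_hai = 49        # assumption consistent with ~32.66% per patient
total_patient_days = 3115       # assumption consistent with ~15.73/1000 patient-days
n_hai_episodes = 49             # assuming one episode per infected patient

cumulative_incidence = 100 * n_patients_with_hai / n_patients      # % per patient
incidence_density = 1000 * n_hai_episodes / total_patient_days     # per 1000 patient-days
print(f"Cumulative incidence: {cumulative_incidence:.2f}% per patient")
print(f"Incidence density: {incidence_density:.2f} per 1000 patient-days")
```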
253 Developing a Deep Understanding of the Immune Response in Hepatitis B Virus Infected Patients Using a Knowledge Driven Approach
Authors: Hanan Begali, Shahi Dost, Annett Ziegler, Markus Cornberg, Maria-Esther Vidal, Anke R. M. Kraft
Abstract:
Chronic hepatitis B virus (HBV) infection can be treated, for example, with nucleot(s)ide analogs (NA), which inhibit HBV replication. However, these have hardly any influence on the functional cure of HBV, which is defined by hepatitis B surface antigen (HBsAg) loss. NA need to be taken life-long and are not available to all patients worldwide. Additionally, NA-treated patients are still at risk of developing cirrhosis, liver failure, or hepatocellular carcinoma (HCC). Although each patient has the same components of the immune system, immune responses vary between patients. Therefore, a deeper understanding of the immune response against HBV in different patients is necessary to understand the parameters leading to HBV cure and to use this knowledge to optimize HBV therapies. This requires seamless integration of an enormous amount of diverse and fine-grained data on viral markers, e.g., hepatitis B core-related antigen (HBcrAg) and hepatitis B surface antigen (HBsAg). The data integration system relies on the assumption that profiling human immune systems requires the analysis of various variables (e.g., demographic data, treatments, pre-existing conditions, immune cell response, or HLA typing) rather than only one. However, the values of these variables are collected independently. They are presented in a myriad of formats, e.g., Excel files, textual descriptions, lab book notes, and images of flow cytometry dot plots. Additionally, patients can be identified differently across these analyses. This heterogeneity complicates the integration of variables, as data management techniques are needed to create a unified view in which individual formats and identifiers are transparent when profiling the human immune systems. The proposed study (HBsRE) aims at integrating heterogeneous data sets of 87 chronically HBV-infected patients, e.g., clinical data, immune cell response, and HLA typing, with knowledge encoded in biomedical ontologies and open-source databases into a knowledge-driven framework. This technique enables us to harmonize and standardize heterogeneous datasets in the defined modeling of the data integration system, which will be evaluated in the knowledge graph (KG). KGs are data structures that represent knowledge and data as factual statements using a graph data model. Finally, the analytic data model will be applied on top of the KG in order to develop a deeper understanding of the immune profiles of various patients and to evaluate the factors playing a role in a holistic profile of patients with HBsAg loss. Additionally, our objective is to utilize this unified approach to stratify patients for new effective treatments. This study is developed in the context of the project "Transforming big data into knowledge: for deep immune profiling in vaccination, infectious diseases, and transplantation (ImProVIT)", which involves a multidisciplinary team composed of computer scientists, infection biologists, and immunologists.
Keywords: chronic hepatitis B infection, immune response, knowledge graphs, ontology
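To make the "factual statements in a graph data model" idea concrete, the sketch below builds a tiny RDF graph for one patient with rdflib; the namespace, predicates, and values are hypothetical and do not reflect the project's actual schema.

```python
# Minimal sketch (hypothetical vocabulary and values, not the project's actual
# schema): representing heterogeneous patient data as factual statements in a
# knowledge graph using RDF triples.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/hbv/")   # placeholder namespace
g = Graph()

patient = EX["patient_001"]
g.add((patient, RDF.type, EX.Patient))
g.add((patient, EX.hasTreatment, EX.NucleosideAnalog))
g.add((patient, EX.hbsagLevel, Literal(250.0)))       # IU/mL, hypothetical value
g.add((patient, EX.hlaType, Literal("HLA-A*02:01")))  # hypothetical typing

# A query over the unified graph can then combine variables that were
# originally collected in separate files and formats.
for s, p, o in g.triples((patient, None, None)):
    print(s.split("/")[-1], p.split("/")[-1].split("#")[-1], o)
```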
252 Deterioration Prediction of Pavement Load Bearing Capacity from FWD Data
Authors: Kotaro Sasai, Daijiro Mizutani, Kiyoyuki Kaito
Abstract:
Expressways in Japan have been built in an accelerating manner since the 1960s with the aid of rapid economic growth. About 40 percent of the length of expressways in Japan is now 30 years old or older and has become superannuated. Time-related deterioration has therefore reached a degree at which administrators, from the standpoint of operation and maintenance, are forced to take prompt, large-scale measures aimed at repairing inner damage deep in pavements. Such measures have already been implemented for bridge management in Japan and are also expected to be embodied in pavement management; thus, planning methods for these measures are increasingly in demand. Deterioration of the layers near the road surface, such as the surface course and binder course, occurs in the early stages of the overall pavement deterioration process, around 10 to 30 years after construction. These layers have been repaired primarily because inner damage usually becomes significant after outer damage, and because surveys for measuring inner damage, such as Falling Weight Deflectometer (FWD) surveys and open-cut surveys, are costly and time-consuming, which has made it difficult for administrators to focus on inner damage as much as they are supposed to. As expressways today exhibit serious time-related deterioration deriving from the long time span since they entered service, the idea of repairing layers deep in pavements, such as the base course and subgrade, must obviously be taken into consideration when planning maintenance on a large scale. This sort of maintenance requires precisely predicting degrees of deterioration as well as grasping the present condition of pavements. Methods for predicting deterioration are either mechanical or statistical. While few mechanical models have been presented, as far as the authors know, previous studies have presented statistical methods for predicting deterioration in pavements. One describes the deterioration process by estimating a Markov deterioration hazard model, while another illustrates it by estimating a proportional deterioration hazard model. Both studies analyze deflection data obtained from FWD surveys and present statistical methods for predicting the deterioration process of the layers near the road surface; however, the base course and subgrade layers remain unanalyzed. In this study, data collected from FWD surveys are analyzed to predict the deterioration process of layers deep in pavements, in addition to the surface layers, by estimating a deterioration hazard model using continuous indexes. This model can prevent the loss of information that occurs when setting rating categories in a Markov deterioration hazard model for evaluating degrees of deterioration in roadbeds and subgrades. As a result of portraying continuous indexes, the model can predict deterioration in each layer of the pavement and evaluate it quantitatively. Additionally, as the model can also depict the probability distribution of the indexes at an arbitrary point in time and establish a risk control level arbitrarily, it is expected that this study will provide knowledge, such as life cycle cost, that is informative during the decision-making process on where and when to perform maintenance.
Keywords: deterioration hazard model, falling weight deflectometer, inner damage, load bearing capacity, pavement
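As a simplified illustration of hazard-model estimation from censored survey data (the study itself uses a richer model with continuous indexes), the sketch below fits a constant-hazard exponential model to synthetic deterioration times; all values are hypothetical.

```python
# Illustrative sketch (synthetic data, not the study's FWD dataset): estimating
# a simple exponential deterioration hazard from observed "time until a
# deflection index exceeds a threshold", with right-censored observations.
import numpy as np

rng = np.random.default_rng(2)
true_rate = 1 / 25.0                       # hypothetical: mean life of 25 years
t = rng.exponential(1 / true_rate, 200)    # latent deterioration times
censor = np.full(200, 30.0)                # observation window of 30 years
observed = np.minimum(t, censor)           # what is actually observed
event = (t <= censor).astype(int)          # 1 = deterioration observed

# Maximum-likelihood estimate of a constant hazard rate lambda
lam_hat = event.sum() / observed.sum()
survival_20y = np.exp(-lam_hat * 20.0)
print(f"Estimated hazard rate: {lam_hat:.4f} /year")
print(f"Probability a section survives 20 years below the threshold: {survival_20y:.2f}")
```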
251 Quasi-Federal Structure of India: Fault-Lines Exposed in COVID-19 Pandemic
Authors: Shatakshi Garg
Abstract:
As the world continues to grapple with the COVID-19 pandemic, India, one of the most populous democratic federal developing nations, continues to report the highest numbers of active cases and deaths and struggles to keep its health infrastructure from succumbing to the exponentially growing requirements for hospital beds, ventilators, and oxygen needed to save the thousands of lives at risk daily. In this context, the paper outlines the handling of the COVID-19 pandemic since it first hit India in January 2020: the policy decisions taken by the Union and the State governments, viewed from the larger perspective of the country's federal structure. The Constitution of India, adopted in 1950, enshrined the federal relations between the Union and the State governments by way of the constitutional division of revenue-raising and expenditure responsibilities. By way of the 72nd and 73rd Amendments to the Constitution, powers and functions were devolved further to the third tier, namely the local governments, with the intention of further strengthening the federal structure of the country. However, with time, several constitutional amendments have shifted the scales in favour of the union government. The paper briefly traces some of these major amendments as well as some policy decisions which made federal relations asymmetrical. As a result, data on key fiscal parameters help establish how the union government gained the upper hand at the expense of weak state governments, reducing the local governments to mere constitutional bodies without adequate funds and fiscal autonomy to carry out their assigned functions. This quasi-federal structure of India, with the union government amassing the majority of power in terms of 'funds, functions and functionaries', exposed the perils of weakening sub-national governments once the COVID-19 pandemic struck. With a complex quasi-federal structure and a heterogeneous population of over 1.3 billion, the announcement of a sudden nationwide lockdown by the union government was followed by the plight of migrants struggling to reach home safely in the absence of adequate travel arrangements and safety nets provided by the union government. Given the limited autonomy enjoyed by the states, they were mostly dictated to by the union government on most aspects of handling the pandemic, including protocols for lockdown, re-opening after lockdown, and the vaccination drive. The paper suggests that certain policy decisions, like demonetization and the introduction of GST, taken by the incumbent government since 2014, when it first came to power, have further weakened the state and local governments, which has amounted to catastrophic losses, both economic and human. The roles of the executive, legislature, and judiciary are explored to establish how these three arms of government have worked simultaneously to further weaken and expose the fault-lines of the federal structure of India, leaving the nation incapacitated to handle this pandemic. The paper then suggests the urgency of re-examining the federal structure of the country and undertaking measures that strengthen sub-national governments and restore the federal spirit enshrined in the constitution, in order to avoid mammoth human and economic losses from a pandemic of this sort.
Keywords: COVID-19 pandemic, India, federal structure, economic losses
250 Knowledge and Practices on Waste Disposal Management Among Medical Technology Students at National University – Manila
Authors: John Peter Dacanay, Edison Ramos, Cristopher James Dicang
Abstract:
Waste management is a global concern due to increasing waste production from changing consumption patterns and population growth. Proper waste disposal management is a critical aspect of public health and environmental protection. In the healthcare industry, medical waste is generated in large quantities, and if not disposed of properly, it poses a significant threat to human health and the environment. Efficient waste management conserves natural resources and prevents harm to human health, and implementing an effective waste management system can save human lives. The study aimed to assess the level of awareness of and practices on waste disposal management, highlighting the understanding of proper disposal, potential hazards, and environmental implications among Medical Technology students. This would help provide recommendations for improving waste management practices in healthcare settings as well as in educational institutions. From the collected data, the typical respondent was a 21-year-old female. The frequency and percentage results show that medical technology students' knowledge of laboratory waste management was high, indicating that all respondents demonstrated a solid understanding of proper disposal methods, regulations, risks, and handling procedures related to laboratory waste. These findings emphasize the significance of education and awareness programs in equipping individuals involved in laboratory practices with the necessary knowledge to handle and dispose of hazardous and infectious waste properly. Most respondents demonstrate positive practices in laboratory waste management, including proper segregation and disposal in designated containers. However, there are concerns about the occasional mixing of waste types, emphasizing the need to reiterate proper waste segregation. Students show a strong commitment to using personal protective equipment and promptly cleaning up spills. Some students admit to improper disposal due to rushing, highlighting the importance of time management and safety prioritization. Overall, students follow protocols for hazardous waste disposal, indicating a responsible approach. The school's waste management system is perceived as adequate, but continuous assessment and improvement are necessary. Encouraging the reporting of issues and concerns is crucial for ongoing improvement and risk mitigation. The analysis reveals a moderate positive relationship between the respondents' knowledge and practices regarding laboratory waste management. The correlation, reported as statistically significant with a p-value of 0.26 (significance level 0.05), suggests that individuals with higher levels of knowledge tend to exhibit better practices. These findings align with previous research emphasizing the pivotal role of knowledge in influencing individuals' behaviors and practices concerning laboratory waste management. When individuals possess a comprehensive understanding of proper procedures, regulations, and potential risks associated with laboratory waste, they are more inclined to adopt appropriate practices. Therefore, fostering knowledge through education and training is essential in promoting responsible and effective waste management in laboratory settings.
Keywords: waste disposal management, knowledge, attitude, practices
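A knowledge-practice association of this kind is typically tested with a rank correlation; the sketch below uses synthetic scores, so the coefficient and p-value are illustrative only.

```python
# Minimal sketch (synthetic scores, not the survey data): testing for a
# monotonic relationship between knowledge scores and practice scores.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
knowledge = rng.integers(10, 21, size=120)            # hypothetical knowledge scores
practice = knowledge + rng.normal(0, 3, size=120)     # loosely related practice scores

rho, p_value = spearmanr(knowledge, practice)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Higher knowledge is associated with better practices (at the 0.05 level).")
```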
249 Assessment of Environmental Mercury Contamination from an Old Mercury Processing Plant 'Thor Chemicals' in Cato Ridge, KwaZulu-Natal, South Africa
Authors: Yohana Fessehazion
Abstract:
Mercury is a prominent example of a heavy metal contaminant in the environment, and it has been extensively investigated for its potential health risk to humans and other organisms. In South Africa, massive mercury contamination occurred in the 1980s, when an England-based mercury reclamation processing plant relocated to Cato Ridge, KwaZulu-Natal Province, and discharged mercury waste into the Mngceweni River. This mercury waste discharge resulted in mercury concentrations that exceeded acceptable levels in the Mngceweni River, the Umgeni River, and the hair of nearby villagers. This environmental issue raised the alarm, and over the years, several environmental assessments reported the dire environmental crisis resulting from Thor Chemicals (now known as Metallica Chemicals) and urged the immediate removal of the approximately 3,000 tons of mercury waste stored in the factory storage facility for over two decades. Recently, the theft of some containers of the toxic substance from the Thor Chemicals warehouse and the subsequent fire that ravaged the facility have further put the factory in the spotlight, escalating the urgency of removing the deadly mercury waste left behind. This project aims to investigate the mercury contamination leaking from the old Thor Chemicals mercury processing plant. The focus will be on sediments, water, terrestrial plants, and aquatic weeds, such as the prominent water hyacinth, in the nearby water systems of the Mngceweni River, Umgeni River, and Inanda Dam, used as bio-indicators and phytoremediators of mercury pollution. Samples will be collected in spring, around October, when conditions are favourable for microbial activity to methylate mercury incorporated in sediments and when some aquatic weeds, particularly water hyacinth, are in their blooming season. Samples of soil, sediment, water, terrestrial plants, and aquatic weeds will be collected at each sampling site, from the point source (Thor Chemicals), the Mngceweni River, the Umgeni River, and the Inanda Dam. One-way analysis of variance (ANOVA) tests will be conducted to determine any significant differences in Hg concentration among the sampling sites, followed by the least significant difference (LSD) post hoc test to determine whether mercury contamination varies with distance from the pollution source. Flow injection atomic spectrometry (FIAS) analysis will also be used to compare mercury sequestration between different plant tissues (roots and stems). Principal component analysis is also envisaged to determine the relationship between the source of mercury pollution and each of the sampling points (the Umgeni and Mngceweni Rivers and the Inanda Dam). All Hg values will be expressed in µg/L or µg/g in order to compare the results with previous studies and regulatory standards. Sediments are expected to have relatively higher levels of Hg than soils, and aquatic macrophytes such as water hyacinth are expected to accumulate higher concentrations of mercury than terrestrial plants and crops.
Keywords: mercury, phytoremediation, Thor chemicals, water hyacinth
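The planned ANOVA-plus-post-hoc comparison can be sketched as below; the concentrations are synthetic, and the unadjusted pairwise t-tests only approximate an LSD test (which would use the pooled ANOVA error term).

```python
# Illustrative sketch (synthetic concentrations, not the study's measurements):
# one-way ANOVA of Hg concentrations across sampling sites, followed by
# pairwise comparisons in the spirit of an LSD post hoc test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
sites = {
    "source":    rng.normal(5.0, 1.0, 10),   # hypothetical µg/g in sediment
    "mngceweni": rng.normal(3.5, 1.0, 10),
    "umgeni":    rng.normal(2.0, 1.0, 10),
    "inanda":    rng.normal(1.0, 1.0, 10),
}

f_stat, p_value = stats.f_oneway(*sites.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Unadjusted pairwise t-tests (LSD-style) against the source site
for name in ("mngceweni", "umgeni", "inanda"):
    t, p = stats.ttest_ind(sites["source"], sites[name])
    print(f"source vs {name}: t = {t:.2f}, p = {p:.4f}")
```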
248 A Randomized Active Controlled Clinical Trial to Assess Clinical Efficacy and Safety of Tapentadol Nasal Spray in Moderate to Severe Post-Surgical Pain
Authors: Kamal Tolani, Sandeep Kumar, Rohit Luthra, Ankit Dadhania, Krishnaprasad K., Ram Gupta, Deepa Joshi
Abstract:
Background: Post-operative analgesia remains a clinical challenge, with central and peripheral sensitization playing a pivotal role in treatment-related complications and impaired quality of life. Centrally acting opioids offer a poor risk-benefit profile, with increased intensity of gastrointestinal or central side effects and slow onset of clinical analgesia. The objective of this study was to assess the clinical feasibility of induction and maintenance therapy with Tapentadol Nasal Spray (NS) in moderate to severe acute post-operative pain. Methods: This was a Phase III, randomized, active-controlled, non-inferiority clinical trial involving 294 cases who had undergone surgical procedures under general or regional anesthesia. Post-surgery, patients were randomized to receive either Tapentadol NS 45 mg or Tramadol 100 mg IV as a bolus with a subsequent 50 mg or 100 mg dose over 2-3 minutes. The NS was administered every 4-6 hours. At the end of 24 hrs, patients in the tramadol group who had a pain intensity score of ≥4 were switched to oral tramadol immediate-release 100 mg capsules until the pain intensity score reduced to <4. All patients who had achieved a pain intensity of ≤4 were shifted to a lower dose of either Tapentadol NS 22.5 mg or oral tramadol immediate-release 50 mg capsules. The statistical analysis plan was envisaged as a non-inferiority comparison with Tramadol for pain intensity difference at 60 minutes (PID60min), sum of pain intensity differences at 60 minutes (SPID60min), and Physician Global Assessment at 24 hrs (PGA24hrs). Results: The per-protocol analyses involved 255 hospitalized cases undergoing surgical procedures. The median age of patients was 38.0 years. For the primary efficacy variables, Tapentadol NS was non-inferior to injectable/oral Tramadol in the relief of moderate to severe post-operative pain. On the basis of SPID60min, no clinically significant difference was observed between Tapentadol NS and Tramadol IV (1.73 ± 2.24 vs. 1.64 ± 1.92, -0.09 [95% CI, -0.43, 0.60]). In the co-primary endpoint PGA24hrs, Tapentadol NS was non-inferior to Tramadol IV (2.12 ± 0.707 vs. 2.02 ± 0.704, -0.11 [95% CI, -0.07, 0.28]). However, on further assessment at 48 hrs, 72 hrs, and 120 hrs, clinically superior pain relief was observed with the Tapentadol NS formulation, which was statistically significant (p < 0.05) at each of the time intervals. Secondary efficacy measures, including the onset of clinical analgesia and TOTPAR, showed non-inferiority to Tramadol. The safety profile and the need for rescue medication were also similar in both groups during the treatment period. The most common concomitant medications were antibacterials (98.3%). Conclusion: Tapentadol NS is a clinically feasible option for improved compliance as induction and maintenance therapy, offering a sustained and persistent patient response that is clinically meaningful in post-surgical settings.
Keywords: tapentadol nasal spray, acute pain, tramadol, post-operative pain
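The non-inferiority logic used for SPID60min can be illustrated as follows; the group sizes and the non-inferiority margin in this sketch are hypothetical, since the abstract does not state them, so the output is illustrative only.

```python
# Minimal sketch (hypothetical summary statistics and margin): checking a
# non-inferiority claim by comparing the confidence interval of the treatment
# difference against a pre-specified margin.
import math
from scipy import stats

# Hypothetical group summaries (mean, SD, n) loosely following the abstract
mean_t, sd_t, n_t = 1.73, 2.24, 128   # Tapentadol NS
mean_c, sd_c, n_c = 1.64, 1.92, 127   # Tramadol IV
margin = -0.8                         # hypothetical non-inferiority margin

diff = mean_t - mean_c
se = math.sqrt(sd_t**2 / n_t + sd_c**2 / n_c)
df = n_t + n_c - 2                    # simple approximation of degrees of freedom
t_crit = stats.t.ppf(0.975, df)
ci_low, ci_high = diff - t_crit * se, diff + t_crit * se

print(f"Difference = {diff:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
print("Non-inferior" if ci_low > margin else "Non-inferiority not shown")
```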
247 Systemic Family Therapy in the Queensland Foster Care System: The Implementation of Integrative Practice as a Purposeful Intervention Implemented with Complex 'Family' Systems
Authors: Rachel Jones
Abstract:
Systemic family therapy in the Queensland foster care system is the implementation of integrative practice as a purposeful intervention with complex 'family' systems (expanding the traditional concept of family to include all relevant stakeholders for a child) and is shown to improve the overall wellbeing of children with developmental delays and trauma in Queensland out-of-home care contexts. The importance of purposeful integrative practice in the field of systemic family therapy has been highlighted in achieving change in complex family systems. Essentially, it is the purposeful use of multiple interventions designed to meet the myriad of competing needs apparent for a child with developmental delays resulting from early traumatic experiences, both in utero and in their early years, and for their family. In the out-of-home care context, integrative practice is particularly useful for promoting positive change for the child and for an extended concept of who constitutes their family. Traditionally, a child's family may have included biological and foster care family members, but when this concept is extended to include all their relevant stakeholders (including biological family, foster carers, residential care workers, child safety, school representatives, health and allied health staff, police, and youth justice staff), the use of integrative family therapy can produce positive change for the child in their overall wellbeing, development, risk profile, social and emotional functioning, mental health symptoms, and relationships across domains. By tailoring therapeutic interventions that draw on systemic family therapies from the first- and second-order schools of family therapy, neurobiology, solution-focused, trauma-informed, play and art therapy, narrative interventions, and disability/behavioural interventions, clinicians can promote change by mixing therapeutic modalities with the individual and their stakeholders. This presentation will unpack the implementation of systemic family therapy using this integrative approach to formulation and treatment for a child in out-of-home care in Queensland experiencing developmental delays resulting from trauma. It considers the need for intervention for the individual and in the context of the environment and relationships. By reviewing a case example, this study aims to highlight the simultaneous and successful use of pharmacological interventions, psychoeducational programs for carers and school staff, parenting programs, cognitive-behavioural and trauma-informed interventions, traditional disability approaches, play therapy, mapping genograms and meaning-making, and family and dyadic sessions for the system associated with the foster child. These elements of integrative systemic family practice have seen success in reducing symptoms and improving the overall well-being of foster children and their stakeholders. Accordingly, a model of best practice using this integrative systemic approach is presented for this population group, and preliminary findings for this approach, drawn from four years of local data, have been reviewed.
Keywords: systemic family therapy, treating families of children with delays, trauma and attachment in families systems, improving practice and functioning of children and families
246 Harnessing Renewable Energy as a Strategy to Combating Climate Change in Sub Saharan Africa
Authors: Gideon Nyuimbe Gasu
Abstract:
Sub-Saharan Africa is at a critical point, experiencing rapid population growth, particularly in urban areas, and a young, growing workforce. At the same time, the growing risk of catastrophic global climate change threatens to weaken food production systems, increase the intensity and frequency of droughts, floods, and fires, and undermine gains in development and poverty reduction. Although the region has the lowest per capita greenhouse gas emission level in the world, it will need to join global efforts to address climate change, including action to avoid significant increases and to encourage a green economy. Thus, there is a need for the concept of 'greening the economy' as was prescribed at the Rio Summit of 1992. Renewable energy is one of the criteria for achieving this laudable goal of maintaining a green economy. There is a need to address climate change while facilitating continued economic growth and social progress, as energy today is critical to economic growth. Fossil fuels remain the major contributor to greenhouse gas emissions. Thus, cleaner technologies such as carbon capture and storage and renewable energy have emerged as commercially competitive. This paper sets out to examine how to achieve a low carbon economy with minimal emissions of carbon dioxide and other greenhouse gases, which is one of the outcomes of implementing a green economy. The paper also examines the different renewable energy sources, such as nuclear, wind, hydro, biofuel, and solar photovoltaic, as a panacea to the looming climate change menace. Finally, the paper assesses renewable energy and energy efficiency as drivers for generating new sources of income and jobs that, in turn, reduce carbon emissions. The research shall engage qualitative, evaluative, and comparative methods and will employ both primary and secondary sources of information. The primary sources of information shall be drawn from the sub-Saharan African region and global environmental organizations, energy legislation, policies, related industries, and judicial processes. The secondary sources will be made up of books, journal articles, commentaries, discussions, observations, explanations, expositions, suggestions, prescriptions, and other material sourced from the internet on renewable energy as a panacea to climate change. All information obtained from these sources will be subject to content analysis. The research results will show that the entire planet is warming as a result of the activities of mankind, which is clear evidence that current development is fundamentally unsustainable. Equally, the study will reveal that a low carbon development pathway should be embraced in the sub-Saharan African region to minimize the emission of greenhouse gases, for example by using renewable energy rather than coal, oil, and gas. The study concludes that until adequate strategies are devised for the use of renewable energy, the region will continue to add to and worsen the current climate change menace and other adverse environmental conditions.
Keywords: carbon dioxide, climate change, legislation/law, renewable energy
245 GIS and Remote Sensing Approach in Earthquake Hazard Assessment and Monitoring: A Case Study in the Momase Region of Papua New Guinea
Authors: Tingneyuc Sekac, Sujoy Kumar Jana, Indrajit Pal, Dilip Kumar Pal
Abstract:
Tectonism-induced tsunamis, landslides, ground shaking leading to liquefaction, infrastructure collapse, and conflagration are the common earthquake hazards experienced worldwide. Apart from human casualties, damage to built-up infrastructure such as roads, bridges, buildings, and other properties constitutes the collateral episodes. Appropriate planning must therefore be carried out in advance, with a view to safeguarding people's welfare, infrastructure, and other properties at a site, based on proper evaluation and assessment of the potential level of earthquake hazard. The resulting information can be used as a tool to assist in minimizing risk from earthquakes and can also foster appropriate construction design and the formulation of building codes at a particular site. Different disciplines adopt different approaches to assessing and monitoring earthquake hazard throughout the world. For the present study, the potential of GIS and Remote Sensing was utilized to evaluate and assess the earthquake hazards of the study region. Subsurface geology and geomorphology were the common features or factors that were assessed and integrated within a GIS environment, coupled with seismicity data layers such as Peak Ground Acceleration (PGA), historical earthquake magnitude, and earthquake depth, to evaluate and prepare liquefaction potential zones (LPZ), culminating in the earthquake hazard zonation of our study sites. Liquefaction can eventuate in the aftermath of severe ground shaking under amenable site soil conditions, geology, and geomorphology. These site conditions, or the wave propagation media, were assessed to identify the potential zones. The precept has been that during any earthquake event, the seismic wave is generated at the earthquake focus and propagates to the surface. As it propagates, it passes through certain geological, geomorphological, and soil features, which, according to their strength, stiffness, and moisture content, aggravate or attenuate the strength of wave propagation to the surface. Accordingly, the resulting intensity of shaking may or may not culminate in the collapse of built-up infrastructure. For earthquake hazard zonation, the overall assessment was carried out by integrating the seismicity data layers with the LPZ. Multi-criteria Evaluation (MCE) with Saaty's Analytical Hierarchy Process (AHP) was adopted for this study. It is a GIS technique that involves the integration of several factors (thematic layers) that can potentially contribute to liquefaction triggered by earthquake hazard. The factors are weighted and ranked in the order of their contribution to earthquake-induced liquefaction, and the weightages and rankings assigned to each factor are normalized with the AHP technique. The spatial analysis tools (i.e., raster calculator, reclassify, and overlay analysis) in ArcGIS 10 software were mainly employed in the study. The final outputs of the LPZ and earthquake hazard zones were reclassified into 'Very High', 'High', 'Moderate', 'Low', and 'Very Low' to indicate levels of hazard within the study region.
Keywords: hazard micro-zonation, liquefaction, multi criteria evaluation, tectonism
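The AHP weighting and consistency check referenced above can be sketched as follows; the pairwise comparison matrix and factor list are hypothetical, not the values used in the study.

```python
# Minimal sketch (hypothetical pairwise judgements, not the study's values):
# deriving normalized factor weights for AHP-based multi-criteria evaluation
# and checking consistency, following Saaty's method referenced above.
import numpy as np

factors = ["geology", "geomorphology", "PGA", "magnitude", "depth"]
# Hypothetical Saaty pairwise comparison matrix (reciprocal, 1-9 scale)
A = np.array([
    [1,   2,   1/3, 3,   4],
    [1/2, 1,   1/4, 2,   3],
    [3,   4,   1,   5,   6],
    [1/3, 1/2, 1/5, 1,   2],
    [1/4, 1/3, 1/6, 1/2, 1],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # normalized priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)           # consistency index
ri = 1.12                                      # Saaty's random index for n = 5
print(dict(zip(factors, weights.round(3))))
print(f"Consistency ratio: {ci / ri:.3f} (acceptable if < 0.1)")
```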
244 Influence of Protein Malnutrition and Different Stressful Conditions on Aluminum-Induced Neurotoxicity in Rats: Focus on the Possible Protection Using Epigallocatechin-3-Gallate
Authors: Azza A. Ali, Asmaa Abdelaty, Mona G. Khalil, Mona M. Kamal, Karema Abu-Elfotuh
Abstract:
Background: Aluminium (Al) is known as a neurotoxic environmental pollutant that can cause diseases such as dementia, Alzheimer's disease, and Parkinsonism. It is widely used in antacid drugs as well as in food additives and toothpaste. Stress has been linked to cognitive impairment; social isolation (SI) may exacerbate memory deficits, while protein malnutrition (PM) increases oxidative damage in the cortex, hippocampus, and cerebellum. The risk of cognitive decline may be lowered by maintaining social connections. Epigallocatechin-3-gallate (EGCG) is the most abundant catechin in green tea and has antioxidant, anti-inflammatory, and anti-atherogenic effects as well as health-promoting effects in the CNS. Objective: To study the influence of different stressful conditions, such as social isolation and electric shock (EC), and of an inadequate nutritional condition (PM) on neurotoxicity induced by Al in rats, as well as to investigate the possible protective effect of EGCG under these stressful and PM conditions. Methods: Rats were divided into two major groups: a protected group, treated daily with EGCG (10 mg/kg, IP) during the three weeks of the experiment, and a non-treated group. The protected and non-protected groups each included five subgroups as follows: one normal control that received saline and four Al toxicity groups injected daily for three weeks with AlCl3 (70 mg/kg, IP). One of these served as the Al toxicity model; two groups were subjected to different stresses, either isolation as a mild stressful condition (SI-associated Al toxicity model) or electric shock as a high stressful condition (EC-associated Al toxicity model); and the last was maintained on a 10% casein diet (PM-associated Al toxicity model). Isolated rats were housed individually in cages covered with black plastic. Biochemical changes in the brain, such as acetylcholinesterase (ACHE), Aβ, brain-derived neurotrophic factor (BDNF), inflammatory mediators (TNF-α, IL-1β), and oxidative parameters (MDA, SOD, TAC), were estimated for all groups. Histopathological changes in different brain regions were also evaluated. Results: Rats exposed to Al for three weeks showed brain neurotoxicity and neuronal degeneration. Both the mild (SI) and high (EC) stressful conditions, as well as inadequate nutrition (PM), enhanced Al-induced neurotoxicity and brain neuronal degeneration; the enhancement induced by stress, especially the high stressful condition (EC), was more pronounced than that of the inadequate nutritional condition (PM), as indicated by the significant increase in Aβ, ACHE, MDA, TNF-α, and IL-1β together with the significant decrease in SOD, TAC, and BDNF. On the other hand, EGCG showed more pronounced protection against the hazards of Al in both stressful conditions (SI and EC) than in PM. The protective effects of EGCG were indicated by the significant decrease in Aβ, ACHE, MDA, TNF-α, and IL-1β together with the increase in SOD, TAC, and BDNF, and were confirmed by brain histopathological examinations. Conclusion: Neurotoxicity and brain neuronal degeneration induced by Al were more severe with stress than with PM. EGCG can protect against Al-induced brain neuronal degeneration in all conditions. Consequently, administration of EGCG together with socialization as well as adequate protein nutrition is advised, especially on excessive Al exposure, to avoid the severity of its neuronal toxicity.
Keywords: environmental pollution, aluminum, social isolation, protein malnutrition, neuronal degeneration, epigallocatechin-3-gallate, rats
Procedia PDF Downloads 391
243 Evaluation of Redundancy Architectures Based on System on Chip Internal Interfaces for Future Unmanned Aerial Vehicles Flight Control Computer
Authors: Sebastian Hiergeist
Abstract:
It is a common view that Unmanned Aerial Vehicles (UAV) will increasingly migrate into civil airspace. This trend challenges UAV manufacturers in many ways, as numerous new requirements and functional aspects arise. At the higher application levels, these might be collision detection and avoidance and similar features, whereas all of these functions only act as inputs for the flight control components of the aircraft. The flight control computer (FCC) is the central component when it comes to ensuring continuous safe flight and landing. As these systems are flight-critical, they have to be built redundantly in order to provide fail-operational behavior. Recent architectural approaches for FCCs used in UAV systems are often based on very simple microprocessors in combination with proprietary Application-Specific Integrated Circuit (ASIC) or Field Programmable Gate Array (FPGA) extensions implementing the entire redundancy functionality. In the future, such simple microprocessors may no longer be available, as they are increasingly replaced by more sophisticated Systems on Chip (SoC). As the avionics industry cannot provide enough market power to significantly influence the development of new semiconductor products, the use of solutions from other markets is almost inevitable. Products stemming from the industrial market, developed according to IEC 61508, or automotive SoCs, developed according to ISO 26262, can be seen as candidates, as they have been designed for similar environments. Currently available SoCs from the industrial or automotive sector provide quite a broad selection of interfaces, such as Ethernet, SPI, or FlexRay, that might come into consideration for the implementation of a redundancy network. In this context, possible network architectures shall be investigated that could be established using the interfaces stated above. Of importance here is the avoidance of any single point of failure, as well as proper segregation into distinct fault containment regions. The analysis performed is supported by guidelines published by the aviation authorities (FAA and EASA) on the reliability of data networks. The main focus clearly lies on the achievable level of safety, but other aspects such as performance and determinism also play an important role and are considered in the research. Due to the further increase in design complexity of recent and future SoCs, the risk of design errors, which might lead to common-mode faults, also increases. Thus, in the context of this work, the aspect of dissimilarity is also considered in order to limit the effect of design errors. To achieve this, the work is limited to broadly available interfaces found in products from the most common silicon manufacturers. The resulting work shall support the design of future UAV FCCs by giving a guideline on building a redundancy network between SoCs using only on-board interfaces. Therefore, the author provides a detailed usability analysis of the interfaces offered by recent SoC solutions, suggestions for possible redundancy architectures based on these interfaces, and an assessment of the most relevant characteristics of the suggested network architectures, such as safety or performance. Keywords: redundancy, System-on-Chip, UAV, flight control computer (FCC)
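As an illustration of the single-point-of-failure check mentioned above, the following minimal Python sketch models candidate inter-SoC redundancy topologies as graphs and tests whether the loss of any single node or link disconnects the surviving flight-control lanes. The node names, interface labels, and candidate topologies are assumptions made for illustration and are not taken from the paper.

```python
# Illustrative sketch only: checks hypothetical inter-SoC redundancy topologies
# for single points of failure. Node names and interface labels are assumed.
import networkx as nx

def has_single_point_of_failure(graph: nx.Graph) -> bool:
    """True if removing any one node or any one link disconnects the remaining lanes."""
    for node in graph.nodes:                      # node (SoC) failure
        remaining = graph.copy()
        remaining.remove_node(node)
        if remaining.number_of_nodes() > 1 and not nx.is_connected(remaining):
            return True
    for edge in graph.edges:                      # link (e.g. Ethernet/SPI) failure
        remaining = graph.copy()
        remaining.remove_edge(*edge)
        if not nx.is_connected(remaining):
            return True
    return False

lanes = ["SoC_A", "SoC_B", "SoC_C", "SoC_D"]      # hypothetical four-lane FCC
daisy_chain = nx.Graph([("SoC_A", "SoC_B"), ("SoC_B", "SoC_C"), ("SoC_C", "SoC_D")])
ring = nx.cycle_graph(lanes)                      # e.g. a FlexRay/Ethernet ring
full_mesh = nx.complete_graph(lanes)              # point-to-point links between all SoCs

for name, topology in [("daisy chain", daisy_chain), ("ring", ring), ("full mesh", full_mesh)]:
    print(f"{name:12s} single point of failure: {has_single_point_of_failure(topology)}")
```

Under these assumptions, only the daisy chain reports a single point of failure, while the ring and full mesh survive any single fault; a real assessment would additionally weigh fault containment regions, determinism, and dissimilarity as described in the abstract.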
Procedia PDF Downloads 219
242 Changes in Physicochemical Characteristics of a Serpentine Soil and in Root Architecture of a Hyperaccumulating Plant Cropped with a Legume
Authors: Ramez F. Saad, Ahmad Kobaissi, Bernard Amiaud, Julien Ruelle, Emile Benizri
Abstract:
Agromining is a new technology that establishes agricultural systems on ultramafic soils in order to produce valuable metal compounds such as nickel (Ni), with the final aim of restoring a soil's agricultural functions. However, ultramafic soils are characterized by low fertility levels, which can limit the yields of hyperaccumulators and metal phytoextraction. The objectives of the present work were to test whether the association of a hyperaccumulating plant (Alyssum murale) and a Fabaceae (Vicia sativa var. Prontivesa) could induce changes in the physicochemical characteristics of a serpentine soil and in the root architecture of the hyperaccumulating plant, and thereby lead to efficient agromining practices through soil quality improvement. Based on standard agricultural systems consisting of the association of legumes with another crop such as wheat or rape, a three-month rhizobox experiment was carried out to study the effect of co-cropping (Co) or rotation (Ro) of a hyperaccumulating plant (Alyssum murale) with a legume (Vicia sativa), with the legume biomass incorporated into the soil, in comparison with mineral fertilization (FMo), on the structure and physicochemical properties of an ultramafic soil and on root architecture. All parameters measured on Alyssum murale (biomass, C and N contents, and Ni uptake) showed the highest values in the co-cropping system, followed by mineral fertilization and rotation (Co > FMo > Ro), except for root nickel yield, for which rotation performed better than mineral fertilization (Ro > FMo). The rhizosphere soil of Alyssum murale in co-cropping had a larger soil particle size and better aggregate stability than the other treatments. Using geostatistics, co-cropped Alyssum murale showed a greater spatial distribution of root surface area. Moreover, co-cropping and rotation induced lower soil DTPA-extractable nickel concentrations than the other treatments, but higher pH values. Alyssum murale co-cropped with a legume showed higher biomass production, improved soil physical characteristics, and enhanced nickel phytoextraction. This study showed that the introduction of a legume into Ni agromining systems could improve the dry biomass yields of the hyperaccumulating plant used and, consequently, the yields of Ni. Our strategy can decrease the need to apply fertilizers and thus minimizes the risk of nitrogen leaching and groundwater pollution. Co-cropping of Alyssum murale with the legume showed a clear tendency to increase nickel phytoextraction and plant biomass in comparison with the rotation treatment and the fertilized mono-culture. In addition, co-cropping improved soil physical characteristics and soil structure through larger and more stable aggregates. It is, therefore, reasonable to conclude that the use of legumes in Ni-agromining systems could be a good strategy to reduce chemical inputs and to restore soil agricultural functions. Improving the agromining system by replacing inorganic fertilizers could simultaneously be a safe way of rehabilitating degraded soils and a method to restore soil quality and functions, leading to the recovery of ecosystem services. Keywords: plant association, legumes, hyperaccumulating plants, ultramafic soil physicochemical properties
Procedia PDF Downloads 166
241 Water Ingress into Underground Mine Voids in the Central Rand Goldfields Area, South Africa-Fluid Induced Seismicity
Authors: Artur Cichowicz
Abstract:
The last active mine in the Central Rand Goldfields area (50 km x 15 km) ceased operations in 2008. This resulted in the closure of the pumping stations, which had previously maintained the underground water level in the mining voids. As a direct consequence of the water being allowed to flood the mine voids, seismic activity has increased directly beneath the populated area of Johannesburg. Monitoring of seismicity in the area has been ongoing for over five years using a network of 17 strong ground motion sensors. The objective of the project is to improve strategies for mine closure. The evolution of the seismicity pattern was investigated in detail. Special attention was given to seismic source parameters such as magnitude, scalar seismic moment and static stress drop. Most events are located within historical mine boundaries. The seismicity pattern shows a strong relationship between the presence of the mining void and high levels of seismicity; no seismicity migration patterns were observed outside the areas of old mining. Seven years after the pumping stopped, the evolution of the seismicity indicates that the area is not yet in equilibrium. The level of seismicity in the area does not appear to be decreasing over time, since the number of strong events, with Mw magnitudes above 2, is still as high as it was when monitoring began over five years ago. The average rate of seismic deformation is 1.6x10^13 Nm/year. Constant seismic deformation was not observed over the last five years. The deviation from the average is in the order of 6x10^13 Nm/year, which is a significant deviation. The variation of cumulative seismic moment indicates that a constant deformation rate model is not suitable. Over the most recent five-year period, the total cumulative seismic moment released in the Central Rand Basin was 9.0x10^14 Nm. This is equivalent to one earthquake of magnitude 3.9. This is significantly less than what was experienced during the mining operation. Characterization of seismicity triggered by a rising water level in the area can be achieved through the estimation of source parameters. Static stress drop heavily influences ground motion amplitude, which plays an important role in risk assessments of potential seismic hazards in inhabited areas. The observed static stress drop in this study varied from 0.05 MPa to 10 MPa. It was found that large static stress drops could be associated with both small and large events. The temporal evolution of the inter-event time provides an understanding of the physical mechanisms of earthquake interaction. Changes in the characteristics of the inter-event time are produced when a stress change is applied to a group of faults in the region. Results from this study indicate that the fluid-induced source has a shorter inter-event time in comparison to a random distribution. This behaviour corresponds to a clustering of events, in which short recurrence times tend to be close to each other, forming clusters of events. Keywords: inter-event time, fluid induced seismicity, mine closure, spectral parameters of seismic source
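The equivalence quoted above, between a cumulative seismic moment of 9.0x10^14 Nm and a single event of magnitude 3.9, can be reproduced with the standard Hanks-Kanamori moment-magnitude relation. The choice of that relation is an assumption here, since the abstract does not state which magnitude scale it uses; the short sketch below shows the arithmetic.

```python
# Sketch: converting a cumulative scalar seismic moment (N·m) into an equivalent
# moment magnitude, assuming the Hanks-Kanamori relation Mw = (2/3)*(log10(M0) - 9.1).
# The relation itself is an assumption; the abstract does not name its magnitude scale.
import math

def moment_magnitude(m0_newton_metre: float) -> float:
    return (2.0 / 3.0) * (math.log10(m0_newton_metre) - 9.1)

cumulative_moment = 9.0e14   # Nm released in the Central Rand Basin over five years
print(f"Equivalent single event: Mw = {moment_magnitude(cumulative_moment):.1f}")  # prints 3.9
```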
Procedia PDF Downloads 285
240 Assessing Sustainability of Bike Sharing Projects Using Envision™ Rating System
Authors: Tamar Trop
Abstract:
Bike sharing systems can be important elements of smart cities, as they have the potential for impact on multiple levels. These systems can add a significant alternative to other modes of mass transit in cities that are continuously looking for measures to become more livable and maintain their attractiveness for citizens, businesses, and tourism. Bike-sharing began in Europe in 1965, and a viable format emerged in the mid-2000s thanks to the introduction of information technology. The rate of growth in bike-sharing schemes and fleets has been very rapid since 2008 and has probably outstripped growth in every other form of urban transport. Today, public bike-sharing systems are available on five continents, in over 700 cities, operating more than 800,000 bicycles at approximately 40,000 docking stations. Since modern bike sharing systems have become prevalent only in the last decade, the existing literature analyzing these systems and their sustainability is relatively new. The purpose of the presented study is to assess the sustainability of these newly emerging transportation systems, using the Envision™ rating system as a methodological framework and the Israeli 'Tel-O-Fun' bike sharing project as a case study. The assessment was conducted by project team members. Envision™ is a new guidance and rating system used to assess and improve the sustainability of all types and sizes of infrastructure projects. This tool provides a holistic framework for evaluating and rating the community, environmental, and economic benefits of infrastructure projects over the course of their life cycle. The evaluation method has 60 sustainability criteria divided into five categories: quality of life, leadership, resource allocation, natural world, and climate and risk. The 'Tel-O-Fun' project was launched in Tel Aviv-Yafo in 2011 and today provides about 1,800 bikes for rent at 180 rental stations across the city. The system is based on a computer terminal located at each docking station. The highest-rated sustainable features of the project include: (a) improving quality of life by offering a low-cost and efficient form of public transit, improving community mobility and access, enabling flexible travel within a multimodal transportation system, saving commuters time and money, enhancing public health, and reducing air and noise pollution; (b) improving resource allocation by offering inexpensive and flexible last-mile connectivity, reducing space, material and energy consumption, reducing wear and tear on public roads, and maximizing the utility of existing infrastructure; and (c) reducing greenhouse gas emissions from transportation. Overall, the 'Tel-O-Fun' project scored highly as an environmentally sustainable and socially equitable infrastructure. The use of this practical evaluation framework also yielded various interesting insights into the shortcomings of the system and the characteristics of good solutions. This can contribute to the improvement of the project and may assist planners and operators of bike sharing systems in developing sustainable, efficient, and reliable transportation infrastructure within smart cities. Keywords: bike sharing, Envision™, sustainability rating system, sustainable infrastructure
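To make the structure of such a multi-criteria assessment concrete, the minimal sketch below tallies per-category scores for a project. The five category names follow the Envision™ categories cited above, but the credit names, point values, and the flat points-per-category assumption are invented for illustration and do not reproduce the actual Envision™ scoring rules.

```python
# Minimal sketch of tallying a multi-criteria sustainability assessment.
# Category names follow the five Envision(TM) categories in the abstract; the
# credit names, points, and weighting are invented and NOT the real Envision rules.
awarded = {
    "Quality of Life":     {"improve community mobility": 14, "enhance public health": 10},
    "Leadership":          {"foster collaboration": 6},
    "Resource Allocation": {"reduce energy consumption": 9, "maximize existing infrastructure": 7},
    "Natural World":       {"preserve habitat": 3},
    "Climate and Risk":    {"reduce greenhouse gas emissions": 12},
}
applicable_points_per_category = 40   # assumed flat maximum, for illustration only

for category, credits in awarded.items():
    score = sum(credits.values())
    print(f"{category:20s} {score:3d}/{applicable_points_per_category} "
          f"({100 * score / applicable_points_per_category:.0f}%)")

total = sum(sum(credits.values()) for credits in awarded.values())
print(f"{'Overall':20s} {total:3d}/{5 * applicable_points_per_category}")
```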
Procedia PDF Downloads 340
239 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing
Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan
Abstract:
This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions in which the parameters of the distribution capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in terms of capturing the stylized facts known for stock returns, namely volatility clustering, the leverage effect, skewness, kurtosis, and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are taken from a principal component analysis of various world indices; the paper presents an application to option pricing. The factors of the GARCHX model are extracted from a matrix of world indices by applying principal component analysis (PCA). The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state. The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets (in our paper, a pool of international stock indices) and sorts them by order of relative importance. The PCA computes a historical cross-asset covariance matrix and identifies principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of random variables being generated. We benchmark our model against the MN-GARCHX model, following the same PCA methodology, and against the standard Black-Scholes model. We show that our model outperforms the benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model, by capturing the stylized facts known for index returns, namely volatility clustering, the leverage effect, skewness, kurtosis, and regime dependence. Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium
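The PCA step described above, extracting uncorrelated common factors from a panel of world index returns and ordering them by explained variance, can be sketched as follows. The index names and the random return data are placeholders, and the HTSN-GARCHX estimation itself is not reproduced; this only illustrates how the exogenous factors would be obtained.

```python
# Sketch of the PCA factor-extraction step: uncorrelated factors from a panel of
# world index returns, sorted by explained variance. Data and index names are
# placeholders; the HTSN-GARCHX model itself is not implemented here.
import numpy as np

rng = np.random.default_rng(0)
n_days, indices = 1500, ["SPX", "FTSE", "DAX", "NIKKEI", "HSI"]   # assumed universe
returns = rng.normal(0.0, 0.01, size=(n_days, len(indices)))      # stand-in for real returns

centred = returns - returns.mean(axis=0)
cov = np.cov(centred, rowvar=False)                   # historical cross-asset covariance
eigenvalues, eigenvectors = np.linalg.eigh(cov)       # eigh returns ascending order
order = np.argsort(eigenvalues)[::-1]                 # sort by relative importance
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

explained = eigenvalues / eigenvalues.sum()
factors = centred @ eigenvectors[:, :3]               # three exogenous GARCHX factors

print("variance explained by first three components:", np.round(explained[:3], 3))
print("factor correlations (off-diagonal ~0):")
print(np.round(np.corrcoef(factors, rowvar=False), 3))
```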
Procedia PDF Downloads 297
238 Elements of Creativity and Innovation
Authors: Fadwa Al Bawardi
Abstract:
In March 2021, the Saudi Arabian Council of Ministers issued a decision to form a committee called the "Higher Committee for Research, Development and Innovation," a committee linked to the Council of Economic and Development Affairs, chaired by the Chairman of the Council of Economic and Development Affairs, and concerned with the development of the research, development and innovation sector in the Kingdom. In order to discuss the dimensions of this step, let us first try to answer the following questions: Is there a difference between creativity and innovation? What are the factors of creativity in the individual? Are they inherited mental factors, or are they factors that an individual acquires through learning? The methodology included surveys conducted on more than 500 individuals, male and female, between the ages of 18 and 60. And the answer is: "creativity" is the creation of a new idea, while "innovation" is the development of an already existing idea in a new, successful way. They are two sides of the same coin, as the "creative idea" needs to be developed and transformed into an "innovation" in order to achieve either strategic achievements at the level of countries and institutions, enhancing organizational intelligence, or achievements at the level of individuals. For example, the smartphone began as just a creative idea at IBM in 1994, but the actual successful innovation in manufacturing, developing, and marketing these phones came later through Apple. Nor does creativity have to be hereditary. There are three basic factors for creativity. The first factor is the presence of a challenge or an obstacle that the individual faces and seeks, through thinking, to find solutions to overcome, even if this thinking requires a long time. The second factor is the environment surrounding the individual, which includes science, training, experience gained, and the ability to use techniques, as well as the ability to assess whether an idea is feasible or not. To achieve this factor, the individual must be aware of their own skills, strengths, hobbies, and the areas in which they can be creative, and must also be self-confident and courageous enough to suggest new ideas. The third factor is experience and the ability to accept risk and a lack of initial success, and then to learn from mistakes and try again tirelessly. There are some tools and techniques that help the individual reach creative and innovative ideas, such as the Mind Map tool, in which the available information is drawn by writing a short word for each piece of information and connecting all other relevant information through clear lines, which helps logical thinking and a correct overview. There is also a tool called Flow Charts, which are graphics that show the sequence of data and expected results according to an ordered scenario of events and workflow steps, giving clarity to the ideas, their sequence, and what is expected of them. There are also other useful tools, such as the Six Hats tool, applied by a group of people for effective planning and detailed logical thinking, and the Snowball tool. All of these tools greatly help in organizing and arranging mental thoughts and in making the right decisions, and they are easy to learn, apply, and use to reach creative and innovative solutions.
The detailed figures and results of the conducted surveys are available upon request, with charts showing the percentages by gender, age group, and job category. Keywords: innovation, creativity, factors, tools
Procedia PDF Downloads 55
237 Navigating States of Emergency: A Preliminary Comparison of Online Public Reaction to COVID-19 and Monkeypox on Twitter
Authors: Antonia Egli, Theo Lynn, Pierangelo Rosati, Gary Sinclair
Abstract:
The World Health Organization (WHO) defines vaccine hesitancy as the postponement or complete refusal of vaccines and estimates a direct linkage to approximately 1.5 million avoidable deaths annually. This figure is not immune to public health developments, as has become evident since the global spread of COVID-19 from Wuhan, China in early 2020. Since then, the proliferation of influential, but oftentimes inaccurate, outdated, incomplete, or false vaccine-related information on social media has impacted hesitancy levels to a degree described by the WHO as an infodemic. The COVID-19 pandemic and related vaccine hesitancy levels resulted in 2022 in the largest drop in childhood vaccinations of the 21st century, while the prevalence of online stigma towards vaccine-hesitant consumers continues to grow. Simultaneously, a second disease has risen to global importance: Monkeypox is an infection originating from west and central Africa and, due to racially motivated online hate, was in August 2022 set to be renamed by the WHO. To better understand public reactions towards two viral infections that became global threats to public health less than two years apart, this research examines user replies to threads published by the WHO on Twitter. Replies to two Tweets from the @WHO account declaring COVID-19 and Monkeypox 'public health emergencies of international concern' on January 30, 2020, and July 23, 2022, are gathered using the Twitter application programming interface and the user mention timeline endpoint. The research methodology is unique in its analysis of stigmatizing, racist, and hateful content shared on social media within the vaccine discourse over the course of two disease outbreaks. Three distinct analyses are conducted to provide insight into (i) the most prevalent topics and sub-topics among user reactions, (ii) changes in sentiment towards the spread of the two diseases, and (iii) the presence of stigma, racism, and online hate. Findings indicate an increase in hesitancy to accept further vaccines and social distancing measures, the presence of stigmatizing content aimed primarily at anti-vaccine cohorts and of racially motivated abusive messages, and a prevalent fatigue towards disease-related news overall. This research provides value to non-profit organizations and government agencies associated with vaccines and vaccination programs by emphasizing the need for public health communication fitted to consumers' vaccine sentiments, levels of health information literacy, and degrees of trust towards public health institutions. Considering the importance of addressing fears among the vaccine-hesitant, the findings also illustrate the risk of alienation through stigmatization, lead future research toward the relatively underexamined field of online vaccine-related stigma, and discuss the potential effects of stigma on vaccine-hesitant Twitter users' decisions to vaccinate. Keywords: social marketing, social media, public health communication, vaccines
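The sentiment portion of such an analysis could look roughly like the sketch below, which scores reply texts with NLTK's VADER analyser. This is not the authors' actual pipeline: the example replies are invented, the classification thresholds are the commonly used VADER defaults, and the collection of replies through the Twitter API mentions endpoint is assumed to have happened beforehand.

```python
# Sketch of the sentiment step only, applied to reply texts assumed to have been
# collected via the Twitter API mentions timeline. Example replies are invented;
# this does not reproduce the authors' topic or stigma analyses.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyser = SentimentIntensityAnalyzer()

replies = [
    "Thank you for keeping us informed, stay safe everyone.",
    "Not another emergency declaration, we are tired of this.",
]

for text in replies:
    compound = analyser.polarity_scores(text)["compound"]
    label = "positive" if compound >= 0.05 else "negative" if compound <= -0.05 else "neutral"
    print(f"{label:8s} ({compound:+.2f}) {text}")
```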
Procedia PDF Downloads 98
236 Polarization as a Proxy of Misinformation Spreading
Authors: Michela Del Vicario, Walter Quattrociocchi, Antonio Scala, Ana Lucía Schmidt, Fabiana Zollo
Abstract:
Information, rumors, and debates may shape and impact public opinion heavily. In recent years, several concerns have been expressed about social influence on the Internet and the outcomes that online debates might have on real-world processes. Indeed, on online social networks users tend to select information that is coherent with their system of beliefs and to form groups of like-minded people – i.e., echo chambers – where they reinforce and polarize their opinions. In this way, the potential benefits of exposure to different points of view may be reduced dramatically, and individuals' views may become more and more extreme. Such a context fosters the spread of misinformation, which has always represented a socio-political and economic risk. The persistence of unsubstantiated rumors – e.g., the hypothetical and hazardous link between vaccines and autism – suggests that social media do have the power to misinform, manipulate, or control public opinion. As an example, current approaches such as debunking efforts or algorithm-driven solutions based on the reputation of the source seem to prove ineffective against collective superstition. Indeed, experimental evidence shows that confirmatory information gets accepted even when it contains deliberately false claims, while dissenting information is mainly ignored, influences users' emotions negatively, and may even increase group polarization. Moreover, confirmation bias has been shown to play a pivotal role in information cascades, posing serious warnings about the efficacy of current debunking efforts. Nevertheless, mitigation strategies have to be adopted. To generalize the problem and better understand the social dynamics behind information spreading, in this work we rely on a tight quantitative analysis to investigate the behavior of more than 300M users with respect to news consumption on Facebook over a time span of six years (2010-2015). Through a massive analysis of 920 news outlet pages, we are able to characterize the anatomy of news consumption on a global and international scale. We show that users tend to focus on a limited set of pages (selective exposure), eliciting a sharp and polarized community structure among news outlets. Moreover, we find similar patterns around the Brexit debate – the British referendum to leave the European Union – where we observe the spontaneous emergence of two well-segregated and polarized groups of users around news outlets. Our findings provide interesting insights into the determinants of polarization and the evolution of core narratives in online debating. Our main aim is to understand and map the information space on online social media by identifying non-trivial proxies for the early detection of massive informational cascades. Furthermore, by combining user traces, we are finally able to draft the main concepts and beliefs of the core narrative of an echo chamber and its related perceptions. Keywords: information spreading, misinformation, narratives, online social networks, polarization
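One common way to quantify the user-level polarization discussed above is to take, for each user, the share of interactions falling on one of two pre-identified groups of pages and rescale it to [-1, 1]. The sketch below illustrates this; the grouping of outlets, the interaction counts, and the specific measure are illustrative assumptions rather than the paper's exact definition.

```python
# Sketch of a simple user-polarization measure between two groups of news pages.
# Page groupings, counts, and the measure itself are illustrative assumptions.
from collections import Counter

group_a = {"outlet_1", "outlet_2"}
group_b = {"outlet_3", "outlet_4"}
user_likes = {
    "user_x": Counter({"outlet_1": 48, "outlet_2": 10, "outlet_3": 2}),
    "user_y": Counter({"outlet_3": 35, "outlet_4": 20}),
    "user_z": Counter({"outlet_1": 9, "outlet_3": 11}),
}

for user, likes in user_likes.items():
    a = sum(n for page, n in likes.items() if page in group_a)
    b = sum(n for page, n in likes.items() if page in group_b)
    sigma = b / (a + b)        # share of the user's activity on group B
    rho = 2 * sigma - 1        # -1: entirely in A, +1: entirely in B
    print(f"{user}: rho = {rho:+.2f}")

# A strongly bimodal distribution of rho across many users (mass near -1 and +1)
# is the signature of segregated, polarized communities (echo chambers).
```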
Procedia PDF Downloads 291
235 Microbial Analysis of Street Vended Ready-to-Eat Meat around Thohoyandou Area, Vhembe District, Limpopo Province, RSA
Authors: Tshimangadzo Jeanette Raedani, Edgar Musie, Afsatou Traore
Abstract:
Background: Street-vended meats, including chicken, pork, and beef, are popular in urban areas worldwide due to their convenience and affordability. However, these meats often pose a significant risk of foodborne disease. The high water activity, protein content, and nearly neutral pH of meat create conditions conducive to the growth of pathogenic bacteria. Street foods, particularly meats, are frequently linked to outbreaks of foodborne illness due to potential contamination from improper handling and preparation. This study aimed to assess the microbial quality and safety of street-vended ready-to-eat meat sold in the Thohoyandou area. Method: The study involved collecting 168 samples of street-vended meat, split evenly between chicken (n=84) and beef (n=84), from various vendors around Thohoyandou. The samples were randomly selected and transported under sterile conditions to the Department of Food Microbiology at the University of Venda for analysis. Each 10-gram sample was cultured on selective media: MSA for Staphylococcus aureus, EMB for E. coli O157, XLD agar for Salmonella, and Sorbitol MacConkey for Shigella. After initial culturing, presumptive colonies were sub-cultured for purification and identified through Gram staining and biochemical tests, including the catalase test, API 20E, the Kligler Iron Agar test, and the Vitek 2 system. Antibiotic susceptibility was tested using agents such as ampicillin, chloramphenicol, penicillin, neomycin, tetracycline, streptomycin, and amoxicillin. Molecular characterization was performed to identify E. coli pathotypes using multiplex PCR. Results: Out of 168 samples tested, 32 (19%) were positive for Staphylococcus spp., with the highest prevalence found in cooked chicken meat. The most common Staphylococcus species identified were S. xylosus (13.2%) and S. saprophyticus (10.5%). E. coli was present in 29 (19.3%) of the samples, with the highest prevalence in fried chicken. Antibiotic susceptibility testing showed that 100% of E. coli isolates were resistant to ampicillin, tetracycline, and penicillin, while 100% were susceptible to neomycin. Staphylococcus spp. isolates were also 100% resistant to ampicillin and 100% susceptible to neomycin. The study detected a range of virulence genes in E. coli, with prevalence rates from 13.33% to 86.67%. The identified pathotypes included EPEC, EHEC, ETEC, EAEC, and EIEC, with many isolates showing mixed pathotypes. Conclusion: The study highlighted that the microbial quality and safety of street-vended meats in Thohoyandou are inadequate, rendering them unsafe for consumption. The presence of pathogenic microorganisms in both beef and chicken samples indicates significant risks associated with poor personal hygiene and food preparation practices. This underscores the need for improved monitoring and stricter food safety measures to prevent foodborne disease and ensure consumer safety. Keywords: meat, microbial analysis, street vendors, E. coli
Procedia PDF Downloads 27
234 The Employment of Unmanned Aircraft Systems for Identification and Classification of Helicopter Landing Zones and Airdrop Zones in Calamity Situations
Authors: Marielcio Lacerda, Angelo Paulino, Elcio Shiguemori, Alvaro Damiao, Lamartine Guimaraes, Camila Anjos
Abstract:
Accurate information about the terrain is extremely important in disaster management activities or conflict. This paper proposes the use of Unmanned Aircraft Systems (UAS) for the identification of Airdrop Zones (AZs) and Helicopter Landing Zones (HLZs). In this paper, we consider AZs to be zones where troops or supplies are dropped by parachute, and HLZs to be areas where victims can be rescued. The use of digital image processing enables the automatic generation of an orthorectified mosaic and a current Digital Surface Model (DSM). This methodology allows this fundamental information for post-disaster terrain comprehension to be obtained in a short amount of time and with good accuracy. For the identification and classification of AZs and HLZs, images from a DJI drone, model Phantom 4, were used. The images were obtained with the knowledge and authorization of the responsible sectors and were duly registered with the control agencies. The flight was performed on May 24, 2017, and approximately 1,300 images were obtained during approximately one hour of flight. Afterward, new attributes were generated by Feature Extraction (FE) from the original images. The use of multispectral images and complementary attributes generated independently from them increases the accuracy of classification. The attributes used in this work include the Declivity Map and Principal Component Analysis (PCA). For the classification, four distinct classes were considered: HLZ 1 – small size (18m x 18m); HLZ 2 – medium size (23m x 23m); HLZ 3 – large size (28m x 28m); AZ (100m x 100m). The Decision Tree method Random Forest (RF) was used in this work. RF is a classification method that uses a large collection of de-correlated decision trees. Different random sets of samples are used as sampled objects. The classification result from each tree for each object is called a class vote. The resulting classification is decided by a majority of class votes. In this case, we used 200 trees for the execution of RF in the software WEKA 3.8. The classification result was visualized in QGIS Desktop 2.12.3. Through the methodology used, it was possible to classify in the study area: 6 areas as HLZ 1, 6 areas as HLZ 2, 4 areas as HLZ 3, and 2 areas as AZ. It should be noted that an area classified as AZ covers the classifications of the other classes and may be used as an AZ or as an HLZ for large (HLZ 3), medium (HLZ 2), or small helicopters (HLZ 1). Likewise, an area classified as an HLZ for large rotary-wing aircraft (HLZ 3) covers the smaller area classifications, and so on. It was concluded that images obtained through small UAVs are of great use in calamity situations, since they can provide data with high accuracy, at low cost and low risk, and with ease and agility in obtaining aerial photographs. This allows the generation, in a short time, of information about the features of the terrain that serves as an important decision support tool. Keywords: disaster management, unmanned aircraft systems, helicopter landing zones, airdrop zones, random forest
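The Random Forest step described above (200 de-correlated trees deciding by majority class vote) was run in WEKA 3.8; an equivalent sketch in Python with scikit-learn is shown below. The feature matrix and labels are random placeholders standing in for the per-area attributes (orthomosaic bands, declivity, PCA components), so only the classifier setup mirrors the abstract.

```python
# Equivalent Python sketch of the classification step (the paper used WEKA 3.8).
# Feature values and labels are random placeholders for the per-area attributes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
classes = ["HLZ1", "HLZ2", "HLZ3", "AZ"]

X = rng.normal(size=(400, 6))          # 6 attributes per candidate area (placeholder)
y = rng.choice(classes, size=400)      # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# 200 de-correlated trees; the majority of class votes decides the final label.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```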
Procedia PDF Downloads 177
233 Family Firm Internationalization: Identification of Alternative Success Pathways
Authors: Sascha Kraus, Wolfgang Hora, Philipp Stieg, Thomas Niemand, Ferdinand Thies, Matthias Filser
Abstract:
In most countries, small and medium-sized enterprises (SME) are the backbone of the economy due to their impact on job creation, innovation, and wealth creation. Moreover, ongoing globalization makes it inevitable, even for SME that traditionally focused on their domestic markets, to internationalize their business activities in order to realize further growth and survive in international markets. Thus, internationalization has become one of the most common growth strategies for SME and has received increasing scholarly attention over the last two decades. On the downside, internationalization can also be regarded as the most complex strategy that a firm can undertake. Family firms in particular are often characterized by limited financial capital, a risk-averse nature, and limited growth aspirations, so it could be argued that they are more likely to face greater challenges when taking the pathway to internationalization. Especially the triangulation of family, ownership, and management (so-called 'familiness') manifests itself in a unique behavior and decision-making process, which is often characterized by the importance given to non-economic goals and distinguishes a family firm from other businesses. Taking this into account, the concept of socio-emotional wealth (SEW) has been developed to describe the behavior of family firms. In order to investigate how different internal and external firm characteristics shape the internationalization success of family firms, we drew on a sample of 297 small and medium-sized family firms from Germany, Austria, Switzerland, and Liechtenstein. We include SEW as an essential family firm characteristic and add entrepreneurial orientation (EO) and absorptive capacity (AC) as the two major intra-organizational characteristics, as well as collaboration intensity (CI) and relational knowledge (RK) as the two major external network characteristics. Based on previous research, we assume that these characteristics are important for explaining the internationalization success of family-firm SME. Regarding the data analysis, we applied a fuzzy-set Qualitative Comparative Analysis (fsQCA), an approach that allows configurations of firm characteristics to be identified and is specifically used to study complex causal relationships where traditional regression techniques reach their limits. Results indicate that several combinations of these family firm characteristics can lead to international success, with no single characteristic permanently required. Instead, there are many roads family firms can walk down to achieve internationalization success. Consequently, our data indicate that family-owned SME are heterogeneous and that internationalization is a complex and dynamic process. Results further show that network-related characteristics occur in all sets, thus representing an essential element in the internationalization process of family-owned SME. The contribution of our study is twofold, as we investigate different forms of international expansion for family firms and how to improve them. First, we are able to broaden the understanding of the intersection between family firm and SME internationalization with respect to major intra-organizational and network-related variables. Second, from a practical perspective, we offer family firm owners a basis for building the internal capabilities needed to achieve international success. Keywords: entrepreneurial orientation, family firm, fsQCA, internationalization, socio-emotional wealth
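An fsQCA analysis of this kind begins by calibrating raw firm-level measures into fuzzy-set membership scores. The sketch below shows one common direct-calibration scheme with three anchors (full membership, crossover, full non-membership); the anchor values and the example data are assumptions and are not taken from the study.

```python
# Sketch of the fuzzy-set calibration step preceding an fsQCA analysis. The
# anchors and example scores are assumptions, not the study's actual data.
import math

def calibrate(x: float, full_non: float, crossover: float, full_mem: float) -> float:
    """Direct calibration into [0, 1] via the common log-odds scheme:
    +3 log-odds at the full-membership anchor, 0 at the crossover,
    -3 at the full-non-membership anchor."""
    if x >= crossover:
        log_odds = 3.0 * (x - crossover) / (full_mem - crossover)
    else:
        log_odds = -3.0 * (crossover - x) / (crossover - full_non)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical 7-point entrepreneurial orientation (EO) means for three firms.
eo_scores = {"firm_a": 6.2, "firm_b": 4.0, "firm_c": 2.1}
for firm, score in eo_scores.items():
    membership = calibrate(score, full_non=2.0, crossover=4.0, full_mem=6.0)
    print(f"{firm}: EO set membership = {membership:.2f}")
```

The calibrated memberships for all conditions (SEW, EO, AC, CI, RK) and the outcome then feed the truth-table minimization that identifies the alternative configurations leading to internationalization success.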
Procedia PDF Downloads 241
232 Strategies for the Optimization of Ground Resistance in Large Scale Foundations for Optimum Lightning Protection
Authors: Oibar Martinez, Clara Oliver, Jose Miguel Miranda
Abstract:
In this paper, we discuss the standard improvements that can be made to reduce the earth resistance in difficult terrains for optimum lightning protection, what the practical limitations are, and how the modeling can be refined for accurate diagnostics and ground resistance minimization. Ground resistance minimization can be pursued via three different approaches: burying vertical electrodes connected in parallel, burying horizontal conductive plates or meshes, or modifying the terrain itself, either by changing the entire terrain material in a large volume or by adding earth-enhancing compounds. The use of vertical electrodes connected in parallel poses several practical limitations. In order to prevent loss of effectiveness, it is necessary to keep a minimum distance between electrodes, typically around five times the electrode length. Otherwise, the overlapping of the local equipotential lines around each electrode reduces the efficiency of the configuration. The addition of parallel electrodes reduces the resistance and facilitates the measurement, but the basic parallel-resistor formula of circuit theory will always underestimate the final resistance. Numerical simulation of the equipotential lines around the electrodes overcomes this limitation. The resistance of a single electrode will always be proportional to the soil resistivity. Electrodes are usually installed with a backfilling material of high conductivity, which increases the effective diameter. However, the improvement is marginal, since the electrode diameter enters the estimation of the ground resistance only through a logarithmic function. Substances used for efficient chemical treatment must be environmentally friendly and must feature stability, high hygroscopicity, low corrosivity, and high electrical conductivity. A number of earth enhancement materials are commercially available. Many are composed of carbon-based materials or clays like bentonite. These materials can also be used as backfilling materials to reduce the resistance of an electrode. Chemical treatment of soil has environmental issues. Some products contain copper sulfate or other copper-based compounds, which may not be environmentally friendly. Carbon-based compounds are relatively inexpensive and have very low resistivities, but they also present corrosion issues: typically, the carbon can corrode and destroy a copper electrode in around five years. These compounds also raise potential environmental concerns. Some earthing enhancement materials contain cement, which, after installation, acquires properties very close to those of concrete. This prevents the earthing enhancement material from leaching into the soil. After analyzing different configurations, we conclude that a buried conductive ring with vertical electrodes connected periodically should be the optimum baseline solution for the grounding of a large structure installed on high-resistivity terrain. In order to show this, a practical example is presented in which we simulate the ground resistance of a conductive ring buried in a terrain with a resistivity in the range of 1 kOhm·m. Keywords: grounding improvements, large scale scientific instrument, lightning risk assessment, lightning standards
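The logarithmic dependence on electrode diameter and the optimism of the simple parallel-resistor estimate mentioned above can be illustrated with a commonly quoted approximation for a single driven vertical rod, R = rho / (2*pi*L) * (ln(8L/d) - 1). The sketch below uses that approximation; the rod dimensions and electrode count are assumptions, while the ~1 kOhm·m resistivity follows the example in the text.

```python
# Sketch using a commonly quoted single-rod approximation to show the weak
# (logarithmic) effect of diameter and the naive parallel-resistor estimate.
# Rod length, diameters, and rod count are assumptions; rho follows the text.
import math

def rod_resistance(rho_ohm_m: float, length_m: float, diameter_m: float) -> float:
    return rho_ohm_m / (2 * math.pi * length_m) * (math.log(8 * length_m / diameter_m) - 1)

rho = 1000.0   # ~1 kOhm·m terrain, as in the example in the text
L = 3.0        # assumed rod length (m)

r_thin = rod_resistance(rho, L, diameter_m=0.016)
r_thick = rod_resistance(rho, L, diameter_m=0.032)   # doubled effective diameter via backfill
print(f"single rod, d=16 mm: {r_thin:.0f} ohm; d=32 mm: {r_thick:.0f} ohm (marginal gain)")

# Naive circuit-theory estimate for n well-separated rods in parallel. Because of
# overlapping equipotential regions, the real value is higher than this lower bound.
n = 10
print(f"naive estimate for {n} rods in parallel: {r_thin / n:.0f} ohm (optimistic)")
```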
Procedia PDF Downloads 139
231 Public-Private Partnership for Better Protection of Trafficked Victims in Thailand: Case Study on Public Protection and Welfare Center in Cooperation with Jim Thompson Foundation in Occupational Development on Silk Sewing and Tailoring
Authors: Aungkana Kmonpetch
Abstract:
Protection of trafficked victims and partnership among stakeholders are established as core principles of the 5P strategies in international and national anti-human-trafficking policies. In this article, it is of interest to discuss how public-private partnerships that promote occupational development for wage employment can enhance protection for victims of trafficking who affirmatively decide they want a criminal justice intervention, using Thailand as a case. Most victims who have agreed to act as witnesses in the criminal justice system have lost income during their absence from work. The analysis of the Thai case is based on two methodological approaches: 1) interviews with victims of trafficking, protection authorities, service providers, trainers and teachers, social workers, NGOs, police, prosecutors, business owners and enterprises, the ILO, UNDP, etc.; 2) collaborative efforts created through workshops and consultation meetings with the participation of all stakeholders – governmental agencies, private organizations, and UN and international agencies. The linking of protection and partnership is anchored in international conventions and human trafficking directives. While this is framed as an advantage for the 5P strategies of anti-human trafficking – prevention, protection, prosecution, punishment, and partnership – in reality there may be more practical requirements of care and support. The article addresses how the partnership between governmental agencies and private organizations provides opportunities for trafficked victims to engage in high-skilled occupational development such as silk sewing and tailoring. The discussion also focuses on how this capacity-building approach, in which trainers train trainees, enables trafficked victims to build on high-skilled training and engage in social-enterprise business with wage employment. The partnership coordination addresses two aspects specifically: first, formulating appropriate assistance for the promotion and protection of the human rights of trafficked victims in response to the 5P strategies of anti-human-trafficking policy; second, empowering them to establish some economic stability and livelihood opportunity in the country of origin upon their return and reintegration. They can therefore define how they want to move forward and avoid vulnerable situations in which they might be trafficked again or go on to work in exploitative conditions. This strengthens proper access to protection and assistance, depending on how the incentive of protection in exchange for cooperation is perceived and how usefully the capacity building in occupational development for wage employment is implemented in practice, both in the host country and in the country of origin. This also raises the question of how victims of trafficking, as they are constituted as witnesses, are able to access the labor market and are supported with employment opportunities consistent with the concept of decent work. We discuss these issues within the broader literature on social protection, economic security, gender, law, and victimhood. Keywords: employment opportunity, occupation development, protection for victim of trafficking, public-private partnership
Procedia PDF Downloads 228