Search results for: loss probability

1358 Failure of Agricultural Soil following the Passage of Tractors

Authors: Anis Eloud, Sayed Chehaibi

Abstract:

Compaction of agricultural soils by the passage of heavy machinery over fields is a problem that concerns many agronomists and farmers, since it results in yield losses for most crops. To remedy it, and to meet the broader challenge of future food security, we must study and understand the process of soil degradation. The present review is devoted to understanding the effect of repeated passages on agricultural land. The experiments were performed on a clay-textured plot in the area of the ESIER, in order to quantify the soil compaction caused by the tractor's wheels during repeated passages over agricultural land. The test tractor was a CASE model with a power of 110 hp and a total mass of 5470 kg, of which 3500 kg rested on the rear axle and 1970 kg on the front axle. The state of soil compaction was characterized by measuring its resistance to penetration with a manually read penetrometer, together with the density and permeability of the soil. Soil moisture was measured at the same time. Measurements were made in the initial state, before any tractor passage, and after each of one to seven passes along the wheel track, with the rear tires inflated to 1.5 bar and water-ballasted to valve level, and the front tires inflated to 4 bar. Passages were spaced on average one week apart. The results show that wheel traffic over tilled farm soil leads to compaction, which increases with the number of passages, especially in the horizons above 15 cm depth. The first passage has the greatest effect. The effect of subsequent passages, however, does not follow a definite law, owing to the complex behavior of granular media and to the history of tillage and of the stresses the soil has undergone since its formation.

Keywords: wheel traffic, tractor, soil compaction, wheel

Procedia PDF Downloads 476
1357 High Titer Cellulosic Ethanol Production Achieved by Fed-Batch Prehydrolysis Simultaneous Enzymatic Saccharification and Fermentation of Sulfite Pretreated Softwood

Authors: Chengyu Dong, Shao-Yuan Leu

Abstract:

Cellulosic ethanol production from lignocellulosic biomass can reduce our reliance on fossil fuel, mitigate climate change, and stimulate rural economic development. A relatively low ethanol titer (60 g/L) limits the economic viability of a lignocellulose-based biorefinery. The titer can be increased up to 80 g/L by removing nearly all the non-cellulosic materials, but the capital cost of the pretreatment process then rises significantly. In this study, a fed-batch prehydrolysis and simultaneous saccharification and fermentation (PSSF) process was designed to convert sulfite-pretreated softwood (~30% residual lignin) to high concentrations of ethanol (80 g/L). The liquefaction time of the hydrolysis step was shortened to 24 h by employing the fed-batch strategy. Washing out the spent liquor with water could eliminate the inhibition caused by the pretreatment spent liquor; however, the ethanol yield from the lignocellulose was reduced, as fermentable sugars were also lost in the process. Fed-batch prehydrolysis of the whole slurry (i.e., liquid plus solid fractions) of pretreated softwood for 24 h, followed by simultaneous saccharification and fermentation at 28 °C, produced 80 g/L of ethanol. The fed-batch strategy is very effective at eliminating the "solids effect" of high-gravity saccharification, so concentrating the cellulose to nearly 90% during pretreatment is not necessary for a high ethanol titer. Detoxification of the pretreatment spent liquor caused sugar losses and consequently reduced the ethanol yield. The tolerance of the yeast to inhibitors was better at 28 °C; therefore, lowering the temperature of the fermentation step is a simple and effective way to reach a high ethanol titer.

Keywords: cellulosic ethanol, sulfite pretreatment, Fed batch PSSF, temperature

Procedia PDF Downloads 362
1356 A West Coast Estuarine Case Study: A Predictive Approach to Monitor Estuarine Eutrophication

Authors: Vedant Janapaty

Abstract:

Estuaries are wetlands where fresh water from streams mixes with salt water from the sea. Known as the "kidneys of our planet," they are extremely productive environments that filter pollutants, absorb floods from sea level rise, and shelter a unique ecosystem. However, eutrophication and the loss of native species are ailing our wetlands. There is a lack of uniform data collection and sparse research on correlations between satellite data and in situ measurements. Remote sensing (RS) has shown great promise in environmental monitoring. This project attempts to use satellite data and correlate its metrics with in situ observations collected at five estuaries. Satellite images were processed in Python to calculate 7 satellite indices (SIs), and average SI values were calculated per month over 23 years. Publicly available data from 6 sites at ELK were used to obtain 10 in situ parameters (OPs), whose average values were likewise calculated per month over 23 years. Linear correlations between the 7 SIs and 10 OPs were computed and found to be inadequate (correlations of 1 to 64%). Fourier transform analysis was then performed on the 7 SIs: dominant frequencies and amplitudes were extracted, and a machine learning (ML) model was trained, validated, and tested for the 10 OPs. Better correlations were observed between SIs and OPs at certain time delays (0-, 3-, 4-, and 6-month delays), and ML was performed again. The OPs saw improved R² values in the range of 0.2 to 0.93. This approach can be used to produce periodic analyses of overall wetland health from satellite indices. It demonstrates that remote sensing can be correlated with critical in situ parameters that measure eutrophication and can be used by practitioners to monitor wetland health easily.
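
As a hedged illustration of the dominant-frequency step described above (not the project's code; the index series and all names below are made up), a short Python sketch:

```python
import numpy as np

def dominant_frequencies(series, n_peaks=3, dt=1.0):
    """Return the n_peaks strongest (frequency, amplitude) pairs of a
    monthly-averaged index series, ignoring the zero-frequency mean."""
    series = np.asarray(series, dtype=float)
    series = series - series.mean()              # drop the DC component
    spectrum = np.fft.rfft(series)
    freqs = np.fft.rfftfreq(len(series), d=dt)   # cycles/month when dt = 1
    amps = np.abs(spectrum) / len(series)
    order = np.argsort(amps)[::-1]               # strongest components first
    return [(freqs[i], amps[i]) for i in order[:n_peaks]]

# Synthetic 23-year monthly index with an annual cycle, as a stand-in
months = np.arange(23 * 12)
index = (0.4 + 0.1 * np.sin(2 * np.pi * months / 12)
         + 0.02 * np.random.default_rng(0).normal(size=months.size))
print(dominant_frequencies(index))  # strongest peak near 1/12 cycles/month
```

The extracted (frequency, amplitude) pairs would then serve as features for the ML model in place of the raw series.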

Keywords: estuary, remote sensing, machine learning, Fourier transform

Procedia PDF Downloads 94
1355 Causes and Impacts of Rework Costs in Construction Projects

Authors: Muhammad Ejaz

Abstract:

Rework has been defined as "the unnecessary effort of re-doing a process or activity that was incorrectly implemented the first time." Rework is a great threat to the construction industry. By and large, due attention has not been given to avoiding the causes of rework, resulting in time and cost overruns in civil engineering projects. Besides these direct consequences, there can also be indirect consequences, such as stress, de-motivation, or the loss of future clients. When delivered products do not meet requirements or expectations, work often has to be redone. Rework occurs in various phases of the construction process and in various divisions of a company. It can occur on the construction site or in a management department, due, for example, to bad materials management. Rework can also have internal or external origins; changes in clients' expectations are an example of an external factor that might lead to rework. Rework can cause many costs to be higher than calculated at the start of the project. Rework events can have many different origins, and for this research they have been categorized into four categories: changes, errors, omissions, and damages. The research showed that the major sources of rework were an unprofessional attitude among technical staff and stakeholders' disregard of total quality management principles. It also revealed that the sources of rework do not differ greatly among project categories. The causes were further analyzed by interviewing employees: based on the existing literature, an extensive list of rework causes was compiled, and during the interviews the interviewees were asked to confirm or deny statements regarding those causes. The causes that were most frequently confirmed can be grouped into the aforementioned categories: at most 56% of the causes are change-related, at most 30% are error-related, and at most 18% fall into other categories. Therefore, by recognizing the above-mentioned factors, rework can be reduced to a great extent.

Keywords: total quality management, construction industry, cost overruns, rework, material management, client’s expectations

Procedia PDF Downloads 288
1354 Opto-Electronic Properties and Structural Phase Transition of Filled-Tetrahedral NaZnAs

Authors: R. Khenata, T. Djied, R. Ahmed, H. Baltache, S. Bin-Omran, A. Bouhemadou

Abstract:

We predict the structural, phase-transition, and opto-electronic properties of the filled-tetrahedral (Nowotny-Juza) compound NaZnAs in this study. Calculations are carried out by employing the full-potential (FP) linearized augmented plane wave (LAPW) plus local orbitals (lo) scheme developed within the framework of density functional theory (DFT). The exchange-correlation energy/potential (EXC/VXC) is treated using the Perdew-Burke-Ernzerhof (PBE) parameterization of the generalized gradient approximation (GGA). In addition, the Tran-Blaha (TB) modified Becke-Johnson (mBJ) potential is incorporated to obtain better precision for the optoelectronic properties. Geometry optimization is carried out to obtain reliable results for the total energy and the other structural parameters of each phase of the NaZnAs compound. The order of the structural transitions as a function of pressure is found to be: Cu2Sb-type → β → α phase. Our calculated electronic band structures for all structural phases, at the level of both PBE-GGA and the mBJ potential, indicate that NaZnAs is a direct (Γ–Γ) band gap semiconductor; compared to PBE-GGA, however, the mBJ potential yields higher values of the fundamental band gap. Regarding the optical properties, calculations of the real and imaginary parts of the dielectric function, refractive index, reflectivity coefficient, absorption coefficient, and energy loss-function spectra are performed over photon energies ranging from 0.0 to 30.0 eV, with the incident radiation polarized parallel to both the [100] and [001] crystalline directions.
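
All of the optical spectra listed above follow from the complex dielectric function ε(ω) = ε₁ + iε₂; as a hedged sketch of those standard textbook relations (not the authors' code), in Python:

```python
import numpy as np

def optical_constants(eps1, eps2):
    """Standard optical spectra from the real (eps1) and imaginary (eps2)
    parts of the dielectric function, sampled over photon energy."""
    eps1, eps2 = np.asarray(eps1, float), np.asarray(eps2, float)
    mod = np.hypot(eps1, eps2)                        # |epsilon|
    n = np.sqrt((mod + eps1) / 2.0)                   # refractive index
    k = np.sqrt((mod - eps1) / 2.0)                   # extinction coefficient
    refl = ((n - 1)**2 + k**2) / ((n + 1)**2 + k**2)  # normal-incidence reflectivity
    loss = eps2 / (eps1**2 + eps2**2)                 # energy-loss function Im(-1/eps)
    return n, k, refl, loss

# e.g. at one photon energy where eps = 10 + 2i
print(optical_constants([10.0], [2.0]))
```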

Keywords: NaZnAs, FP-LAPW+lo, structural properties, phase transition, electronic band-structure, optical properties

Procedia PDF Downloads 426
1353 Effects of High-Protein, Low-Energy Diet on Body Composition in Overweight and Obese Adults: A Clinical Trial

Authors: Makan Cheraghpour, Seyed Ahmad Hosseini, Damoon Ashtary-Larky, Saeed Shirali, Matin Ghanavati, Meysam Alipour

Abstract:

Background: In addition to reducing body weight, low-calorie diets can reduce lean body mass. It is hypothesized that a high-protein, low-calorie diet can reduce body weight while maintaining lean body mass. The current study therefore aimed at evaluating the effects of a high-protein diet with calorie restriction on body composition in overweight and obese individuals. Methods: 36 obese and overweight subjects were divided randomly into two groups. The first group received a normal-protein, low-energy diet (RDA), and the second group received a high-protein, low-energy diet (2×RDA). The anthropometric indices, including height, weight, body mass index, body fat mass, fat-free mass, and body fat percentage, were evaluated before and after the study. Results: A significant reduction was observed in the anthropometric indices in both groups (high-protein, low-energy diet and normal-protein, low-energy diet). In addition, a greater reduction in fat-free mass was observed in the normal-protein, low-energy diet group compared to the high-protein, low-energy diet group. No significant differences were observed between the two groups in the other anthropometric indices. Conclusion: Independently of the type of diet, a low-calorie diet can improve the anthropometric indices, but during weight loss a high-protein diet can help maintain fat-free mass.

Keywords: diet, high-protein, body mass index, body fat percentage

Procedia PDF Downloads 300
1352 Identifying Issues of Corporate Governance and the Effect on Organizational Performance

Authors: Abiodun Oluwaseun Ibude

Abstract:

Every now and then we hear of companies closing down their operations due to unethical practices such as overstatement of the balance sheet, concealment of debt, embezzlement of funds, declaration of false profits, and so on. This has led to the liquidation of companies and the loss of shareholders' investments as well as the interests of other stakeholders. As a result of these ugly trends, there is a need to put in place a formidable mechanism that ensures that business activities are conducted in a healthy manner. It should also promote good ethics and ensure that the interests of stakeholders and the objectives of the organization are achieved within the confines of the law, which provides criminal penalties for falsification of documents and for other irregularities. Based on the foregoing, it becomes imperative to ensure that steps are taken to stop this menace and face the challenges ahead, which calls for the practice of good governance. The purpose of this study is to identify various components of corporate governance and determine their impact on the performance of established organizations. A survey method using a questionnaire was applied to collect data for this study, which were then analyzed using correlation-coefficient statistics to generate findings, draw conclusions, and make recommendations. The research revealed that organizations contain systems, apart from regulatory agencies, that ensure effective control of activities and promote accountability and operational efficiency. However, some members of organizations fail to make use of corporate governance, which impacts negatively on organizational performance. In conclusion, good corporate governance will not be achieved unless there is openness, honesty, transparency, accountability, and fairness.

Keywords: corporate governance, formidable mechanism, company’s balance sheet, stakeholders

Procedia PDF Downloads 110
1351 Network Pharmacological Evaluation of Holy Basil Bioactive Phytochemicals for Identifying Novel Potential Inhibitors Against Neurodegenerative Disorder

Authors: Bhuvanesh Baniya

Abstract:

Alzheimer's disease is an illness responsible for neuronal cell death that results in lifelong cognitive problems. Because its mechanism remains unclear, no effective drugs are available for treatment. Herbal drugs have long served as a role model in the drug discovery process, and in the Indian medicinal system (Ayurveda) holy basil has been used for decades against several neuronal disorders such as insomnia and memory loss. This study aims to identify active components of holy basil as potential inhibitors for the treatment of Alzheimer's disease. To fulfill this objective, network pharmacology, gene ontology, pharmacokinetics analysis, molecular docking, and molecular dynamics simulation (MDS) studies were performed. A total of 7 active components of holy basil, 12 predicted neurodegenerative targets of holy basil, and 8063 Alzheimer-related targets were identified from different databases. The network analysis showed that the top ten targets, APP, EGFR, MAPK1, ESR1, HSPA4, PRKCD, MAPK3, ABL1, JUN, and GSK3B, were significant targets related to Alzheimer's disease. On the basis of the gene ontology and topology analysis results, APP was found to be a significant target related to Alzheimer's disease pathways. Further, the molecular docking results showed that various compounds had the best binding affinities, and the top MDS results suggested that these compounds could serve as potential inhibitors of the APP protein and could be useful for the treatment of Alzheimer's disease.

Keywords: holy basil, network pharmacology, neurodegeneration, active phytochemicals, molecular docking and simulation

Procedia PDF Downloads 94
1350 Rapid Classification of Soft Rot Enterobacteriaceae Phyto-Pathogens Pectobacterium and Dickeya Spp. Using Infrared Spectroscopy and Machine Learning

Authors: George Abu-Aqil, Leah Tsror, Elad Shufan, Shaul Mordechai, Mahmoud Huleihel, Ahmad Salman

Abstract:

Pectobacterium and Dickeya spp., which negatively affect a wide range of crops, are the main causes of aggressive diseases of agricultural crops. These diseases are responsible for huge economic losses in agriculture, including a severe decrease in the quality of stored vegetables and fruits. Therefore, it is important to detect these pathogenic bacteria at the early stages of infection to control their spread and consequently reduce the economic losses. In addition, early detection is vital for producing non-infected propagative material for future generations. The currently used molecular techniques for identifying these bacteria at the strain level are expensive and laborious, and other techniques require as long as ~48 h for detection. Thus, there is a clear need for rapid, inexpensive, accurate, and reliable techniques for the early detection of these bacteria. In this study, infrared spectroscopy, a well-established technique, was used for the rapid detection of Pectobacterium and Dickeya spp. at the strain level. The bacteria were isolated from potato plants and tubers with soft rot symptoms and measured by infrared spectroscopy. The obtained spectra were analyzed using different machine learning algorithms, and the performance of our approach for taxonomic classification of the bacterial samples was evaluated in terms of success rates. The success rates for correct classification at the genus, species, and strain levels were ~100%, 95.2%, and 92.6%, respectively.
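
A minimal sketch of this kind of spectra-to-label classification pipeline (illustrative only; the study's actual algorithms, preprocessing, and data are not shown, and the synthetic spectra below are stand-ins):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one preprocessed IR absorbance spectrum per row; y: strain labels.
# Synthetic stand-in data; real spectra would come from the FTIR instrument.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 900))          # 120 isolates x 900 wavenumbers
y = rng.integers(0, 4, size=120)         # 4 hypothetical strain labels

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)     # success rate per fold
print("mean classification success rate:", scores.mean())
```

Cross-validated accuracy here plays the role of the success rates reported above.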

Keywords: soft rot enterobacteriaceae (SRE), pectobacterium, dickeya, plant infections, potato, solanum tuberosum, infrared spectroscopy, machine learning

Procedia PDF Downloads 94
1349 Partial M-Sequence Code Families Applied in Spectral Amplitude Coding Fiber-Optic Code-Division Multiple-Access Networks

Authors: Shin-Pin Tseng

Abstract:

Numerous spectral amplitude coding (SAC) fiber-optic code-division multiple-access (FO-CDMA) techniques are appealing because they can provide moderate security and relieve the effects of multiuser interference (MUI). Nonetheless, the performance of previous networks is degraded by a fixed in-phase cross-correlation (IPCC) value. To address this problem, a new SAC FO-CDMA network using partial M-sequence (PMS) codes is presented in this study. Because the proposed PMS code is derived from the M-sequence code, a system using it can effectively suppress the effects of MUI. In addition, a two-code keying (TCK) scheme can be applied in the proposed SAC FO-CDMA network to enhance overall network performance. For system flexibility, simple optical encoders/decoders (codecs) using fiber Bragg gratings (FBGs) were also developed. First, we constructed a diagram of the SAC FO-CDMA network, including (N/2-1) optical transmitters, (N/2-1) optical receivers, and one N×N star coupler that broadcasts the transmitted optical signals to the input port of each optical receiver; the parameter N is the length of the PMS code. The proposed SAC network uses superluminescent diodes (SLDs) as light sources, which saves considerable system cost compared with other FO-CDMA methods. Each optical transmitter is composed of an SLD, one optical switch, and two optical encoders matched to the assigned PMS codewords. Each optical receiver includes a 1 × 2 splitter, two optical decoders, and one balanced photodiode for mitigating the effect of MUI. To simplify the analysis, some assumptions were made: first, the unpolarized SLD has a flat power spectral density (PSD); second, the received optical power at the input port of each optical receiver is the same; third, all photodiodes in the proposed network have the same electrical properties; and fourth, '1' and '0' are transmitted with equal probability. Subsequently, taking phase-induced intensity noise (PIIN) and thermal noise into account, the corresponding performance was evaluated and compared with that of previous SAC FO-CDMA networks. The numerical results show that the proposed network performs about 25% better than networks using other codes at a BER of 10⁻⁹, because the effect of PIIN is effectively mitigated and the received power is doubled. As a result, the SAC FO-CDMA network using PMS codes is a promising candidate for next-generation optical networks.
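
The paper's PMS construction is not reproduced here, but since the codes are derived from M-sequences, a hedged Python sketch of generating an M-sequence with a linear-feedback shift register may help (the primitive polynomial x⁴ + x + 1 and its taps are an assumed example, not the paper's parameters):

```python
def m_sequence(taps, n_bits, seed=1):
    """Generate one period of an M-sequence from a Fibonacci LFSR.
    taps: bit positions (1-indexed from the output end) XORed for feedback."""
    state = seed
    length = (1 << n_bits) - 1           # period of a maximal-length LFSR
    out = []
    for _ in range(length):
        out.append(state & 1)            # output the least significant bit
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (n_bits - 1))
    return out

# x^4 + x + 1 is primitive, so taps (4, 1) give a length-15 M-sequence;
# cyclic shifts of it have the two-valued correlation SAC codes exploit.
seq = m_sequence(taps=(4, 1), n_bits=4)
print(seq, len(seq))
```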

Keywords: spectral amplitude coding, SAC, fiber-optic code-division multiple-access, FO-CDMA, partial M-sequence, PMS code, fiber Bragg grating, FBG

Procedia PDF Downloads 381
1348 The Impact of Intimate Partner Violence on Women’s Mental Health in Kenya

Authors: Josephine Muchiri, Makena Muriithi

Abstract:

Adverse mental health consequences are experienced by those who have been affected by intimate partner violence (IPV), whether directly or indirectly, and these negative effects are felt not only in the short term but for years to come. It is important to examine the prevalence and co-occurrence of mental disorders in order to provide strategic interventions for women who have experienced IPV. The aim of this study was to examine the prevalence and comorbidity of post-traumatic stress disorder (PTSD), depression, and anxiety among women who had experienced intimate partner violence in two selected informal settlements in Nairobi County, Kenya. Participants were 116 women (15-60 years) selected through purposive and snowball sampling from low socio-economic settlements (Kawangware and Kibera) in Nairobi, Kenya. A socio-demographic questionnaire and the Woman Abuse Screening Tool (WAST) were used to collect data on intimate partner violence experiences. The PTSD Checklist for DSM-5 (PCL-5), Beck's Depression Inventory, and Beck's Anxiety Inventory assessed post-traumatic stress disorder, depression, and anxiety, respectively. Data analysis was conducted using the Statistical Package for the Social Sciences (SPSS) version 29, utilizing descriptive and correlation analyses. Findings indicated that the women had undergone various forms of abuse from their intimate partners: physical abuse, 111 (92.5%); sexual abuse, 70 (88.6%); and verbal abuse, 92 (93.9%). The prevalence of the mental disorders was as follows: PTSD, 47 (32.4%; M = 44.11, SD = 14.67); depression was the highest at n = 131 (90.3%; M = 33.37 ± 9.98), with prevalence varying across severity levels and severe depression having the highest representation [moderate: n = 35 (24.1%); severe: n = 69 (47.6%); extremely severe: n = 27 (18.6%)]. Anxiety had the second highest prevalence at n = 99 (68.8%; M = 28.55 ± 13.63), with differing prevalence across severity levels [normal anxiety: n = 45 (31.3%); moderate anxiety: n = 62 (43.1%); severe anxiety: n = 37 (25.7%)]. Regarding comorbidities, Pearson correlation tests showed significant positive relationships between PTSD and depression (r = 0.379, p < .001), PTSD and anxiety (r = 0.624, p < .001), and depression and anxiety (r = 0.386, p < .001), such that an increase in one disorder was accompanied by increases in the other two; hence the comorbidity of the three disorders was ascertained. Conclusion: The study confirmed the adverse impacts of IPV on women's mental well-being, establishing the prevalence of PTSD, depression, and anxiety. Almost all the women had depressive symptoms, whereas more than half had anxiety and slightly more than a third had PTSD. Regarding severity levels, almost half of the women with depression had severe depression, whereas moderate anxiety was most prevalent among those with anxiety. The three disorders were found to co-occur, with PTSD and anxiety showing the strongest association. It is thus recommended that mental health interventions focusing on the three disorders be offered to women experiencing IPV.
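
As an illustration of the kind of correlation analysis reported above (not the study's code or data; the scores below are synthetic and the shared-factor structure is assumed):

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic stand-in scores for 116 participants, sharing a latent factor
rng = np.random.default_rng(1)
latent = rng.normal(size=116)                      # shared distress factor
ptsd = 44 + 10 * latent + 8 * rng.normal(size=116)
depression = 33 + 6 * latent + 7 * rng.normal(size=116)

r, p = pearsonr(ptsd, depression)
print(f"PTSD-depression: r = {r:.3f}, p = {p:.3g}")
```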

Keywords: anxiety, comorbidity, depression, intimate partner violence, post-traumatic stress disorder

Procedia PDF Downloads 72
1347 The Effects of Changes in Accounting Standards on Loan Loss Provisions (LLP) as Earnings Management Device: Evidence from Malaysia and Nigeria Banks (Part I)

Authors: Ugbede Onalo, Mohd Lizam, Ahmad Kaseri

Abstract:

In view of the dearth of studies on changes in accounting standards and banks' earnings management, particularly in the context of emerging economies, and given the recent switch in Malaysia and Nigeria from their respective local GAAP to IFRS, this study investigates the effects of that switch on banks' earnings management, focusing on LLP as the manipulative device. The study employed judgmental sampling to select twenty-eight banks, eight Malaysian and twenty Nigerian, as the sample covering the period 2008-2013. To provide an empirical research setting in pursuit of the objective of this study, the study period is further partitioned into pre-IFRS (2008, 2009, 2010) and post-IFRS (2011, 2012, 2013) adoption periods. Consistent with previous studies, this study specifies an LLP regression model to investigate banks' specific discretionary accruals. Findings suggest that both Malaysian and Nigerian banks used LLP to manage reported earnings more prior to IFRS implementation. Comparative overall results show that the pre-IFRS (domestic GAAP) era is associated with more prevalent earnings management through LLP than the corresponding post-IFRS era for both the Malaysian and Nigerian sample banks, in differing magnitudes but in favour of the Malaysian banks in both periods. With the results demonstrating that IFRS adoption is linked to lower earnings management via LLP, this study recommends the global adoption of IFRS as a reporting framework. It also recommends that Nigerian banks borrow a leaf from the good corporate governance practices of Malaysian banks.
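
The paper's exact LLP specification is not given in the abstract; as a hedged sketch, a typical LLP model in this literature regresses provisions on non-discretionary drivers and treats the residual as the discretionary component (all variable names and data below are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical bank-year panel (28 banks x 6 years = 168 observations)
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "llp": rng.normal(0.01, 0.004, 168),        # loan loss provisions / assets
    "npl": rng.normal(0.05, 0.02, 168),         # non-performing loans / assets
    "loan_growth": rng.normal(0.08, 0.05, 168),
    "ebtp": rng.normal(0.02, 0.01, 168),        # earnings before tax & provisions
})
X = sm.add_constant(df[["npl", "loan_growth", "ebtp"]])
model = sm.OLS(df["llp"], X).fit()
discretionary_llp = model.resid                 # proxy for earnings management
print(model.summary())
```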

Keywords: accounting standards, IFRS, FRS, SAS, LLP, earnings management

Procedia PDF Downloads 394
1346 Implications of Fulani Herders/Farmers Conflict on the Socio-Economic Development of Nigeria (2000-2018)

Authors: Larry E. Udu, Joseph N. Edeh

Abstract:

Unarguably, land is an indispensable factor of production and has been at the center of numerous conflicts between crop farmers and herders in Nigeria. The conflicts pose a grave challenge to life and property, to food security, and ultimately to the sustainable socio-economic development of the nation. The paper examines the causes of the Fulani herders/farmers conflicts, particularly in the Middle Belt; the number of occurrences and the extent of damage; and their socio-economic implications. A content-analysis approach was adopted as the methodology, with data drawn extensively from secondary sources. Findings reveal that the major causes of the conflict are attributable to violations of tradition and laws, trespass, and cultural factors. Consequently, the number of attacks and the level of fatalities, coupled with the displacement of farmers and the destruction of private and public facilities, impacted negatively on farm output, with attendant socio-economic implications for the sustainable livelihoods of the people and the nation at large. For instance, Mercy Corps (a global humanitarian organization), in research covering 2013-2016, asserts that a loss of $14 billion was incurred within three years; that if the conflict were resolved, the average affected household could see its income increase by at least 64 percent, and potentially 210 percent or higher; and that states affected by the conflicts lost an average of 47 percent of taxes/IGR. The paper therefore recommends strict adherence to grazing laws, a platform for dialogue centred on compromise where necessary, and encouragement of cattle farmers to build ranches for their cattle according to international standards.

Keywords: conflict, farmers, herders, Nigeria, socio-economic implications

Procedia PDF Downloads 199
1345 Composition and Distribution of Seabed Marine Litter Along Algerian Coast (Western Mediterranean)

Authors: Ahmed Inal, Samir Rouidi, Samir Bachouche

Abstract:

The present study focuses on the distribution and composition of seafloor marine litter associated with trawlable fishing areas along the Algerian coast. Sampling was done with a GOC73 bottom trawl during four demersal resource assessment cruises, in 2016, 2019, 2021, and 2022, carried out on board the R/V BELKACEM GRINE. A total of 254 fishing hauls were sampled for the assessment of marine litter. Hauls were performed at depths between 22 and 600 m, with durations between 30 and 60 min, and all sampling was conducted during daylight. After each haul, the marine litter was sorted and separated from the catch; then, following the MEDITS protocol, it was classified into six categories (plastic, rubber, metal, wood, glass, and natural fiber). Thereafter, all marine litter was counted and weighed separately to the nearest 0.5 g. The results show that the maximum marine litter densities on the seafloor of the trawlable fishing areas along the Algerian coast were 1996 items/km² in 2016, 5164 items/km² in 2019, 2173 items/km² in 2021, and 7319 items/km² in 2022. Plastic was the most abundant litter, representing 46% of the marine litter in 2016, 67% in 2019, 69% in 2021, and 74% in 2022. The weight of the marine litter per haul varied between 0.00 and 103 kg in 2016, between 0.04 and 81 kg in 2019, between 0.00 and 68 kg in 2021, and between 0.00 and 318 kg in 2022. The maximum proportion of marine litter in the total catch was approximately 66% in 2016, 90% in 2019, 65% in 2021, and 91% in 2022, and the average loss in catch was estimated at 7.4% in 2016, 8.4% in 2019, 5.7% in 2021, and 6.4% in 2022. Bathymetric and geographical variability had a significant impact on both the density and the weight of the marine litter. A marine litter monitoring program is necessary to support further solution proposals.

Keywords: composition, distribution, seabed, marine litter, algerian coast

Procedia PDF Downloads 60
1344 Using the Minnesota Multiphasic Personality Inventory-2 and Mini Mental State Examination-2 in Cognitive Behavioral Therapy: Case Studies

Authors: Cornelia-Eugenia Munteanu

Abstract:

From a psychological perspective, psychopathology is the area of clinical psychology that has at its core psychological assessment and psychotherapy. In day-to-day clinical practice, psychodiagnosis and psychotherapy are used independently, according to their intended purpose and their specific methods of application. The paper explores how the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) and Mini Mental State Examination-2 (MMSE-2) psychological tools contribute to enhancing the effectiveness of cognitive behavioral psychotherapy (CBT). This combined approach, psychotherapy in conjunction with assessment of personality and cognitive functions, is illustrated by two cases: a severe depressive episode with psychotic symptoms, and a mixed anxiety-depressive disorder. The order in which CBT, the MMPI-2, and the MMSE-2 were used in the diagnostic and therapeutic process was determined by the particularities of each case. In the first case, the sequence started with psychotherapy, followed by the administration of the blue form of the MMSE-2, the MMPI-2, and the red form of the MMSE-2. In the second case, cognitive screening with the blue form of the MMSE-2 led to a personality assessment using the MMPI-2, followed by the red form of the MMSE-2, reapplication of the MMPI-2 due to the invalidation of the first profile, and finally psychotherapy. The MMPI-2 protocols gathered useful information that directed the steps of the therapeutic intervention: a detailed symptom picture of potentially self-destructive thoughts and behaviors otherwise undetected during the interview. The memory loss and poor concentration were confirmed by MMSE-2 cognitive screening. This combined approach, psychotherapy with psychological assessment, aligns with the trend of adapting psychological services to the everyday life of contemporary man and paves the way for deepening and developing the field.

Keywords: assessment, cognitive behavioral psychotherapy, MMPI-2, MMSE-2, psychopathology

Procedia PDF Downloads 322
1343 The Gold Standard Treatment Plan for Vitiligo: A Review on Conventional and Updated Treatment Methods

Authors: Kritin K. Verma, Brian L. Ransdell

Abstract:

White patches are a symptom of vitiligo, a chronic autoimmune dermatological condition that causes a loss of pigmentation in the skin. Vitiligo can cause issues with self-esteem and quality of life while also being associated with the development of other autoimmune diseases. Current treatments in allopathy and homeopathy exist; some have been found to be toxic, whereas others have been helpful. Allopathy offers several treatment options, such as phototherapy, skin lightening preparations, immunosuppressive drugs, combined modality therapy, and steroid medications, to improve vitiligo. This presentation will review the FDA-approved topical cream Opzelura, a JAK inhibitor, and its effects on limiting vitiligo progression. Meanwhile, other non-conventional methods, such as Arsenic Sulphuratum Flavum as used in homeopathy, will be debunked based on the current literature. Most treatments still serve to arrest progression and induce skin repigmentation, and treatment plans may differ between patients depending on the location of depigmentation on the skin. Since there is no gold standard plan for treating patients with vitiligo, the oral presentation will review all topical and systemic pharmacological therapies that fight depigmentation of the skin and assess their validity through a systematic review of the literature. Because treatment options are limited in nature, all treatment methods will be covered, and an attempt will be made to formulate a gold standard treatment process for these patients.

Keywords: vitiligo, phototherapy, immunosuppressive drugs, skin lightening preparations, combined modality therapy, arsenic sulphuratum flavum, homeopathy, allopathy, gold standard, Opzelura

Procedia PDF Downloads 77
1342 Comparison of Anterolateral Thigh Flap with or without Acellular Dermal Matrix in Repair of Hypopharyngeal Squamous Cell Carcinoma Defect: A Retrospective Study

Authors: Yaya Gao, Bing Zhong, Yafeng Liu, Fei Chen

Abstract:

Aim: The purpose of this study was to explore the difference between an acellular dermal matrix (ADM) combined with an anterolateral thigh (ALT) flap and an ALT flap alone. Methods: HSCC patients treated between January 2014 and December 2018 were divided into group A (ALT) and group B (ALT+ADM), and their intraoperative information and postoperative outcomes were compared and analyzed. Results: There were 21 and 17 patients in group A and group B, respectively. The operation time, blood loss, defect size, and anastomotic vessel selection showed no significant differences between the two groups. The postoperative complications, including wound bleeding (n = 0 vs. 1, p = 0.459), wound dehiscence (n = 0 vs. 1, p = 0.459), wound infection (n = 5 vs. 3, p = 0.709), pharyngeal fistula (n = 5 vs. 4, p = 1.000), and hypoproteinemia (n = 11 vs. 12, p = 0.326), were comparable between the groups. Dysphagia at 6 months (liquid diets: n = 0 vs. 0; partial tube feeding: n = 1 vs. 1; total tube feeding: n = 1 vs. 0; p = 0.655) also showed no significant difference. However, a significant difference was observed in dysphagia at 12 months (liquid diets: n = 0 vs. 0; partial tube feeding: n = 3 vs. 1; total tube feeding: n = 10 vs. 1; p = 0.006). Conclusion: For HSCC patients, the ALT flap combined with ADM, compared to the ALT flap alone, showed better swallowing function at 12 months. The ALT flap combined with ADM may serve as a safe and feasible alternative for selected HSCC patients.

Keywords: hypopharyngeal squamous cell carcinoma, anterolateral thigh free flap, acellular dermal matrix, reconstruction, dysphagia

Procedia PDF Downloads 74
1341 Numerical Simulations of Fire in Typical Air Conditioned Railway Coach

Authors: Manoj Sarda, Abhishek Agarwal, Juhi Kaushik, Vatsal Sanjay, Arup Kumar Das

Abstract:

Railways remain the primary mode of transport in India, with one of the largest networks in the world, catering to billions of passenger journeys yearly. Catastrophic economic damage and loss of life have been encountered over the past few decades due to fires in trains. The study of fire dynamics and fire propagation plays an important role in evacuation planning and in reducing losses. A simulation-based study of the propagation of fire and soot inside an air-conditioned coach of an Indian train is presented in this paper. The finite-difference-based solver Fire Dynamics Simulator (FDS) version 6 has been used for the analysis. A single air-conditioned 3-tier coupe, closed off from the ambient surroundings by glass windows and with an occupancy of 8 people, is the basic unit of the domain; a system of three such coupes combined is taken as the fundamental unit for the entire study, to approximate the behavior of an entire coach. Flame and soot contours and concentrations are analyzed for variations in the heat release rate per unit area (HRRPUA) of the fire source, for variations in the velocity of the conditioned air circulated inside the coupes by vents, and for an alternative fire initiation and propagation mechanism via the ducts. Quantitative results for the fractional area under fire and smoke, in top and front views of the three coupes, are obtained using MATLAB (IMT). The present simulations and their findings will be useful for organizations such as the Commission of Railway Safety in designing and implementing safety and evacuation measures.

Keywords: air conditioned coaches, fire propagation, flame contour, soot flow, train fire

Procedia PDF Downloads 278
1340 Heating of the Ions by Electromagnetic Ion Cyclotron (EMIC) Waves Using Magnetospheric Multiscale (MMS) Satellite Observation

Authors: A. A. Abid

Abstract:

Magnetospheric Multiscale (MMS) satellite observations in the inner magnetosphere were used to detect the proton band of electromagnetic ion cyclotron (EMIC) waves on December 14, 2015; these waves contribute significantly to the dynamics of the magnetosphere. It was found that the intensity of the EMIC waves gradually increases with decreasing L shell. The waves are triggered by hot proton thermal anisotropy. Low-energy cold protons (ions) can be activated by the EMIC waves when the wave intensity is high; as a result, these previously invisible protons become visible. The EMIC waves also excite helium ions. EMIC waves, whose frequency in the Earth's magnetosphere ranges from 0.001 Hz to 5 Hz, have drawn a lot of attention for their ability to carry energy. Since these waves act as a mechanism for the loss of energetic electrons from the Van Allen radiation belt to the atmosphere, it is necessary to understand how and where they can be produced, as well as the direction of the waves along the magnetic field lines. This work examines how the excitation of EMIC waves is affected by the energy of the hot-proton temperature anisotropy, with a minimum resonance energy of 6.9 keV and a range of 7 to 26 keV; for hot protons with energies below the minimum resonance energy, however, the reverse effect is observed. It is demonstrated that, throughout the energy range of 1 eV to 100 eV, the number density and temperature anisotropy of the protons likewise rise as the intensity of the EMIC waves increases. Key points: 1. The analysis of EMIC waves produced by hot-proton temperature anisotropy using MMS data. 2. The number density and temperature anisotropy of the cold protons increase owing to high-intensity EMIC waves. 3. The energization of cold protons in the 1-100 eV range by EMIC waves, observed with the Magnetospheric Multiscale (MMS) satellite, has not been discussed before.

Keywords: EMIC waves, temperature anisotropy of hot protons, energization of the cold proton, magnetospheric multiscale (MMS) satellite observations

Procedia PDF Downloads 110
1339 Design, Synthesis and Evaluation of 4-(Phenylsulfonamido)Benzamide Derivatives as Selective Butyrylcholinesterase Inhibitors

Authors: Sushil Kumar Singh, Ashok Kumar, Ankit Ganeshpurkar, Ravi Singh, Devendra Kumar

Abstract:

In the spectrum of neurodegenerative diseases, Alzheimer's disease (AD) is characterized by the presence of amyloid β plaques and neurofibrillary tangles in the brain. It results in cognitive and memory impairment due to the loss of cholinergic neurons, which is considered to be one of the contributing factors. Donepezil, an acetylcholinesterase (AChE) inhibitor that also inhibits butyrylcholinesterase (BuChE) and improves memory and the brain's cognitive functions, is the most successful and most widely prescribed drug for treating the symptoms of AD. The present work concerns the design of selective BuChE inhibitors using computational techniques. Machine learning models were trained using classification algorithms, followed by screening of a diverse chemical library of compounds, and various molecular modelling and simulation techniques were used to obtain the virtual hits. The amide derivatives of 4-(phenylsulfonamido)benzoic acid were synthesized and characterized using 1H and 13C NMR, FTIR, and mass spectrometry. The enzyme inhibition assays were performed on equine plasma BuChE and electric eel AChE by the method developed by Ellman et al. Compounds 31, 34, 37, 42, 49, 52, and 54 were found to be active against equine BuChE. N-(2-chlorophenyl)-4-(phenylsulfonamido)benzamide and N-(2-bromophenyl)-4-(phenylsulfonamido)benzamide (compounds 34 and 37) displayed IC50 values of 61.32 ± 7.21 and 42.64 ± 2.17 nM, respectively, against equine plasma BuChE. Ortho-substituted derivatives were more active against BuChE; among all, the ortho-halogen and ortho-alkyl substituted derivatives were the most active, with minimal AChE inhibition. The compounds were thus selective toward BuChE.

Keywords: Alzheimer disease, butyrylcholinesterase, machine learning, sulfonamides

Procedia PDF Downloads 133
1338 Polymer Nanostructures Based Catalytic Materials for Energy and Environmental Applications

Authors: S. Ghosh, L. Ramos, A. N. Kouamé, A.-L. Teillout, H. Remita

Abstract:

Catalytic materials have attracted continuous attention due to their promising use in a variety of energy and environmental applications, including clean energy, energy conversion and storage, purification and separation, degradation of pollutants, and electrochemical reactions. With advanced synthetic technologies, polymer nanostructures and nanocomposites can be synthesized directly through a soft-template-mediated approach using swollen hexagonal mesophases, which allows the size, morphology, and structure of the polymer nanostructures to be modulated. As an alternative to conventional catalytic materials, one-dimensional PDPB polymer nanostructures show high photocatalytic activity under visible light for the degradation of pollutants, and these photocatalysts are very stable with cycling. Transmission electron microscopy (TEM) and AFM-IR characterizations reveal that the morphology and structure of the polymer nanostructures do not change after photocatalysis. These stable and cheap polymer nanofibers and metal-polymer nanocomposites are easy to process and can be reused without appreciable loss of activity. The nanocomposites were formed via a one-pot chemical redox reaction yielding 3.4 nm Pd nanoparticles on poly(diphenylbutadiyne) (PDPB) nanofibers (30 nm); the reduction of Pd(II) ions is accompanied by oxidative polymerization, leading to the composite materials. The hybrid Pd/PDPB nanocomposites were used as electrode materials for the electrocatalytic oxidation of ethanol without the support of a proton-exchange Nafion membrane. Hence, these conducting polymer nanofibers and nanocomposites offer the prospect of a new generation of efficient photocatalysts for environmental protection and of electrocatalysts for fuel cell applications.

Keywords: conducting polymer, swollen hexagonal mesophases, solar photocatalysis, electrocatalysis, water depollution

Procedia PDF Downloads 378
1337 Performance of the SOFA and APACHE II Scoring Systems in Predicting the Mortality of ICU Cases

Authors: Yu-Chuan Huang

Abstract:

Introduction: Unplanned transfers to intensive care units are associated with a higher mortality rate. They also entail a longer length of stay and prevent intensive care unit beds from being used effectively, which affects the immediate medical treatment of critically ill patients and results in a drop in the quality of medical care. Purpose: The purpose of this study was to use the SOFA and APACHE II scores to analyze the mortality of cases transferred from the emergency department (ED) to the ICU, so that appropriate care can be provided as early as possible according to the score. Methods: This study used a descriptive experimental design. The sample size was estimated at 220 to reach a power of 0.8 for detecting a medium effect size of 0.30 at a 0.05 significance level, using G*Power; allowing for an estimated follow-up loss, the required sample size was estimated as 242 participants. SOFA and APACHE II scores were calculated from the medical records of cases transferred from the ED to the ICU in 2016. Results: 233 participants met the study criteria, and the medical records showed 33 deaths. Age and sex versus qSOFA and SOFA, and sex versus APACHE II, showed p > 0.05, while age versus APACHE II in the ED and ICU showed r = 0.150 and 0.268 (p < 0.001). The scores' association with mortality risk was as follows: ED qSOFA, r = 0.235 (p < 0.001), exp(B) = 1.685 (p = 0.007); ICU SOFA, r = 0.78 (p < 0.001), exp(B) = 1.205 (p < 0.001); APACHE II in the ED and ICU, r = 0.253 and 0.286 (p < 0.001), exp(B) = 1.041 and 1.073 (p = 0.017 and 0.001). For SOFA, a cutoff score above 15 points was identified as a predictor of a 95% mortality risk. Conclusions: The SOFA and APACHE II scores were calculated from initial laboratory data in the emergency department and during the first 24 hours of ICU admission. Both scores are significantly associated with mortality and predict it strongly. With early prediction of morbidity and mortality based on these scores, patients can be given a detailed assessment and proper care, thereby reducing mortality and length of stay.
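
The exp(B) values above are odds ratios from logistic regression; as a hedged sketch of how such a value is obtained (synthetic data and an assumed effect size, not the study's records):

```python
import numpy as np
import statsmodels.api as sm

# Simulate 233 patients: mortality depends on an admission severity score
rng = np.random.default_rng(3)
sofa = rng.integers(0, 20, size=233).astype(float)
logit_p = -4 + 0.19 * sofa                       # assumed true effect
died = (rng.random(233) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = sm.Logit(died, sm.add_constant(sofa)).fit()
print("exp(B) per 1-point SOFA increase:", np.exp(model.params[1]))
```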

Keywords: SOFA, APACHEII, mortality, ICU

Procedia PDF Downloads 142
1336 Effect of Species and Slaughtering Age on Quality Characteristics of Different Meat Cuts of Humped Cattle and Water Buffalo Bulls

Authors: Muhammad Kashif Yar, Muhammad Hayat Jaspal, Muawuz Ijaz, Zafar Hayat, Iftikhar Hussain Badar, Jamal Nasir

Abstract:

Meat quality characteristics, such as ultimate pH (pHu), color, cooking loss, and shear force, of eight wholesale meat cuts of humped cattle (Bos indicus) and water buffalo (Bubalus bubalis) bulls at two ages were evaluated. A total of 48 animals were slaughtered: 24 of each species and, within each species, 12 from each of the 18-month and 26-month age groups. At 24 h post-slaughter, eight meat cuts, i.e., tenderloin, sirloin, rump, cube roll, round, topside, silverside, and blade, were taken from the carcass. The pHu of the tenderloin (5.65 vs 5.55), sirloin (5.67 vs 5.60), cube roll (5.68 vs 5.62), and blade (5.88 vs 5.72) was significantly higher (P<0.05) in buffalo than in cattle. The tenderloin showed a significantly higher mean L* value (44.63 vs 42.23) and the sirloin a significantly lower one (42.28 vs 44.47) (P<0.05) in cattle than in buffalo, whilst only the tenderloin's mean L* value was affected by animal age. Species had a significant (P<0.05) effect on the mean a*, b*, C, and h values of all meat cuts. The shear force of the majority of meat cuts varied considerably within species and age groups: the mean shear values of the tenderloin, sirloin, cube roll, and blade were higher (P<0.05) in buffalo than in cattle, and the shear values of the rump, round, topside, and silverside increased significantly (P<0.05) with animal age. In conclusion, primal cuts of cattle showed better meat quality, especially tenderness, than those of buffalo. Furthermore, calves should be raised up to at least 26 months of age to maximize profitability by providing better quality meat.

Keywords: buffalo, cattle, meat color, meat quality, slaughtering age, tenderness

Procedia PDF Downloads 141
1335 Integration of GIS with Remote Sensing and GPS for Disaster Mitigation

Authors: Sikander Nawaz Khan

Abstract:

Natural disasters such as floods, earthquakes, cyclones, and volcanic eruptions cause immense losses of property and life every year. The current status of natural hazards and actual loss information can be determined, and probable future disasters predicted, using different remote sensing and mapping technologies. The Global Positioning System (GPS) calculates the exact position of damage and can also communicate with wireless sensor nodes embedded in potentially dangerous places. GPS provides precise and accurate locations and other related information, such as the speed, track, direction, and distance of a target object, to emergency responders. Remote sensing makes it possible to map damage without physical contact with the target area, and with the addition of more remote sensing satellites and other advancements, early warning systems are now used very efficiently. Remote sensing is applied at both local and global scales: high-resolution satellite imagery (HRSI), airborne remote sensing, and space-borne remote sensing all play a vital role in disaster management. Early on, Geographic Information Systems (GIS) were used to collect, arrange, and map spatial information, but they now have the capability to analyze spatial data as well. This analytical ability of GIS is the main reason for its adoption by emergency service providers such as the police and ambulance services. The full potential of these so-called 3S technologies cannot be realized by any of them alone; the integration of GPS and other remote sensing techniques with GIS has opened new horizons in the modeling of earth science activities. Several remote sensing cases, including the Indian Ocean tsunami in 2004, the Mount Mangart landslides, and the Pakistan-India earthquake in 2005, are described in this paper.

Keywords: disaster mitigation, GIS, GPS, remote sensing

Procedia PDF Downloads 472
1334 Limiting Freedom of Expression to Fight Radicalization: The 'Silencing' of Terrorists Does Not Always Allow Rights to 'Speak Loudly'

Authors: Arianna Vedaschi

Abstract:

This paper addresses the relationship between freedom of expression, national security, and radicalization. Is it still possible to talk about a balance between the first two elements? Or, due to the intrusion of the third, is it more appropriate to consider freedom of expression as "permanently disfigured" by securitarian concerns? In this study, both the legislative and the judicial level are taken into account, and the comparative method is employed in order to provide the reader with a complete framework of relevant issues and a workable set of solutions. The analysis starts from the finding that the tension between free speech and national security has become a major issue in democratic countries, whose very essence is continuously endangered by the ever-changing and multi-faceted threat of international terrorism. In particular, a change in terrorist groups' recruiting pattern, attracting more and more people by way of a cutting-edge communicative strategy, often employing sophisticated technology as a radicalization tool, has called on law-makers to modify their approach to dangerous speech. While traditional constitutional and criminal law used to punish speech only if it explicitly and directly incited the commission of a criminal action (the "cause-effect" model), so-called glorification offences (punishing mere ideological support for terrorism, often on the web) are becoming commonplace in the comparative scenario. Although this is a direct, and even somewhat understandable, consequence of the impending terrorist menace, this research identifies several problematic issues connected to such a preventive approach. First, from a predominantly theoretical point of view, this trend negatively impacts the already blurred line between permissible and prohibited speech. Second, from a pragmatic point of view, such legislative tools are not always able to keep up with the ongoing developments of both terrorist groups and their use of technology; in other words, there is a risk that such measures become outdated even before their application. Indeed, it seems hard to still talk about a proper balance: what was previously clearly perceived as a balancing of values (freedom of speech v. public security) has turned, in many cases, into a hierarchy with security at its apex. In light of these findings, this paper concludes that such a complex issue would perhaps be better dealt with through a combination of policies: not only criminalizing 'terrorist speech,' which should be relegated to a last-resort tool, but also acting at an even earlier stage, i.e., trying to prevent dangerous speech itself. This might be done by promoting social cohesion and the inclusion of minorities, so as to reduce the probability of people considering terrorist groups a "viable option" to deal with the lack of identification within their social contexts.

Keywords: radicalization, free speech, international terrorism, national security

Procedia PDF Downloads 195
1333 Trading off Accuracy for Speed in Powerdrill

Authors: Filip Buruiana, Alexander Hall, Reimar Hofmann, Thomas Hofmann, Silviu Ganceanu, Alexandru Tudorica

Abstract:

In-memory column-stores make interactive analysis feasible for many big data scenarios. PowerDrill is a system used internally at Google for the exploration of log data. Even though it is a highly parallelized column-store and uses in-memory caching, interactive response times cannot be achieved for all datasets (note that it is common to analyze data with 50 billion records in PowerDrill). In this paper, we investigate two orthogonal approaches to optimizing performance at the expense of an acceptable loss of accuracy. Both approaches can be implemented as outer wrappers around existing database engines, so they should be easily applicable to other systems. For the first optimization, we show that memory is the limiting factor in executing queries at speed and therefore explore possibilities to improve memory efficiency. We adapt some of the theory behind data sketches to reduce the size of particularly expensive fields in our largest tables by a factor of 4.5 when compared to a standard compression algorithm. This saves 37% of the overall memory in PowerDrill and introduces a 0.4% relative error in the 90th percentile for results of queries involving the expensive fields. Second, we evaluate the effects of sampling on accuracy and propose a simple heuristic for annotating individual result values as accurate (or not). Based on measurements of user behavior in our real production system, we show that these estimates are essential for interpreting intermediate results before final results are available. For a large set of queries, this effectively brings the 95th latency percentile down from 30 to 4 seconds.
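
A toy sketch of the sampling-with-annotation idea (not PowerDrill's implementation; the threshold and the Poisson error model are assumptions): estimate per-group counts from a uniform sample and mark an estimate accurate once its relative standard error is small.

```python
import math
import random
from collections import Counter

def sampled_group_counts(records, rate=0.01, rel_err_threshold=0.05):
    """Estimate per-group counts from a uniform sample and flag each
    estimate as accurate when its relative standard error is small."""
    sample = [r for r in records if random.random() < rate]
    counts = Counter(sample)
    results = {}
    for group, c in counts.items():
        est = c / rate                      # scale the sample count up
        se = math.sqrt(c) / rate            # ~Poisson standard error
        results[group] = (est, se, se / est <= rel_err_threshold)
    return results

records = ["us"] * 500_000 + ["de"] * 120_000 + ["nz"] * 800
for g, (est, se, ok) in sampled_group_counts(records).items():
    print(f"{g}: ~{est:.0f} (+/-{se:.0f}) {'accurate' if ok else 'rough'}")
```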

Keywords: big data, in-memory column-store, high-performance SQL queries, approximate SQL queries

Procedia PDF Downloads 254
1332 The Study of Climate Change Effects on the Performance of Thermal Power Plants in Iran

Authors: Masoud Soltani Hosseini, Fereshteh Rahmani, Mohammad Tajik Mansouri, Ali Zolghadr

Abstract:

Climate change is accompanied by rising ambient temperatures and limitations on water accessibility. The main objective of this paper is to investigate the effects of climate change on thermal power plants in Iran, including gas turbines and steam and combined cycle power plants. For this purpose, the ambient temperature increase and water accessibility are analyzed, and their effects on the power output and efficiency of thermal power plants are determined. According to the results, ambient temperature has a strong effect on steam power plants with indirect (Heller) cooling systems: the efficiency of this type of plant decreases by 0.55 percent per 1 °C of ambient temperature increase, compared with 0.52 and 0.2 percent for once-through and wet cooling systems, respectively. The decrease in power output ranges from 0.2% to 0.65% per 1 °C of air temperature increase for steam power plants with wet cooling systems and for gas turbines. Based on the distribution of thermal power plants in Iran and different climate change scenarios, the total decrease in power output due to the ambient temperature increase falls between 413 and 1661 MW. Another limitation incurred by climate change is water accessibility: in the optimistic scenario, the power output of steam plants in dry and hot climate areas decreases by 1450 MW over the coming decades, while the remaining scenarios indicate that the decrease in power output would reach 4152 MW in highland and cold-climate areas. It is therefore necessary to consider appropriate solutions to overcome these limitations. Considering all the climate change effects together, the decrease in actual power output falls between 2465 and 7294 MW, and the efficiency loss ranges from 0.12% to 0.56% in the different scenarios.
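
As a simple worked illustration of the per-degree sensitivities quoted above (coefficients from the abstract; the plant's base efficiency is hypothetical, and the losses are assumed to be percentage points):

```python
# Efficiency loss per 1 degC of ambient temperature rise, by cooling system,
# as quoted in the abstract (assumed here to be percentage points).
EFF_LOSS_PER_DEGC = {"heller": 0.55, "once_through": 0.52, "wet": 0.20}

def derated_efficiency(base_eff_pct, cooling, temp_rise_degc):
    """Linear efficiency derating for a steam plant (a rough sketch)."""
    return base_eff_pct - EFF_LOSS_PER_DEGC[cooling] * temp_rise_degc

# A hypothetical 38%-efficient Heller-cooled plant under +3 degC of warming
print(derated_efficiency(38.0, "heller", 3.0))  # -> 36.35
```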

Keywords: climate change, thermal power plants

Procedia PDF Downloads 71
1331 Investigation of the Possible Correlation of Earthquakes with Red Tide Occurrence in the Persian Gulf and Oman Sea

Authors: Hadis Hosseinzadehnaseri

Abstract:

Red tide is a kind of algal bloom that causes problems of varying scale for human life and the environment, and it has become one of the serious global concerns in the field of oceanography over recent decades. This phenomenon has affected Iran's waters, especially the Persian Gulf, in the last few years. Collecting data associated with this phenomenon and comparing them across different parts of the world is a practical way to study and control it. The factors behind this phenomenon either increase the nutrients required by the algae or provide a favorable environment for blooming. In this study, we examined the possible relation between earthquakes and harmful algal blooms in the Persian Gulf by comparing earthquake data with recorded red tides. On the one hand, earthquakes can change the seawater temperature, which helps create a suitable environment; on the other hand, they can release nutrients and transport them from the seabed, so they can play a principal role in the development of red tides. Comparing the spatial-temporal distribution maps of earthquakes and deadly red tides in the Persian Gulf and Oman Sea supports the hypothesis that there is a meaningful relation between the two distributions. Comparing the number of earthquakes around the world with the number of red tides in many parts of the world likewise indicates a correlation between the two. Given the numerous earthquakes of recent years, especially in the southern part of the country, this should be considered a warning of the possible recurrence of a critical, large-scale red tide: in 2008, the number of recorded earthquakes was higher than in the surrounding years. In that year, the red tide in the Persian Gulf covered about 140,000 square kilometers as well as the entire Oman Sea and persisted in the area for 10 months, which is considered a record among algal blooms worldwide. In this paper, we obtain a plausible relation between earthquake frequency and the occurrence of this phenomenon by compiling statistics on earthquakes in southern Iran from 2000 to the end of the first half of 2013, collecting statistics on red tide occurrence in the region, and examining similar data from different parts of the world. As shown in Figure 1, the earthquake data indicate that earthquakes in southern Iran peak in the fourth Gregorian month, April (coinciding with Ordibehesht and Khordad in the Persian calendar), and secondarily in the tenth Gregorian month, October (coinciding with Aban and Azar in the Persian calendar).
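
The comparison of earthquake and red tide counts described above can be screened with a simple correlation; the sketch below uses placeholder monthly counts, not the paper's data, and Pearson's r is only a first pass (lagged earthquake series could also be tested, since nutrient release may precede a bloom by weeks):

```python
from statistics import correlation  # Python 3.10+

# Placeholder monthly counts for one year (illustrative only).
quakes = [4, 6, 9, 14, 8, 5, 3, 4, 6, 11, 7, 5]    # southern Iran earthquakes
red_tides = [0, 1, 1, 3, 2, 1, 0, 0, 1, 2, 1, 0]   # recorded red tide events

print(f"Pearson r = {correlation(quakes, red_tides):.2f}")
```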

Keywords: red tide, earthquake, Persian Gulf, harmful algal bloom

Procedia PDF Downloads 489
1330 Revisiting the Link between Corporate Social Performance and Corporate Financial Performance Post 2008 Global Economic Crisis

Authors: Anand Choudhary

Abstract:

Following the global economic crisis of 2008, businesses, and especially the big multinational conglomerates, were increasingly viewed by people the world over as a major cause of the economic problems faced by millions: jobs were lost and lifetime savings were wiped out as banks and pension funds went bankrupt, and people stared at an insecure financial future. This caused widespread public resentment against big business and fueled protest movements such as “Occupy Wall Street” in different parts of the world. It forced big businesses to respond by adopting more people-centric policies and initiatives for local communities in the societies where they operate, as part of their corporate social responsibility (CSR), in order to regain social acceptance and earn their ‘social license to operate’. The current paper studies many such large MNCs across the United States of America, India, and South Africa that changed the way they did business after the 2008 crisis by incorporating capacity-building initiatives for local communities into their CSR strategy, and explores whether this has contributed to improving their financial performance. It is a conceptual research paper using secondary source data. The findings reveal a positive correlation between the companies’ corporate social performance and their corporate financial performance. The findings also show that the MNCs examined in this paper have improved their image in the eyes of their stakeholders following the change in their CSR strategy and initiatives.

Keywords: corporate social responsibility (CSR), corporate social performance (CSP), corporate financial performance (CFP), local communities

Procedia PDF Downloads 330
1329 The Role Played by Awareness and Complexity through the Use of a Logistic Regression Analysis

Authors: Yari Vecchio, Margherita Masi, Jorgelina Di Pasquale

Abstract:

Adoption of Precision Agriculture (PA) takes place in a multidimensional and complex scenario. The process of adopting innovations is inherently complex and social, influenced by other producers, change agents, social norms, and organizational pressure. Complexity depends on factors that interact and influence the decision to adopt. Farm and operator characteristics, as well as the organizational, informational, and agro-ecological context, directly affect adoption. This influence has been studied to measure drivers and to clarify ‘bottlenecks’ in the adoption of agricultural innovation. Decision making is a multistage procedure in which an individual passes from first hearing about the technology to final adoption. Awareness is the initial stage and represents the moment in which an individual learns about the existence of the technology. The ‘static’ concept of adoption has been superseded: awareness is a precondition to adoption. Accounting for it avoids the erroneous evaluations that arise from analyzing a population that is only partly aware of the technologies. In support of this, the present study puts forward an empirical analysis among Italian farmers, considering awareness as a prerequisite for adoption. The purpose of the present work is to analyze both the factors that affect the probability of adopting and the determinants that drive an aware individual not to adopt. Data were collected through a questionnaire administered in November 2017. A preliminary descriptive analysis shows higher levels of adoption among farmers who are younger, better educated, more exposed to information, and operating larger, more labor-intensive farms, and whose perceived complexity of the adoption process is lower. A logit model makes it possible to appreciate the weight of labor intensity and of the complexity perceived by the potential adopter in the PA adoption process. All these findings suggest important policy implications: measures dedicated to promoting innovation will need to be more specific to each phase of the adoption process. Specifically, they should increase awareness of PA tools and foster the dissemination of information so as to reduce the perceived complexity of the adoption process. These implications are particularly important in Europe, where an innovation-oriented reform of the Common Agricultural Policy has been announced. In this context, measures supporting innovation should consider the relationship between the various organizational and structural dimensions of European agriculture and innovation approaches.
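
A minimal sketch of a logit model of this kind, fitted on simulated data since the survey dataset is not reproduced in the abstract; the covariates and the signs of the assumed coefficients merely encode the descriptive findings above:

```python
import numpy as np
import statsmodels.api as sm

# Simulated stand-in for the aware subsample of surveyed farmers.
rng = np.random.default_rng(42)
n = 200
age = rng.uniform(25, 70, n)
labor_intensity = rng.uniform(0.0, 1.0, n)
perceived_complexity = rng.integers(1, 6, n).astype(float)  # 1 = low, 5 = high

# Assumed data-generating process: younger, more labor-intensive farms
# with low perceived complexity are more likely to adopt PA.
true_logit = 2.0 - 0.04 * age + 1.5 * labor_intensity - 0.6 * perceived_complexity
adopted = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

X = sm.add_constant(np.column_stack([age, labor_intensity, perceived_complexity]))
result = sm.Logit(adopted, X).fit(disp=0)
print(result.summary(xname=["const", "age", "labor_intensity",
                            "perceived_complexity"]))
```

The fitted coefficient signs then quantify the weight of each driver: a negative coefficient on perceived complexity, for instance, corresponds to the ‘bottleneck’ the abstract describes.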

Keywords: adoption, awareness, complexity, precision agriculture

Procedia PDF Downloads 134