Search results for: industry implementation
466 Parameter Selection and Monitoring for Water-Powered Percussive Drilling in Green-Fields Mineral Exploration
Authors: S. J. Addinell, T. Richard, B. Evans
Abstract:
The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled tubing based greenfields mineral exploration drilling system utilising downhole water powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. This system has shown superior rates of penetration in water-rich hard rock formations at depths exceeding 500 meters. Several key challenges exist regarding the deployment and use of these bottom hole assemblies for mineral exploration, and this paper discusses the principal technical challenges. This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process is presented and shows a strong power law relationship for particle size distributions. Several percussive drilling parameters such as RPM, applied fluid pressure and weight on bit have been shown to influence the particle size distributions of the cuttings generated. This has a direct influence on other drilling parameters such as flow loop performance, cuttings dewatering, and solids control. Real-time, accurate knowledge of percussive system operating parameters will assist the driller in maximising the efficiency of the drilling process. The applied fluid flow, fluid pressure, and rock properties are known to influence the natural oscillating frequency of the percussive hammer, but this paper shows that drill bit design, drill bit wear and the applied weight on bit can also influence the oscillation frequency. Due to the changing drilling conditions and therefore changing operating parameters, real-time understanding of the natural operating frequency is paramount to achieving system optimisation. Several techniques to determine the oscillating frequency have been investigated and are presented. With a conventional top drive drilling rig, spectral analysis of applied fluid pressure, hydraulic feed force pressure, hold back pressure and drill string vibrations has shown the presence of the operating frequency of the bottom hole tooling. However, with a coiled tubing drilling rig that uses a positive displacement downhole motor to provide drill bit rotation, these signals are not available for interrogation at the surface, and therefore another method must be considered. The investigation and analysis of ground vibrations using geophone sensors, similar to seismic-while-drilling techniques, has indicated the presence of the natural oscillating frequency of the percussive hammer. This method is shown to provide a robust technique for the determination of the downhole percussive oscillation frequency when used with a coiled tubing drill rig.
Keywords: cuttings characterization, drilling optimization, oscillation frequency, percussive drilling, spectral analysis
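As a minimal sketch of the spectral approach described above, the snippet below picks the dominant peak in the amplitude spectrum of a sampled geophone trace. The sampling rate and the 55 Hz test signal are illustrative assumptions, not values from the study, and a real implementation would band-limit the spectrum to the expected percussion range first.

```python
import numpy as np

def dominant_frequency(signal, sample_rate_hz):
    """Estimate the dominant oscillation frequency of a trace via an FFT peak."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))  # remove DC offset
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
    return freqs[np.argmax(spectrum)]

# Illustrative use: a synthetic 55 Hz "percussion" signal buried in noise.
fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
trace = np.sin(2 * np.pi * 55.0 * t) + 0.5 * np.random.randn(len(t))
print(dominant_frequency(trace, fs))  # approximately 55 Hz
```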
Procedia PDF Downloads 230
465 Collaborative Procurement in the Pursuit of Net-Zero: A Converging Journey
Authors: Bagireanu Astrid, Bros-Williamson Julio, Duncheva Mila, Currie John
Abstract:
The Architecture, Engineering, and Construction (AEC) sector plays a critical role in the global transition toward sustainable and net-zero built environments. However, the industry faces unique challenges in planning for net-zero while struggling with low productivity, cost overruns and overall resistance to change. Traditional practices fall short due to their inability to meet the requirements for systemic change, especially as governments increasingly demand transformative approaches. Working in silos and rigid hierarchies, and a short-term, client-centric approach prioritising immediate gains over long-term benefit, stand in stark contrast to the fundamental requirements for the realisation of net-zero objectives. These practices have limited capacity to effectively integrate AEC stakeholders and promote the essential knowledge sharing required to address the multifaceted challenges of achieving net-zero. In the context of the built environment, procurement may be described as the method by which a project proceeds from inception to completion. Collaborative procurement methods under the Integrated Practices (IP) umbrella have the potential to align more closely with net-zero objectives. This paper explores the synergies between collaborative procurement principles and the pursuit of net-zero in the AEC sector, drawing upon the shared values of cross-disciplinary collaboration, early supply chain involvement (ESI), use of standards and frameworks, digital information management, strategic performance measurement, integrated decision-making principles and contractual alliancing. To investigate the role of collaborative procurement in advancing net-zero objectives, a structured research methodology was employed. First, the study focuses on a systematic review of the application of collaborative procurement principles in the AEC sphere. Next, a comprehensive analysis is conducted to identify common clusters of these principles across multiple procurement methods. An evaluative comparison between traditional procurement methods and collaborative procurement for achieving net-zero objectives is presented. Then, the study identifies the intersection between collaborative procurement principles and net-zero requirements. Lastly, the study explores key insights for AEC stakeholders, focusing on the implications and practical applications of these findings. Directions for future development of this research are recommended. Adopting collaborative procurement principles can serve as a strategic framework for guiding the AEC sector towards realising net-zero. Synergising these approaches overcomes fragmentation, fosters knowledge sharing, and establishes a net-zero-centered ecosystem. In the context of ongoing efforts to improve project efficiency within the built environment, it becomes imperative for AEC stakeholders to recognise the central role of these principles. When effectively leveraged, collaborative procurement emerges as a powerful tool to surmount existing challenges in attaining net-zero objectives.
Keywords: collaborative procurement, net-zero, knowledge sharing, architecture, built environment
Procedia PDF Downloads 74
464 Immune Responses and Pathological Manifestations in Chicken to Oral Infection with Salmonella typhimurium
Authors: Mudasir Ahmad Syed, Raashid Ahmd Wani, Mashooq Ahmad Dar, Uneeb Urwat, Riaz Ahmad Shah, Nazir Ahmad Ganai
Abstract:
Salmonella enterica serovar Typhimurium (Salmonella Typhimurium) is a primary avian pathogen responsible for severe intestinal pathology in younger chickens and economic losses. However, Salmonella Typhimurium is also able to cause infection in humans, characterised by typhoid fever and acute gastro-intestinal disease. A study was conducted to investigate the pathological, histopathological, haemato-biochemical, immunological and expression kinetics of the NRAMP (natural resistance associated macrophage protein) gene family (NRAMP1 and NRAMP2) in broiler chickens at 0, 1, 3, 5, 7, 9, 11, 13 and 15 days following experimental infection with Salmonella Typhimurium. Infection was established in birds through the oral route at 2×10⁸ CFU/ml. Clinical symptoms appeared 4 days post infection (dpi), and after one week birds showed progressive weakness, anorexia, diarrhea and lowering of the head. On postmortem examination, the liver showed congestion, hemorrhage and necrotic foci on the surface, whereas the spleen, lungs and intestines revealed congestion and hemorrhages. Histopathological alterations were principally observed in the liver in the second week post infection. Changes in the liver comprised congestion, areas of necrosis, and reticular endothelial hyperplasia in association with mononuclear cell and heterophilic infiltration. Hematological studies confirmed a significant decrease (P<0.05) in RBC count, Hb concentration and PCV. White blood cell count showed a significant increase throughout the experimental study. An increase in heterophils was found up to 7 dpi and a decreasing pattern was observed afterwards. Initial lymphopenia followed by lymphocytosis was found in infected chicks. Biochemical studies showed a significant increase in glucose, AST and ALT concentration and a significant decrease (P<0.05) in total protein and albumin level in the infected group. Immunological studies showed higher titers of IgG in the infected group as compared to the control group. The real time gene expression of NRAMP1 and NRAMP2 genes increased significantly (P<0.05) in the infected group as compared to controls. The peak expression of the NRAMP1 gene was seen in the liver, spleen and caecum of infected birds at 3 dpi, 5 dpi and 7 dpi respectively, whereas the peak expression of the NRAMP2 gene in the liver, spleen and caecum of infected chickens was seen at 9 dpi, 5 dpi and 9 dpi respectively. This study has a role in diagnostics and prognostics in the poultry industry for the detection of Salmonella infections at early stages of poultry development.
Keywords: biochemistry, histopathology, NRAMP, poultry, real time expression, Salmonella Typhimurium
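The abstract does not state how the real-time expression data were quantified; assuming the conventional 2^-ΔΔCt (Livak) approach for real-time PCR, a fold-change calculation could look like the sketch below. Ct values are purely illustrative and the reference (housekeeping) gene normalisation is an assumption.

```python
def fold_change_ddct(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method (an assumed, conventional choice).

    Ct values for the target gene (e.g. NRAMP1) are normalised against a
    reference gene in both infected and control tissue before comparison.
    """
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Example: the target amplifies 2 cycles earlier (relative to the reference
# gene) in infected tissue than in controls -> ~4-fold up-regulation.
print(fold_change_ddct(24.0, 18.0, 26.0, 18.0))  # 4.0
```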
Procedia PDF Downloads 332
463 Initiating the Provision of Adolescent Reproductive Health Information and Services (ARHIS) to Communities in Quezon City, Beginning with District 2
Authors: Erickson Bernardo, Caridad Pineda
Abstract:
The project Adolescent Reproductive Health Information and Services (ARHIS) is a nine-month pilot project which intends to bridge the existing gap between reproductive health information and services, particularly with regard to family planning and HIV, among adolescent boys and girls aged 10-19 years in the 2nd Congressional District of Quezon City, in the Philippines. It aims to increase adolescents' and young people's awareness about their reproductive health concerns and at the same time make a wide range of reproductive health (RH) services accessible and available to them. A number of methodologies were utilized in the implementation of the project. At the outset, a baseline survey was conducted by community mobilizers to develop a situational analysis of adolescents' and young people's issues and concerns. The results of this survey were then presented in a multi-stakeholders' meeting to gather community support and foster their involvement. Further, interactive learning sessions (ILS) on a variety of reproductive health topics were conducted among young people, parents and community leaders, based on the results of the baseline survey. With regard to reproductive health service provision, both facility-based delivery and the conduct of outreach activities were employed. In the span of nine months, the project was able to yield the following results: • A total of 521 adolescents and youth (AY) were reached by ILS on puberty, responsible relationships, teenage pregnancy, family planning, as well as HIV & AIDS. • A total of 218 parents and community leaders were informed of AY RH-related issues and concerns. • More than 350 AYs availed of a wide range of FP services, including pills (both combined oral and progestin-only), progestin-only injectables and implants. • More than 380 AYs availed of condoms as a means of STI and HIV prevention. A noble initiative of the project is the utilization of a "condom distributor", a youth leader who has been educated about STI and HIV prevention as well as correct condom use, as the focal point for condom access in the community. • A total of 25 young people, parents, and community leaders were identified as ARHIS champions who have been instrumental in the achievement of project deliverables through their dedication and commitment to support the project. The concept of adolescent sexual and reproductive health (ASRH) remains a major challenge in the Philippine context. This is due to the fact that the majority of Filipinos are still not keen on discussing issues and concerns related to ASRH, despite the alarming number of teenage pregnancies and the rapid increase in HIV cases among 15-24 year olds. In addition, Republic Act 10354, or the Responsible Parenthood and Reproductive Health Act of 2012, requires minor adolescents to present written parental consent prior to accessing RH services. However, with the involvement and support of parents and key community stakeholders, these barriers may be addressed. The project has demonstrated how adolescents and young people yearn for reproductive health information and services.
Keywords: adolescent sexual reproductive health, barriers to access, reproductive health information and services, teenage pregnancies
Procedia PDF Downloads 178
462 Graphene-Graphene Oxide Doping Effect on the Mechanical Properties of Polyamide Composites
Authors: Daniel Sava, Dragos Gudovan, Iulia Alexandra Gudovan, Ioana Ardelean, Maria Sonmez, Denisa Ficai, Laurentia Alexandrescu, Ecaterina Andronescu
Abstract:
Graphene and graphene oxide have been intensively studied due to their very good properties, which are intrinsic to the material or arise from its easy doping with other functional groups. Graphene and graphene oxide have found a broad range of useful applications: in electronic devices, drug delivery systems, medical devices, sensors and opto-electronics, coating materials, sorbents of different agents for environmental applications, etc. The broad range of applications does not come only from the use of graphene or graphene oxide alone, or from its prior functionalization with different moieties; it is also a building block and an important component in many composite devices, its addition providing new functionalities to the final composite or strengthening those already present in the parent product. An attempt was made to improve the mechanical properties of polyamide elastomers by compounding graphene oxide into the parent polymer composition. The addition of the graphene oxide contributes to the properties of the final product, improving the hardness and aging resistance. Graphene oxide has a lower hardness and tensile strength, and if the amount of graphene oxide in the final product is not correctly estimated, it can lead to mechanical properties that are comparable to those of the starting material or even worse, with graphene oxide agglomerates becoming tearing points in the final material if the amount added is too high (greater than 3% relative to the parent material, measured in mass percentages). Two different types of tests were done on the obtained materials, the standard hardness test and the standard tensile strength test, and they were made on the obtained materials before and after the aging process. For the aging process, accelerated aging was used in order to simulate the effect of natural aging over a long period of time. The accelerated aging was performed in extreme heat. For all materials, FT-IR spectra were recorded using FT-IR spectroscopy. In the FT-IR spectra, only the bands corresponding to the polyamide were intense, while the characteristic bands for graphene oxide were very small in comparison, due to the very small amounts introduced in the final composite along with the low absorptivity of the graphene backbone and the limited number of functional groups. In conclusion, some compositions showed very promising results, both in the tensile strength test and in the hardness tests. The best ratio of graphene to elastomer was between 0.6 and 0.8%, this addition extending the life of the product. Acknowledgements: The present work was possible due to the EU-funding grant POSCCE-A2O2.2.1-2013-1, Project No. 638/12.03.2014, code SMIS-CSNR 48652. The financial contribution received from the national project ‘New nanostructured polymeric composites for centre pivot liners, centre plate and other components for the railway industry (RONERANANOSTRUCT)’, No: 18 PTE (PN-III-P2-2.1-PTE-2016-0146) is also acknowledged.
Keywords: graphene, graphene oxide, mechanical properties, doping effect
Procedia PDF Downloads 316
461 Premature Departure of Active Women from the Working World: One Year Retrospective Study in the Tunisian Center
Authors: Lamia Bouzgarrou, Amira Omrane, Malika Azzouzi, Asma Kheder, Amira Saadallah, Ilhem Boussarsar, Kamel Rejeb
Abstract:
Introduction: Increasing women’s labor force participation is a political issue in countries with developed economies and those with low growth prospects. However, in the labor market, women continue to face several obstacles, both to integration and to remaining at work. This study aims to assess the prevalence of premature withdrawal from working life (due to invalidity or medically justified early retirement) among active women in the Tunisian center and to identify its determinants. Material and methods: We conducted a cross-sectional study, over one year, focusing on agreements for invalidity or early retirement for premature wear of the body, delivered by the medical commission of the National Health Insurance Fund (CNAM) in the central Tunisian district. We exhaustively selected women's files. Data related to socio-demographic, professional and medical characteristics were collected from the CNAM's administrative and medical files. Results: During the period of one year, 222 women received an agreement for premature departure from their professional activity. Indeed, 149 women (67.11%) benefited from an invalidity agreement and 20.27% of them from a favorable decision for early retirement. The average age was 50 ± 6 years with extremes of 23 and 62 years, and 18.9% of women were under 45 years. Married women accounted for 69.4%, and 59.9% of them had at least one dependent child. The average professional seniority in the sector was 23 ± 8 years. The textile-clothing sector was the most affected, with 70.7% of premature departures. Medical reasons for withdrawal from working life were mainly related to neuro-degenerative diseases in 46.8% of cases, rheumatic ones in 35.6% of cases and cardiovascular diseases in 22.1% of them. Psychiatric and endocrine disorders motivated respectively 17.1% and 13.5% of these departures. The evaluation of the sequelae induced by these pathologies indicated an average permanent partial disability of 61.4 ± 17.3%. The analytical study concluded that the agreement for disability or early retirement was correlated with the insured's age (p = 10⁻³), professional seniority (p = 0.003) and the permanent partial incapacity (PPI) rate assessed by the expert physician (p = 0.04). No other social or professional factors were correlated with this decision. Conclusion: Despite many advances in labour law and Tunisian legal texts on employability, women are still exposed to several social and professional inequalities (pay inequality, precarious work, etc.). Indeed, women are often pushed to accept working in adverse conditions; thus, they are more vulnerable to developing premature wear of the body and to being forced into premature departure from the world of work. These premature withdrawals from active life are not only harmful to the women concerned themselves, but are also associated with considerable costs for the insurance organism and for society. In order to ensure maintenance at work for women, a political commitment is imperative in the implementation of global prevention strategies and the improvement of working conditions, particularly in our socio-cultural context.
Keywords: active women, early retirement, invalidity, maintenance at work
Procedia PDF Downloads 153
460 ‘Call Before, Save Lives’: Reducing Emergency Department Visits through Effective Communication
Authors: Sandra Cardoso, Gaspar Pais, Judite Neves, Sandra Cavaca, Fernando Araújo
Abstract:
In 2021, Portugal had 63 emergency department (ED) visits per 100 people annually, the highest number in Europe. While EDs provide a critical service, high use is indicative of inappropriate and inefficient healthcare. In Portugal, all EDs use the Manchester Triage System (MTS), a clinical risk management tool to ensure that patients are seen in order of clinical priority. In 2023, more than 40% of ED visits were for non-urgent conditions (blue and green) that could be better managed in primary health care (PHC), indicating misuse of resources and a lack of health literacy. Since 2017, the country has had a phone line, SNS24 (Contact Centre of the National Health Service), offering triage, counseling, and referral services 24 hours a day, 7 days a week. The pilot project ‘Call before, save lives’ was implemented in the municipalities of Póvoa de Varzim and Vila do Conde (around 150,000 residents), in May 2023, by the executive board of the Portuguese Health Service, with the support of the Shared Services of the Ministry of Health and local authorities. This geographical area has short travel times, 99% of the population has a family doctor, and the region is organized as a local health unit (HLU) integrating PHC and the local hospital. The purposes of this project included increasing awareness of the need to contact SNS24 before going to an ED and directing non-urgent conditions to a family doctor, thereby reducing ED visits. The implementation of the project involved two phases, beginning with: i) development of campaigns using local influencers (fishmonger, model, fireman) through local institutions and media; ii) provision of telephones installed on site to contact SNS24; iii) establishment of open consultations in PHC; iv) promotion of the use of SNS24; v) creation of acute consultations at the hospital for complex chronic patients; and vi) direct referral for home hospitalization by PHC. The results of this project showed an excellent level of access to SNS24 and an increase in the number of users referred to the ED, with great satisfaction among users and professionals. In the second phase, initiated in January 2024, prior referral was established as an admission rule for access to the ED, except in certain situations, such as trauma patients. If the patient refuses, their registration in the ED and subsequent screening in accordance with the MTS must be ensured. When the patient is non-urgent, they are not observed in the ED, provided that, according to their clinical condition, referral to PHC or to a consultation/day hospital is guaranteed through effective scheduling of an appointment for the same or the following day. In terms of results, 8 weeks after the beginning of phase 2, we observed a decrease in self-referred patients to the ED from 59% to 15%, and a reduction of around 7% in ED visits. The key to this success was an effective public campaign that increased knowledge of the right use of the health system and was capable of changing behaviors.
Keywords: contact centre of the national health service, emergency department visits, public campaign, health literacy, SNS24
Procedia PDF Downloads 69
459 Comparison of Non-destructive Devices to Quantify the Moisture Content of Bio-Based Insulation Materials on Construction Sites
Authors: Léa Caban, Lucile Soudani, Julien Berger, Armelle Nouviaire, Emilio Bastidas-Arteaga
Abstract:
Improvement of the thermal performance of buildings is a major concern for the construction industry. With the increase in environmental issues, new types of construction materials are being developed. These include bio-based insulation materials. They capture carbon dioxide, can be produced locally, and have good thermal performance. However, their behavior with respect to moisture transfer still raises some issues. Because of their high porosity, mass transfer is greater in these materials than in mineral insulation materials. Therefore, they can be more sensitive to moisture disorders such as mold growth, condensation risks or a decrease in the wall's energy efficiency. For this reason, the initial moisture content on the construction site is crucial knowledge. Measuring moisture content in a laboratory is a mastered task. Diverse methods exist, but the easiest, and the reference method, is gravimetric. A material is weighed dry and wet, and its moisture content is mathematically deduced. Non-destructive testing (NDT) methods are promising tools to determine moisture content in an easy and fast way, in a laboratory or on construction sites. However, the quality and reliability of the measurements are influenced by several factors. Classical portable NDT devices usable on-site measure the capacitance or the resistivity of materials. Water's electrical properties are very different from those of construction materials, which is why the water content can be deduced from these measurements. However, most moisture meters are made to measure wooden materials, and some of them can be adapted for construction materials with calibration curves. In any case, these devices are almost never calibrated for insulation materials. The main objective of this study is to determine the reliability of moisture meters in the measurement of bio-based insulation materials. The study determines which of the capacitive or resistive methods is the most accurate and which device gives the best results. Several bio-based insulation materials are tested: recycled cotton, two types of wood fibers of different densities (53 and 158 kg/m³) and a mix of linen, cotton, and hemp. It seems important to assess the behavior of a mineral material, so glass wool is also measured. An experimental campaign is performed in a laboratory. A gravimetric measurement of the materials is carried out for every level of moisture content. These levels are set using a climatic chamber by setting the relative humidity level at a constant temperature. The mass-based moisture contents measured are considered as reference values, and the results given by moisture meters are compared to them. A complete analysis of the measurement uncertainty is also performed. These results are used to analyze the reliability of moisture meters depending on the materials and their water content. This makes it possible to determine whether the moisture meters are reliable, and which one is the most accurate. It will then be used for future measurements on construction sites to assess the initial hygrothermal state of insulation materials, on both new-build and renovation projects.
Keywords: capacitance method, electrical resistance method, insulation materials, moisture transfer, non-destructive testing
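As a minimal sketch of the gravimetric reference measurement described above (weigh wet, weigh dry, deduce the moisture content mathematically), the helper below uses the dry-basis convention, which is an assumption since the abstract does not state which basis was used; the specimen masses are illustrative.

```python
def gravimetric_moisture_content(wet_mass_g, dry_mass_g):
    """Mass-based moisture content in percent, on a dry-mass basis (assumed convention)."""
    return 100.0 * (wet_mass_g - dry_mass_g) / dry_mass_g

# Example: a wood-fibre specimen weighing 52.4 g conditioned and 48.1 g oven-dry.
print(round(gravimetric_moisture_content(52.4, 48.1), 1))  # ~8.9 %
```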
Procedia PDF Downloads 127
458 Preparation of Biodegradable Methacrylic Nanoparticles by Semicontinuous Heterophase Polymerization for Drugs Loading: The Case of Acetylsalicylic Acid
Authors: J. Roberto Lopez, Hened Saade, Graciela Morales, Javier Enriquez, Raul G. Lopez
Abstract:
Implementation of systems based on nanostructures for drug delivery applications has gained relevance in recent studies focused on biomedical applications. Although several nanostructures can serve as drug carriers, the use of polymeric nanoparticles (PNP) has been widely studied for this purpose. However, the main issue for these nanostructures is controlling the size below 50 nm with a narrow size distribution, because they must pass through different physiological barriers and avoid being filtered by the kidneys (< 10 nm) or the spleen (> 100 nm). Thus, considering these and other factors, drug-loaded nanostructures with sizes varying between 10 and 50 nm are preferred in the development and study of PNP/drug systems. In this sense, Semicontinuous Heterophase Polymerization (SHP) offers the possibility of obtaining PNP in the desired size range. Considering the above, methacrylic copolymer nanoparticles were obtained by SHP. The reactions were carried out in a jacketed glass reactor with the required quantities of water, ammonium persulfate as initiator, sodium dodecyl sulfate/sodium dioctyl sulfosuccinate as surfactants, and methyl methacrylate and methacrylic acid as monomers with a molar ratio of 2/1, respectively. The monomer solution was dosed dropwise during the reaction at 70 °C with mechanical stirring at 650 rpm. Nanoparticles of poly(methyl methacrylate-co-methacrylic acid) were loaded with acetylsalicylic acid (ASA, aspirin) by a chemical adsorption technique. The purified latex was put in contact with a solution of ASA in dichloromethane (DCM) at 0.1, 0.2, 0.4 or 0.6 wt-%, at 35 °C for 12 hours. Given the boiling point of DCM, as well as the densities of DCM and water, the loading process is complete when all the DCM has evaporated. The hydrodynamic diameter was measured after polymerization by quasi-elastic light scattering and transmission electron microscopy, before and after the loading procedures with ASA. The quantitative and qualitative analyses of PNP loaded with ASA were performed by infrared spectroscopy, differential scanning calorimetry and thermogravimetric analysis. Also, the molar mass distributions of the polymers were determined with a gel permeation chromatography apparatus. The load capacity and efficiency were determined by gravimetric analysis. The hydrodynamic diameter results for methacrylic PNP without ASA showed a narrow distribution with an average particle size around 10 nm and a methyl methacrylate/methacrylic acid molar ratio equal to 2/1, the same composition as Eudragit S100, a commercial compound widely used as an excipient. Moreover, the latex was stabilized at a relatively high solids content (around 11%), with a monomer conversion of almost 95% and a number-average molecular weight around 400 kg/mol. The average particle size in the PNP/aspirin systems fluctuated between 18 and 24 nm depending on the initial percentage of aspirin in the loading process, with the drug content as high as 24% and a loading efficiency of 36%. Such average size results have not been reported in the literature; thus, the methacrylic nanoparticles reported here can be loaded with a considerable amount of ASA and used as a drug carrier.
Keywords: aspirin, biocompatibility, biodegradable, Eudragit S100, methacrylic nanoparticles
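The abstract reports load capacity and loading efficiency from gravimetric analysis without giving the formulas; a minimal sketch using the commonly assumed definitions is shown below. The definitions and the example masses are assumptions chosen to be consistent with the reported 24% drug content and 36% efficiency, not the study's raw data.

```python
def loading_capacity_pct(drug_loaded_mg, loaded_particle_mass_mg):
    """Drug content: loaded drug as a fraction of the loaded-particle mass (assumed definition)."""
    return 100.0 * drug_loaded_mg / loaded_particle_mass_mg

def loading_efficiency_pct(drug_loaded_mg, drug_fed_mg):
    """Fraction of the ASA initially offered that ends up in the particles (assumed definition)."""
    return 100.0 * drug_loaded_mg / drug_fed_mg

# Illustrative numbers: 24 mg ASA retained in 100 mg of loaded particles, from a 66 mg ASA feed.
print(loading_capacity_pct(24.0, 100.0))          # 24 %
print(round(loading_efficiency_pct(24.0, 66.0)))  # ~36 %
```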
Procedia PDF Downloads 141
457 Pioneering Conservation of Aquatic Ecosystems under Australian Law
Authors: Gina M. Newton
Abstract:
Australia’s Environment Protection and Biodiversity Conservation Act (EPBC Act) is the premier national law under which species and 'ecological communities' (i.e., like ecosystems) can be formally recognised and 'listed' as threatened across all jurisdictions. The listing process involves assessment against a range of criteria (similar to the IUCN process) to demonstrate conservation status (i.e., vulnerable, endangered, critically endangered, etc.) based on the best available science. Over the past decade in Australia, there has been a transition from almost solely terrestrial to the first aquatic threatened ecological community (TEC or ecosystem) listings (e.g., River Murray, Macquarie Marshes, Coastal Saltmarsh, Salt-wedge Estuaries). All constitute large areas, with some including multiple state jurisdictions. Development of these conservation and listing advices has enabled, for the first time, a more forensic analysis of three key factors across a range of aquatic and coastal ecosystems: (1) the contribution of invasive species to conservation status; (2) how to demonstrate and attribute decline in 'ecological integrity' to conservation status; and (3) identification of related priority conservation actions for management. There is increasing global recognition of the disproportionate degree of biodiversity loss within aquatic ecosystems. In Australia, legislative protection at Commonwealth or State levels remains one of the strongest conservation measures. Such laws have associated compliance mechanisms for breaches of the protected status. They also trigger the need for environmental impact statements during applications for major developments (which may be denied). However, not all jurisdictions have such laws in place. There remains much opposition to the listing of freshwater systems; for example, the River Murray (Australia's largest river) and Macquarie Marshes (an internationally significant wetland) were both disallowed by parliament four months after formal listing. This was mainly due to a change of government, dissent from a major industry sector, and a 'loophole' in the law. In Australia, at least in the immediate to medium-term time frames, invasive species (aliens, native pests, pathogens, etc.) appear to be the number one biotic threat to the biodiversity and ecological function and integrity of our aquatic ecosystems. Consequently, this should be considered a current priority for research, conservation, and management actions. Another key outcome from this analysis was the recognition that drawing together multiple lines of evidence to form a 'conservation narrative' is a more useful approach to assigning conservation status. This also helps to address a glaring gap in long-term ecological data sets in Australia, which often precludes a more empirical, data-driven approach. An important lesson also emerged: while conservation must be underpinned by the best available scientific evidence, it remains a 'social and policy' goal rather than a 'scientific' goal. Communication, engagement, and 'politics' necessarily play a significant role in achieving conservation goals and need to be managed and resourced accordingly.
Keywords: aquatic ecosystem conservation, conservation law, ecological integrity, invasive species
Procedia PDF Downloads 133
456 Audio-Visual Co-Data Processing Pipeline
Authors: Rita Chattopadhyay, Vivek Anand Thoutam
Abstract:
Speech is the most acceptable means of communication, allowing us to quickly exchange our feelings and thoughts. Quite often, people can communicate orally but cannot interact or work with computers or devices. It is easier and quicker to give speech commands than to type commands to computers. In the same way, it is easier to listen to audio played from a device than to extract output from computers or devices. Especially with robotics being an emerging market with applications in warehouses, the hospitality industry, consumer electronics, assistive technology, etc., speech-based human-machine interaction is emerging as a lucrative feature for robot manufacturers. Considering this factor, the objective of this paper is to design the “Audio-Visual Co-Data Processing Pipeline.” This pipeline is an integrated version of automatic speech recognition, a natural language model for text understanding, object detection, and text-to-speech modules. There are many deep learning models for each type of module mentioned above, but OpenVINO Model Zoo models are used because the OpenVINO toolkit covers both computer vision and non-computer vision workloads across Intel hardware, maximizes performance, and accelerates application development. A speech command is given as input that has information about the target objects to be detected and the start and end times to extract the required interval from the video. Speech is converted to text using the QuartzNet automatic speech recognition model. A summary is extracted from the text using the Generative Pre-Trained Transformer-3 (GPT-3) natural language model. Based on the summary, essential frames from the video are extracted, and the You Only Look Once (YOLO) object detection model detects objects in these extracted frames. Frame numbers that have target objects (objects specified in the speech command) are saved as text. Finally, this text (frame numbers) is converted to speech using a text-to-speech model and played from the device. This project is developed for 80 YOLO labels, and the user can extract frames based on only one or two target labels. This pipeline can be extended to more than two target labels easily by making appropriate changes in the object detection module. This project is developed for four different speech command formats by including sample examples in the prompt used by the GPT-3 model. Based on user preference, one can come up with a new speech command format by including some examples of the respective format in the prompt used by the GPT-3 model. This pipeline can be used in many projects like human-machine interfaces, human-robot interaction, and surveillance through speech commands. All object detection projects can be upgraded using this pipeline so that one can give speech commands and the output is played from the device.
Keywords: OpenVINO, automatic speech recognition, natural language processing, object detection, text to speech
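A high-level sketch of the data flow described above is given below. The helper callables (transcribe, summarise, detect_objects, synthesise_speech) are hypothetical placeholders standing in for the models named in the abstract (QuartzNet ASR, GPT-3 summarisation, YOLO detection, text-to-speech); none of these calls reflect a real OpenVINO API, and start/end are treated as frame indices for simplicity rather than timestamps.

```python
def run_pipeline(audio_command, video_frames, transcribe, summarise,
                 detect_objects, synthesise_speech):
    """Assumed data flow: speech -> text -> summary -> frame filtering -> spoken report."""
    command_text = transcribe(audio_command)            # speech command to text
    plan = summarise(command_text)                      # e.g. {"start": ..., "end": ..., "labels": [...]}
    start, end, targets = plan["start"], plan["end"], plan["labels"]

    hits = []
    for idx, frame in enumerate(video_frames):
        if start <= idx <= end:
            labels = detect_objects(frame)              # object labels found in this frame
            if any(t in labels for t in targets):
                hits.append(idx)

    report = "Target objects found in frames: " + ", ".join(map(str, hits))
    return synthesise_speech(report)                    # text report back to audio
```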
Procedia PDF Downloads 80
455 Criticality of Socio-Cultural Factors in Public Policy: A Study of Reproductive Health Care in Rural West Bengal
Authors: Arindam Roy
Abstract:
Public policy is an intriguing terrain, which involves a complex interplay of administrative, social, political and economic components. There is hardly any fit-for-all formulation of public policy, as Lindblom aptly categorized it as a science of muddling through. In fact, policies are both temporally and contextually determined, as one of the proponents of the policy sciences, Harold D. Lasswell, underscored in his 'contextual-configurative analysis' as early as the 1950s. Though a lot of theoretical effort has been made to make sense of the intricate dynamics of policy making, at the end of the day the applied area of public policy negates any such uniform, planned and systematic formulation. However, our policy makers seem to have learnt very little of that. Until recently, policy making was deemed an absolutely specialized exercise to be conducted by a cadre of professionally trained, seasoned mandarins. Attributes like homogeneity, impartiality, efficiency, and neutrality were considered the watchwords of delivering common goods. The citizen or clientele was conceptualized as a universal political or economic construct, to be taken care of uniformly. Moreover, policy makers usually have the proclivity to put anything into a straitjacket, and to ignore the nuances therein. Hence, little attention has been given to the ground-level reality, especially the socio-cultural milieu where the policy is supposed to be applied. Consequently, a substantial amount of public money goes to waste as the intended beneficiaries remain indifferent to the delivery of public policies. The present paper, in the light of reproductive health care policy in rural West Bengal, tries to underscore the criticality of socio-cultural factors in public health delivery. The Indian health sector has traversed a long way. From near non-existence at the time of independence, the Indian state has gradually built a country-wide network of health infrastructure. Yet it has to make a major breakthrough in terms of coverage and penetration of health services in rural areas. Several factors are held responsible for such a state of things. These include lack of proper infrastructure, medicine, communication, ambulatory services, doctors, nursing services and trained birth attendants. Policy makers have underlined the importance of the supply side in policy formulation and implementation. The successive policy documents concerning health delivery bear testimony to this. The present paper seeks to interrogate the supply-side oriented explanations for the failure of the delivery of health services. Instead, it looks to the demand side to find the answer. The state-led and bureaucratically engineered public health measures fail to engender demand as these measures mostly ignore socio-cultural nuances of health and well-being. Hence, the hiatus between the supply side and the demand side leads to huge wastage of revenue as health infrastructure, medicine and instruments remain unutilized in most cases. Therefore, taking proper cognizance of these factors could have streamlined the delivery of public health.
Keywords: context, policy, socio-cultural factor, uniformity
Procedia PDF Downloads 317
454 A Greener Approach towards the Synthesis of an Antimalarial Drug Lumefantrine
Authors: Luphumlo Ncanywa, Paul Watts
Abstract:
Malaria is a disease that kills approximately one million people annually. Children and pregnant women in sub-Saharan Africa lose their lives due to malaria. Malaria continues to be one of the major causes of death, especially in poor countries in Africa. Decreasing the burden of malaria and saving lives is essential. There is a major concern about malaria parasites being able to develop resistance towards antimalarial drugs. People are still dying due to a lack of medicine affordability in less well-off countries in the world. If more people could receive treatment by reducing the cost of drugs, the number of deaths in Africa could be massively reduced. There is a shortage of pharmaceutical manufacturing capability within many of the countries in Africa. However, one has to question how Africa would actually manufacture drugs, active pharmaceutical ingredients or medicines developed within these research programs. It is quite likely that such manufacturing would be outsourced overseas, hence increasing the cost of production and potentially limiting the full benefit of the original research. As a result, the last few years have seen major interest in developing more effective and cheaper technology for manufacturing generic pharmaceutical products. Micro-reactor technology (MRT) is an emerging technique that enables those working in research and development to rapidly screen reactions utilizing continuous flow, leading to the identification of reaction conditions that are suitable for use at a production level. This emerging technique will be used to develop antimalarial drugs. It is this system flexibility that has the potential to reduce both the time taken and the risk associated with transferring reaction methodology from research to production. Using an approach referred to as scale-out or numbering up, a reaction is first optimized within the laboratory using a single micro-reactor, and in order to increase production volume, the number of reactors employed is simply increased. The overall aim of this research project is to develop and optimize the synthetic process of antimalarial drugs in continuous processing. This will provide a step change in pharmaceutical manufacturing technology that will increase the availability and affordability of antimalarial drugs on a worldwide scale, with a particular emphasis on Africa in the first instance. The research will determine the best chemistry and technology to define the lowest cost manufacturing route to pharmaceutical products. We are currently developing a method to synthesize Lumefantrine in continuous flow, using the batch process as a benchmark. Lumefantrine is a dichlorobenzylidene derivative effective for the treatment of various types of malaria. Lumefantrine is an antimalarial drug used with artemether for the treatment of uncomplicated malaria. The results obtained when synthesizing Lumefantrine in a batch process are transferred into a continuous flow process in order to develop an even better and reproducible process. Therefore, the development of an appropriate synthetic route for Lumefantrine is significant for the pharmaceutical industry. Consequently, if better (and cheaper) manufacturing routes to antimalarial drugs could be developed and implemented where needed, they would be far more likely to make antimalarial drugs available to those in need.
Keywords: antimalarial, flow, lumefantrine, synthesis
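To illustrate the scale-out ("numbering up") idea mentioned above, the short sketch below computes the volume processed when identical micro-reactors run in parallel, each keeping its optimised conditions. The per-reactor flow rate, reactor count and campaign length are illustrative assumptions, not values from this study.

```python
def numbered_up_throughput_litres(flow_ml_per_min, n_reactors, hours):
    """Total volume processed by n identical micro-reactors operated in parallel."""
    return flow_ml_per_min * n_reactors * 60.0 * hours / 1000.0

# Example: one 2 mL/min reactor vs. ten in parallel over a 24 h campaign.
print(numbered_up_throughput_litres(2.0, 1, 24))   # ~2.9 L
print(numbered_up_throughput_litres(2.0, 10, 24))  # ~28.8 L
```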
Procedia PDF Downloads 204
453 Re-Entrant Direct Hexagonal Phases in a Lyotropic System Induced by Ionic Liquids
Authors: Saheli Mitra, Ramesh Karri, Praveen K. Mylapalli, Arka. B. Dey, Gourav Bhattacharya, Gouriprasanna Roy, Syed M. Kamil, Surajit Dhara, Sunil K. Sinha, Sajal K. Ghosh
Abstract:
The most well-known structures of lyotropic liquid crystalline systems are the two-dimensional hexagonal phase of cylindrical micelles with a positive interfacial curvature and the lamellar phase of flat bilayers with zero interfacial curvature. In aqueous solutions of surfactants, concentration-dependent phase transitions have been investigated extensively. However, instead of changing the surfactant concentration, the local curvature of an aggregate can be altered by tuning the electrostatic interactions among the constituent molecules. Intermediate phases with non-uniform interfacial curvature are still unexplored steps in understanding the route of the phase transition from hexagonal to lamellar. Understanding such structural evolution in lyotropic liquid crystalline systems is important as it determines the complex rheological behavior of the system, which is one of the main interests of the soft matter industry. Sodium dodecyl sulfate (SDS) is an anionic surfactant and can be considered a unique system for tuning the electrostatics with cationic additives. In the present study, imidazolium-based ionic liquids (ILs) with different numbers of carbon atoms in their single hydrocarbon chain were used as additives in the aqueous solution of SDS. At a fixed concentration of total non-aqueous components (SDS and IL), the molar ratio of these components was changed, which effectively altered the electrostatic interactions between the SDS molecules. As a result, the local curvature is observed to change, and correspondingly, the structure of the hexagonal liquid crystalline phase is transformed into other phases. Polarizing optical microscopy of the SDS and imidazolium-based IL systems has exhibited different textures of the liquid crystalline phases as a function of increasing concentration of the ILs. The small angle synchrotron x-ray diffraction (SAXD) study has indicated that the hexagonal phase of direct cylindrical micelles transforms to a rectangular phase in the presence of the short-chain (two-carbon) IL. However, the hexagonal phase is transformed to a lamellar phase in the presence of the long-chain (ten-carbon) IL. Interestingly, in the presence of a medium-chain (four-carbon) IL, the hexagonal phase is transformed to another hexagonal phase of direct cylindrical micelles through the lamellar phase. To the best of our knowledge, such a phase sequence has not been reported earlier. Even though the small angle x-ray diffraction study has revealed the lattice parameters of these phases to be similar to each other, their rheological behavior has been distinctly different. These rheological studies have shed light on how these phases differ in their viscoelastic behavior. Finally, the packing parameters, calculated for these phases based on the geometry of the aggregates, have explained the formation of the self-assembled aggregates.
Keywords: lyotropic liquid crystals, polarizing optical microscopy, rheology, surfactants, small angle x-ray diffraction
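The packing-parameter argument referred to above is presumably the standard Israelachvili criterion P = v / (a0 * lc); the abstract does not give its inputs, so the sketch below uses approximate Tanford-type estimates for a single-tailed surfactant such as SDS purely for illustration.

```python
def packing_parameter(tail_volume_nm3, headgroup_area_nm2, tail_length_nm):
    """Critical packing parameter P = v / (a0 * lc).

    Roughly: P < 1/3 favours spherical micelles, 1/3-1/2 cylindrical micelles
    (hexagonal phases), and P close to 1 flat bilayers (lamellar phases).
    """
    return tail_volume_nm3 / (headgroup_area_nm2 * tail_length_nm)

# Approximate values for a C12 tail (Tanford estimates) and an assumed SDS
# headgroup area of ~0.62 nm^2; not fitted to the paper's data.
print(round(packing_parameter(0.35, 0.62, 1.67), 2))  # ~0.34 -> cylindrical micelles
```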
Procedia PDF Downloads 140
452 Strategies for Synchronizing Chocolate Conching Data Using Dynamic Time Warping
Authors: Fernanda A. P. Peres, Thiago N. Peres, Flavio S. Fogliatto, Michel J. Anzanello
Abstract:
Batch processes are widely used in the food industry and have an important role in the production of high added-value products, such as chocolate. Process performance is usually described by variables that are monitored as the batch progresses. Data arising from these processes are likely to display a strong correlation-autocorrelation structure, and are usually monitored using control charts based on multiway principal components analysis (MPCA). Process control of a new batch is carried out by comparing the trajectories of its relevant process variables with those in a reference set of batches that yielded products within specifications; it is clear that proper determination of the reference set is key to correctly signaling non-conforming batches in such quality control schemes. In chocolate manufacturing, misclassification of non-conforming batches in the conching phase may lead to significant financial losses. In such a context, the accuracy of process control grows in relevance. In addition, the main assumption in MPCA-based monitoring strategies is that all batches are synchronized in duration, both the new batch being monitored and those in the reference set. Such an assumption is often not satisfied in the chocolate manufacturing process. As a consequence, traditional techniques such as MPCA-based charts are not suitable for process control and monitoring. To address that issue, the objective of this work is to compare the performance of three dynamic time warping (DTW) methods in the alignment and synchronization of chocolate conching process variables' trajectories, aimed at properly determining the reference distribution for multivariate statistical process control. The power to classify batches into two categories (conforming and non-conforming) was evaluated using the k-nearest neighbor (KNN) algorithm. Real data from a milk chocolate conching process were collected and the following variables were monitored over time: frequency of soybean lecithin dosage, rotation speed of the shovels, current of the main motor of the conche, and chocolate temperature. A set of 62 batches with durations between 495 and 1,170 minutes was considered; 53% of the batches were known to be conforming based on lab test results and experts' evaluations. Results showed that all three DTW methods tested were able to align and synchronize the conching dataset. However, the synchronized datasets obtained from these methods performed differently when input to the KNN classification algorithm. The Kassidas, MacGregor and Taylor (KMT) method was deemed the best DTW method for aligning and synchronizing a milk chocolate conching dataset, presenting 93.7% accuracy, 97.2% sensitivity and 90.3% specificity in batch classification, and was therefore considered the best option for determining the reference set for the milk chocolate dataset. This method was recommended because it required the lowest number of iterations to achieve convergence and had the highest average accuracy in the testing portion using the KNN classification technique.
Keywords: batch process monitoring, chocolate conching, dynamic time warping, reference set distribution, variable duration
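A minimal sketch of the core idea (DTW alignment of batches of unequal duration, followed by nearest-neighbour classification) is shown below. This is the generic DTW recursion applied to a single variable, not the Kassidas-MacGregor-Taylor (KMT) variant the study recommends or the full multivariate KNN setup; it only illustrates how trajectories of different lengths can be compared.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D trajectories."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # best of insertion, deletion, match along the warping path
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify_batch(new_batch, reference_batches, reference_labels):
    """1-nearest-neighbour label ('conforming'/'non-conforming') by DTW distance."""
    distances = [dtw_distance(new_batch, ref) for ref in reference_batches]
    return reference_labels[int(np.argmin(distances))]
```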
Procedia PDF Downloads 168
451 Teaching English for Children in Public Schools Can Work in Egypt
Authors: Shereen Kamel
Abstract:
This study explores the recent application of bilingual education in Egyptian public schools. It aims to provide an overall picture of bilingual education programs globally and examine their adequacy for the Egyptian social and cultural context. The study also assesses the current application of teaching English as a Second Language in public schools from the early childhood education stage onwards, instead of starting from middle school, as a strategy that promotes English language proficiency and equity among students. The theoretical framework is based on Jim Cummins' bilingual education theories and on recent trends adopting different developmental theories and perspectives, like Stephen Krashen's theory of Second Language Acquisition, which calls for communicative and meaningful interaction rather than memorization of grammatical rules. The question posed here is whether bilingual education, with its peculiar nature, could be a good opportunity to reach all Egyptian students and prepare them to become global citizens. In addition to this, a more specific question relates to the extent to which social and cultural variables can affect young learners' second language acquisition. This exploratory analytical study uses a mixed-methods research design to examine the application of bilingual education in Egyptian public schools. The study uses a cluster sample of schools in Egypt from different social and cultural backgrounds to assess the determining variables. The qualitative emphasis is on interviewing teachers and reviewing students' achievement documents. The quantitative aspect is based on observations of in-class activities through tally sheets and checklists. Access to schools and documents is authorized by governmental and institutional research bodies. Data sources will comprise achievement records, students' portfolios, parents' feedback and teachers' viewpoints. Triangulation and SPSS will be used for analysis. Based on the gathered data, new curricula have been assigned for elementary grades and teachers have been required to teach the newly developed materials suddenly and without any prior training. Due to a shortage in the teaching force, many assigned teachers have not been proficient in the English language. Hence, teachers' lack of competency and preparedness to teach this grade-specific curriculum constitutes a great challenge in the implementation phase. Nevertheless, the young learners themselves as well as their parents seem to be enthusiastic about the idea itself. According to the findings of this research study, teaching English as a Second Language to children in public schools is applicable and culturally relevant to the Egyptian context. However, there might be some social and cultural differences and constraints when it comes to application, in addition to various aspects regarding teacher preparation. Therefore, a new mechanism should be incorporated to overcome these challenges for better results. Moreover, a paradigm shift in these teacher development programs is urgently needed. Furthermore, ongoing support and follow-up are crucial to help both teachers and students realize the desired outcomes.
Keywords: bilingual education, communicative approach, early childhood education, language and culture, second language acquisition
Procedia PDF Downloads 120
450 Ammonia Bunkering Spill Scenarios: Modelling Plume’s Behaviour and Potential to Trigger Harmful Algal Blooms in the Singapore Straits
Authors: Bryan Low
Abstract:
In the coming decades, the global maritime industry will face a most formidable environmental challenge: achieving net-zero carbon emissions by 2050. To meet this target, the Maritime and Port Authority of Singapore (MPA) has worked to establish green shipping and digital corridors with the ports of several other countries around the world, where ships will use low-carbon alternative fuels such as ammonia for power generation. While this paradigm shift to the bunkering of greener fuels is encouraging, fuels like ammonia will also introduce a new and unique type of environmental risk in the unlikely scenario of a spill. While numerous modelling studies have been conducted for oil spills and their associated environmental impact on coastal and marine ecosystems, ammonia spills are comparatively less well understood. For example, there is a knowledge gap regarding how the complex hydrodynamic conditions of the Singapore Straits may influence the dispersion of a hypothetical ammonia plume, which has different physical and chemical properties compared to an oil slick. Chemically, ammonia can be absorbed by phytoplankton, thus altering the balance of the marine nitrogen cycle. Biologically, ammonia generally serves the role of a nutrient in coastal ecosystems at lower concentrations. However, at higher concentrations, it has been found to be toxic to many local species. It may also have the potential to trigger eutrophication and harmful algal blooms (HABs) in coastal waters, depending on local hydrodynamic conditions. Thus, the key objective of this research paper is to support the development of a model-based forecasting system that can predict ammonia plume behaviour in coastal waters, and its environmental impact, given prevailing hydrodynamic conditions. This will be essential as ammonia bunkering becomes more commonplace in Singapore's ports and around the world. Specifically, this system must be able to assess the HAB-triggering potential of an ammonia plume, as well as its lethal and sub-lethal toxic effects on local species. This will allow the relevant authorities to better plan risk mitigation measures or choose a time window with the ideal hydrodynamic conditions to conduct ammonia bunkering operations with minimal risk. In this paper, we present the first part of such a forecasting system: a jointly coupled hydrodynamic-water quality model that can capture how advection-diffusion processes driven by ocean currents influence plume behaviour and how the plume interacts with the marine nitrogen cycle. The model is then applied to various ammonia spill scenarios, and the results are discussed in the context of current ammonia toxicity guidelines, impact on local ecosystems, and mitigation measures for future bunkering operations conducted in the Singapore Straits.
Keywords: ammonia bunkering, forecasting, harmful algal blooms, hydrodynamics, marine nitrogen cycle, oceanography, water quality modeling
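A toy one-dimensional sketch of the advection-diffusion process mentioned above is given below. It is only a schematic stand-in for the coupled hydrodynamic-water quality model described in the abstract: the current speed, diffusivity, first-order loss rate (loosely representing phytoplankton uptake/nitrification) and spill pulse are all illustrative assumptions, and periodic boundaries are used for simplicity.

```python
import numpy as np

def advect_diffuse(c, u, D, k, dx, dt, steps):
    """Explicit 1-D solver for dC/dt = -u dC/dx + D d2C/dx2 - k C.

    Upwind advection (valid for u > 0), central diffusion, first-order loss;
    np.roll gives periodic boundaries, which is enough for a schematic plume.
    """
    for _ in range(steps):
        adv = -u * (c - np.roll(c, 1)) / dx
        dif = D * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx ** 2
        c = c + dt * (adv + dif - k * c)
    return c

# Illustrative spill: a localised ammonia pulse (mg/L) in a 1 km channel.
x = np.linspace(0.0, 1000.0, 201)                       # grid spacing dx = 5 m
c0 = np.where(np.abs(x - 200.0) < 10.0, 5.0, 0.0)
plume = advect_diffuse(c0, u=0.2, D=1.0, k=1e-5, dx=5.0, dt=5.0, steps=2000)
```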
Procedia PDF Downloads 83
449 Monocoque Systems: The Reuniting of Divergent Agencies for Wood Construction
Authors: Bruce Wrightsman
Abstract:
Construction and design are inexorably linked. Traditional building methodologies, including those using wood, comprise a series of material layers differentiated and separated from each other. This results in the separation of two agencies: the building envelope (skin) kept separate from the structure. From a material-performance standpoint, however, this reliance on additional materials is not an efficient strategy for the building. The merits of traditional platform framing are well known. However, its enormous effectiveness within wood-framed construction has seldom been seriously questioned or challenged in defining what it means to build. There are several downsides to this method that are less widely discussed. The first and perhaps biggest downside is waste. Second, its reliance on wood assemblies forming walls, floors and roofs conventionally nailed together through simple plate surfaces is structurally inefficient. It requires additional material in the form of plates, blocking, nailers, etc., for stability, which only adds to the material waste. In contrast, the history of wood construction in the airplane and boat manufacturing industries shows a significant transformation in the relationship of structure to skin. Boat construction evolved from indigenous wood practices such as birch-bark canoes, to copper sheathing over wood to improve performance in the late 18th century, to the merged assemblies that drive the industry today. In 1911, Swiss engineer Emile Ruchonnet designed the first wood monocoque structure for an airplane called the Cigare. The wing and tail assemblies consisted of thin, lightweight, and often fabric skin stretched tightly over a wood frame. This stressed skin has evolved into semi-monocoque construction, in which the skin merges with structural fins that take additional forces, providing even greater strength with less material. The monocoque, which translates to ‘mono or single shell,’ is a structural system that supports loads and transfers them through an external enclosure system. Such systems have largely existed outside the domain of architecture. However, this uniting of divergent systems has been demonstrated to be lighter, using less material than traditional wood building practices. This paper examines the role monocoque systems have played in the history of wood construction through the lineage of the boat and airplane building industries, and their design potential for wood building systems in architecture, through a case-study examination of a unique wood construction approach. The innovative approach uses a wood monocoque system composed of interlocking small wood members to create thin shell assemblies for the walls, roof and floor, increasing structural efficiency and wasting less than 2% of the wood. The goal of the analysis is to expand the work of practice and the academy in order to foster deeper, more honest discourse regarding the limitations and impact of traditional wood framing. Keywords: wood building systems, material histories, monocoque systems, construction waste
Procedia PDF Downloads 79
448 Addressing Housing Issue at Regional Level Planning: A Case Study of Mumbai Metropolitan Region
Authors: Bhakti Chitale
Abstract:
Mumbai city, which is the business capital of India and one of the most crowded cities in the world, holds the biggest slum in Asia. The Mumbai Metropolitan Region (MMR) occupies an area of 4035 sq.km. with a population of 22.8 million people. This population is mostly urban, with 91% living in areas of Municipal Corporations and Councils. Another 3% live in Census Towns. The region has 9 Municipal Corporations, 8 Municipal Councils, and around 1000 villages. On the one hand, MMR makes the highest contribution to the nation's overall economy; on the other, it presents the intolerable picture of about 2 million people living in slums, or without even that shelter, in thoroughly unhygienic conditions and with little hope. Coming generations will be adversely affected if a solution is not worked out. This study is an attempt to work out that solution. The Mumbai Metropolitan Region Development Authority (MMRDA) is a state government authority formed specifically to govern the development of MMR. MMRDA is engaged in long-term planning, promotion of new growth centres, implementation of strategic projects and financing infrastructure development. While preparing the master plan for MMR for the next 20 years, MMRDA conducted a detailed study of the housing scenario in MMR and possible options for improvement. The author was the officer in charge of that assignment. This paper sheds light on the outcomes of the research study, which range from the adverse effects of government policies and the automatic responses of the housing market to effects on planning processes and the changing housing needs arising worldwide from shifts in social mechanisms. It alerts urban planners, who usually focus on smart infrastructure development, to allied future dangers. This housing study explains the complexities, realities and need for innovation in housing policies all over the world. The paper further explains a few success and failure stories of government initiatives, with reasons. It gives a clear idea of the differing housing needs of people from different economic groups and of the direct and indirect market pressures on low-cost housing. Striking phenomena also emerged, such as a large percentage of houses lying vacant in spite of the huge need. The housing market is affected by developments and other physical and financial changes taking place in nearby areas and cities, by changes in cities located far from the region, and by international investments or policy changes. Instead of depending only on government action to generate affordable housing, it becomes equally important to make housing markets generate such stock on their own while keeping them sustainable; this is the aim of the whole movement. In summary, the paper sequentially elaborates the complete dynamics of housing in one of the most crowded urban areas in the world, the Mumbai Metropolitan Region, with extensive data, analysis, case studies, and recommendations. Keywords: Mumbai India, slum housing, region planning, market recommendations
Procedia PDF Downloads 281
447 Transformative Economic Policies in India: A Political Economy Analysis of IMF Influence, Sectoral Shifts, and Political Transitions
Authors: Vrajesh Rawal
Abstract:
India's economic landscape has witnessed significant transformations over the past decades, characterized by shifts from agrarian to service-oriented economies. Recently, there has been a growing emphasis on transitioning towards a manufacturing-led growth model driven by factors such as demographic changes, technological advancements, and evolving global trade dynamics. These changes reflect broader efforts to enhance industrialization, boost employment opportunities, and diversify the economic base beyond traditional sectors. Within this context, this research focuses on understanding the specific drivers and dynamics behind India's shift from a predominantly service-based economy to one centered on manufacturing. It seeks to explore how political ideologies influence economic policies and shape sectoral priorities, with a particular focus on contrasting approaches between the Indian National Congress (INC) and the Bharatiya Janata Party (BJP). Additionally, the study evaluates the alignment of IMF policy recommendations with India's economic goals and priorities within the theoretical frameworks of neoliberalism and political economy theory. Despite the extensive literature on India's economic reforms and political economy, there remains a gap in understanding how political ideology influences sectoral shifts and economic policy outcomes, particularly in the context of IMF recommendations. Existing studies often focus narrowly on either political ideologies or economic reforms without fully integrating both perspectives. This research aims to bridge this gap by providing a comprehensive analysis that integrates political economy theories with empirical evidence from political speeches, government documents, and IMF reports. Through qualitative content analysis of speeches by political leaders, document analysis of key governmental documents, and scrutiny of party manifestos, this research demonstrates how political ideologies translate into distinct economic strategies and developmental agendas. It highlights the extent to which IMF policy prescriptions align with India's economic objectives and how these interactions shape broader socio-economic outcomes. The theoretical framework of neoliberalism and political economy theory provides a lens to interpret these findings, offering insights into the complex interplay between economic policies, political ideologies, and institutional frameworks in India. The findings of this study are expected to provide valuable insights for policymakers, researchers, and practitioners involved in economic governance and development planning in India. By understanding the factors driving sectoral shifts and the influence of political ideologies on economic policies, policymakers can make informed decisions to foster sustainable economic growth and development. Implementation of these insights could contribute to refining policy frameworks, enhancing alignment with national development priorities, and optimizing engagement with international financial institutions like the IMF to better meet India's socio-economic challenges and opportunities in the evolving global context.Keywords: political economy, international politics, social science, policy analysis
Procedia PDF Downloads 34
446 Assessing of Social Comfort of the Russian Population with Big Data
Authors: Marina Shakleina, Konstantin Shaklein, Stanislav Yakiro
Abstract:
The digitalization of modern human life over the last decade has facilitated the acquisition, storage, and processing of data, which are used to detect changes in consumer preferences and to improve the internal efficiency of the production process. This emerging trend has attracted academic interest in the use of big data in research. The study focuses on modeling the social comfort of the Russian population for the period 2010-2021 using big data. Big data provides enormous opportunities for understanding human interactions at the scale of society with plenty of space and time dynamics. One of the most popular big data sources is Google Trends. The methodology for assessing social comfort using big data involves several steps: 1. 574 words were selected based on the Harvard IV-4 Dictionary adjusted to fit the reality of everyday Russian life. The set of keywords was further cleansed by excluding queries consisting of verbs and words with several lexical meanings. 2. Search queries were processed to ensure comparability of results: the transformation of data to a 10-point scale, elimination of popularity peaks, detrending, and deseasoning. The proposed methodology for keyword search and Google Trends processing was implemented in the form of a script in the Python programming language. 3. Block and summary integral indicators of social comfort were constructed using the first modified principal component resulting in weighting coefficients values of block components. According to the study, social comfort is described by 12 blocks: ‘health’, ‘education’, ‘social support’, ‘financial situation’, ‘employment’, ‘housing’, ‘ethical norms’, ‘security’, ‘political stability’, ‘leisure’, ‘environment’, ‘infrastructure’. According to the model, the summary integral indicator increased by 54% and was 4.631 points; the average annual rate was 3.6%, which is higher than the rate of economic growth by 2.7 p.p. The value of the indicator describing social comfort in Russia is determined by 26% by ‘social support’, 24% by ‘education’, 12% by ‘infrastructure’, 10% by ‘leisure’, and the remaining 28% by others. Among 25% of the most popular searches, 85% are of negative nature and are mainly related to the blocks ‘security’, ‘political stability’, ‘health’, for example, ‘crime rate’, ‘vulnerability’. Among the 25% most unpopular queries, 99% of the queries were positive and mostly related to the blocks ‘ethical norms’, ‘education’, ‘employment’, for example, ‘social package’, ‘recycling’. In conclusion, the introduction of the latent category ‘social comfort’ into the scientific vocabulary deepens the theory of the quality of life of the population in terms of the study of the involvement of an individual in the society and expanding the subjective aspect of the measurements of various indicators. Integral assessment of social comfort demonstrates the overall picture of the development of the phenomenon over time and space and quantitatively evaluates ongoing socio-economic policy. The application of big data in the assessment of latent categories gives stable results, which opens up possibilities for their practical implementation.Keywords: big data, Google trends, integral indicator, social comfort
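For readers unfamiliar with the pipeline sketched in the abstract, the following Python fragment illustrates the query-collection and normalisation steps it lists (mapping to a 10-point scale, eliminating popularity peaks, detrending and deseasoning) and a stand-in for the principal-component weighting. The pytrends client, the two example keywords and the decomposition settings are assumptions made for illustration; the authors' 574-term Harvard IV-4 based dictionary and their own script are not reproduced here.

```python
# Sketch of the Google Trends collection and normalisation steps described above.
# pytrends, the placeholder keywords and the decomposition settings are assumptions.
import pandas as pd
from pytrends.request import TrendReq
from sklearn.decomposition import PCA
from statsmodels.tsa.seasonal import seasonal_decompose

keywords = ["crime rate", "social package"]       # placeholders for the 574 terms
pytrends = TrendReq(hl="ru-RU", tz=180)

series = {}
for kw in keywords:
    pytrends.build_payload([kw], geo="RU", timeframe="2010-01-01 2021-12-31")
    df = pytrends.interest_over_time()
    if df.empty:
        continue
    s = df[kw].astype(float)

    s = s / 10.0                                  # 1) map 0-100 popularity to a 10-point scale
    s = s.clip(upper=s.quantile(0.99))            # 2) eliminate popularity peaks (winsorise)

    decomp = seasonal_decompose(s, model="additive", period=12)
    series[kw] = s - decomp.trend - decomp.seasonal   # 3) detrend and deseason

panel = pd.DataFrame(series).dropna()

# Stand-in for the "first modified principal component" weighting: ordinary PCA
weights = PCA(n_components=1).fit(panel).components_[0]
print(dict(zip(panel.columns, weights.round(3))))
```

In the study itself, the resulting weights feed the block and summary integral indicators of social comfort; the sketch only shows the mechanics of getting from raw queries to comparable, weightable series.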
Procedia PDF Downloads 203
445 Investigations on the Application of Avalanche Simulations: A Survey Conducted among Avalanche Experts
Authors: Korbinian Schmidtner, Rudolf Sailer, Perry Bartelt, Wolfgang Fellin, Jan-Thomas Fischer, Matthias Granig
Abstract:
This study focuses on the evaluation of snow avalanche simulations, based on a survey carried out among avalanche experts. In recent decades, the application of avalanche simulation tools has gained recognition within the realm of hazard management. Traditionally, avalanche runout models were used to predict extreme avalanche runout and prepare avalanche maps. This has changed rather dramatically with the application of numerical models. For safety applications such as road safety, simulation tools are now being coupled with real-time meteorological measurements to predict frequent avalanche hazard. That places new demands on model accuracy and requires the simulation of physical processes that previously could be ignored. These simulation tools are based on a deterministic description of the avalanche movement, allowing certain quantities of the avalanche flow (e.g. pressure, velocities, flow heights, runout lengths) to be predicted. Because of the highly variable regimes of flowing snow, no uniform rheological law describing the motion of an avalanche is known. Therefore, analogies to the fluid-dynamical laws of other materials are drawn. To transfer these constitutive laws to snow flows, certain assumptions and adjustments have to be imposed. Besides these limitations, there are high uncertainties regarding the initial and boundary conditions. Further challenges arise when implementing the underlying flow model equations in an algorithm executable by a computer. This implementation is constrained by the choice of adequate numerical methods and their computational feasibility. Hence, model development is compelled to introduce further simplifications and the related uncertainties. In light of these issues, many questions arise about avalanche simulations: their assets and drawbacks, their potential for improvement, and their application in practice. To address these questions, a survey among experts in the field of avalanche science (e.g. researchers, practitioners, engineers) from various countries has been conducted. In the questionnaire, special attention is paid to the experts' opinions regarding the influence of certain variables on the simulation result, their uncertainty, and the reliability of the results. Furthermore, it was tested to what degree a simulation result influences decision-making in a hazard assessment. A discrepancy was found between the large uncertainty of the simulation input parameters and the relatively high reliability attributed to the results. This contradiction can be explained by taking into account how the experts employ the simulations. The credibility of the simulations is the result of a rather thorough simulation study in which different assumptions are tested and the results of different flow models are compared, along with the use of supplemental data such as chronicles, field observations and silent witnesses, which are regarded as essential for the hazard assessment and for sanctioning simulation results. As the importance of avalanche simulations within hazard management grows along with their further development, studies focusing on the manner of modeling could contribute to a better understanding of how knowledge of the avalanche process can be gained by running simulations. Keywords: expert interview, hazard management, modeling, simulation, snow avalanche
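To make concrete the kind of deterministic quantities such runout tools compute, here is a minimal point-mass sketch with Voellmy-type friction (a Coulomb term plus velocity-squared turbulent drag) integrated along an assumed two-segment path. The friction pair (mu, xi), flow height and slope geometry are placeholder values, not calibrated ones; operational tools solve depth-averaged equations over real terrain and are far more elaborate.

```python
# Point-mass Voellmy-friction sketch: shows how a deterministic flow model turns
# friction parameters into velocity and runout estimates. All values are assumed.
import math

g = 9.81
mu, xi = 0.20, 2000.0      # dry friction coefficient / turbulent friction (m/s^2), assumed
h = 1.5                    # assumed mean flow height (m)
dt = 0.05                  # integration step (s)

def slope_angle(x):
    """Assumed path profile: 35 deg track flattening to a 5 deg runout zone after 800 m."""
    return math.radians(35.0 if x < 800.0 else 5.0)

x, v = 0.0, 0.1
while v > 0.0:
    theta = slope_angle(x)
    # Voellmy retardation: gravity minus Coulomb friction minus velocity-squared drag
    a = g * math.sin(theta) - mu * g * math.cos(theta) - g * v * v / (xi * h)
    v = max(v + a * dt, 0.0)
    x += v * dt

print(f"runout distance along path: {x:.0f} m, using mu={mu}, xi={xi}")
```

Varying mu and xi in even this crude sketch shifts the predicted runout by hundreds of metres, which illustrates why the survey pays so much attention to input-parameter uncertainty versus the reliability experts attribute to the results.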
Procedia PDF Downloads 327
444 The Power-Knowledge Relationship in the Italian Education System between the 19th and 20th Century
Authors: G. Iacoviello, A. Lazzini
Abstract:
This paper focuses on the development of the study of accounting in the Italian education system between the 19th and 20th centuries. It also focuses on the subsequent formation of a scientific and experimental forma mentis that would prepare students for administrative and managerial activities in industry, commerce and public administration. From a political perspective, the period was characterized by two dominant movements - liberalism (1861-1922) and fascism (1922-1945) - that deeply influenced accounting practices and the entire Italian education system. The materials used in the study include both primary and secondary sources. The primary sources used to inform this study are numerous original documents issued from 1890-1935 by the government and maintained in the Historical Archive of the State in Rome. The secondary sources have supported both the development of the theoretical framework and the definition of the historical context. This paper assigns to the educational system the role of cultural producer. Foucauldian analysis identifies the problem confronted by the critical intellectual in finding a way to deploy knowledge through a 'patient labour of investigation' that highlights the contingency and fragility of the circumstances that have shaped current practices and theories. Education can be considered a powerful and political process providing students with values, ideas, and models that they will subsequently use to discipline themselves, remaining as close to them as possible. It is impossible for power to be exercised without knowledge, just as it is impossible for knowledge not to engender power. The power-knowledge relationship can be usefully employed for explaining how power operates within society, how mechanisms of power affect everyday lives. Power is employed at all levels and through many dimensions including government. Schools exercise ‘epistemological power’ – a power to extract a knowledge of individuals from individuals. Because knowledge is a key element in the operation of power, the procedures applied to the formation and accumulation of knowledge cannot be considered neutral instruments for the presentation of the real. Consequently, the same institutions that produce and spread knowledge can be considered part of the ‘power-knowledge’ interrelation. Individuals have become both objects and subject in the development of knowledge. If education plays a fundamental role in shaping all aspects of communities in the same way, the structural changes resulting from economic, social and cultural development affect the educational systems. Analogously, the important changes related to social and economic development required legislative intervention to regulate the functioning of different areas in society. Knowledge can become a means of social control used by the government to manage populations. It can be argued that the evolution of Italy’s education systems is coherent with the idea that power and knowledge do not exist independently but instead are coterminous. This research aims to reduce such a gap by analysing the role of the state in the development of accounting education in Italy.Keywords: education system, government, knowledge, power
Procedia PDF Downloads 140
443 Effect of Electric Arc Furnace Coarse Slag Aggregate And Ground Granulated Blast Furnace Slag on Mechanical and Durability Properties of Roller Compacted Concrete Pavement
Authors: Amiya Kumar Thakur, Dinesh Ganvir, Prem Pal Bansal
Abstract:
Industrial by-product utilization has been encouraged due to environmental and economic factors. Since electric arc furnace (EAF) slag aggregate is a by-product of the steel industry and its storage is a major concern, it can be used as a replacement for natural aggregate, as its physical and mechanical properties are comparable to or better than those of natural aggregates. The present study investigates the effect of partial and full replacement of natural coarse aggregate with coarse EAF slag aggregate and partial replacement of cement with ground granulated blast furnace slag (GGBFS) on the mechanical and durability properties of roller compacted concrete pavement (RCCP). The EAF slag aggregate was replaced at five levels (i.e. 0%, 25%, 50%, 75% and 100%) and GGBFS at two levels (0% and 30%). The EAF slag aggregate was stabilized by exposure to outdoor conditions for several years, and a volumetric expansion test using a steam exposure device was conducted to check volume stability. The soil compaction method was used for mix proportioning of RCCP. The fresh properties investigated were fresh density and consistency, the latter measured with the modified Vebe test. Mechanical properties were determined at 7 and 28 days (compressive strength, split tensile strength, flexural strength and modulus of elasticity), and non-destructive testing was done at 28 days (ultrasonic pulse velocity (UPV) and rebound hammer tests). The durability tests done at 28 days were water absorption, skid resistance and abrasion resistance. The results showed that increasing the slag aggregate percentage increased the fresh density of the concrete and slightly increased the Vebe time, whereas with 30% GGBFS replacement the Vebe time decreased and the fresh density was comparable to the 0% GGBFS mix. Compressive strength, split tensile strength, flexural strength and modulus of elasticity increased with the slag aggregate percentage compared to the control mix, but 30% GGBFS replacement caused a slight decrease in mechanical properties compared to 100% cement concrete. In the UPV and rebound hammer tests, all the mixes showed excellent concrete quality. Increasing the slag aggregate percentage increased water absorption, skid resistance and abrasion resistance, whereas 30% GGBFS decreased these values compared to 100% cement concrete. The study found that mixes containing 30% GGBFS with different percentages of EAF slag aggregate gave comparable results for all mechanical and durability properties relative to 100% cement mixes. Hence, 30% GGBFS can be used as cement replacement together with 100% EAF slag aggregate as natural coarse aggregate replacement. Keywords: durability properties, electric arc furnace slag aggregate, GGBFS, mechanical properties, roller compacted concrete pavement, soil compaction method
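For clarity, the full factorial test matrix implied by the abstract (five EAF slag coarse-aggregate replacement levels crossed with two GGBFS levels, i.e. ten mixes) can be generated as in the sketch below. The per-cubic-metre batch quantities are placeholder assumptions for illustration only; the study itself proportions its RCCP mixes by the soil compaction method.

```python
# Sketch of the 5 x 2 factorial mix matrix: EAF slag coarse-aggregate replacement
# crossed with GGBFS cement replacement. Batch quantities are assumed placeholders.
from itertools import product

slag_levels = (0, 25, 50, 75, 100)   # % of natural coarse aggregate replaced by EAF slag
ggbfs_levels = (0, 30)               # % of cement replaced by GGBFS

CEMENT_KG = 300.0                    # assumed cementitious content per m^3
COARSE_KG = 1200.0                   # assumed coarse aggregate content per m^3

print("mix  slag%  ggbfs%  cement  ggbfs  nat.agg  eaf.agg  (kg/m^3)")
for n, (slag, ggbfs) in enumerate(product(slag_levels, ggbfs_levels), start=1):
    cement = CEMENT_KG * (1 - ggbfs / 100)
    scm = CEMENT_KG - cement
    nat = COARSE_KG * (1 - slag / 100)
    # note: in practice the denser EAF slag would be dosed by volume, not mass
    eaf = COARSE_KG * slag / 100
    print(f"M{n:<3} {slag:>5} {ggbfs:>6} {cement:>7.0f} {scm:>6.0f} {nat:>8.0f} {eaf:>8.0f}")
```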
Procedia PDF Downloads 147
442 Using Business Interactive Games to Improve Management Skills
Authors: Nuno Biga
Abstract:
Continuous processes’ improvement is a permanent challenge for managers of any organization. Lean management means that efficiency gains can be obtained through a systematic framework able to explore synergies between processes, eliminate waste of time, and other resources. Leaderships in organizations determine the efficiency of the teams through their influence on collaborators, their motivation, and consolidation of ownership (group) feeling. The “organization health” depends on the leadership style, which is directly influenced by the intrinsic characteristics of each personality and leadership ability (leadership competencies). Therefore, it’s important that managers can correct in advance any deviation from expected leadership exercises. Top management teams must assume themselves as regulatory agents of leadership within the organization, ensuring monitoring of actions and the alignment of managers in accordance with the humanist standards anchored in a visible Code of Ethics and Conduct. This article is built around an innovative model of “Business Interactive Games” (BI GAMES) that simulates a real-life management environment. It shows that the strategic management of operations depends on a complex set of endogenous and exogenous variables to the intervening agents that require specific skills and a set of critical processes to monitor. BI GAMES are designed for each management reality and have already been applied successfully in several contexts over the last five years comprising the educational and enterprise ones. Results from these experiences are used to demonstrate how serious games in working living labs contributed to improve the organizational environment by focusing on the evaluation of players’ (agents’) skills, empower its capabilities, and the critical factors that create value in each context. The implementation of the BI GAMES simulator highlights that leadership skills are decisive for the performance of teams, regardless of the sector of activity and the specificities of each organization whose operation is intended to simulate. The players in the BI GAMES can be managers or employees of different roles in the organization or students in the learning context. They interact with each other and are asked to decide/make choices in the presence of several options for the follow-up operation, for example, when the costs and benefits are not fully known but depend on the actions of external parties (e.g., subcontracted enterprises and actions of regulatory bodies). Each team must evaluate resources used/needed in each operation, identify bottlenecks in the system of operations, assess the performance of the system through a set of key performance indicators, and set a coherent strategy to improve efficiency. Through the gamification and the serious games approach, organizational managers will be able to confront the scientific approach in strategic decision-making versus their real-life approach based on experiences undertaken. Considering that each BI GAME’s team has a leader (chosen by draw), the performance of this player has a direct impact on the results obtained. Leadership skills are thus put to the test during the simulation of the functioning of each organization, allowing conclusions to be drawn at the end of the simulation, including its discussion amongst participants.Keywords: business interactive games, gamification, management empowerment skills, simulation living labs
Procedia PDF Downloads 113
441 Determination of Gross Alpha and Gross Beta Activity in Water Samples by iSolo Alpha/Beta Counting System
Authors: Thiwanka Weerakkody, Lakmali Handagiripathira, Poshitha Dabare, Thisari Guruge
Abstract:
The determination of gross alpha and gross beta activity in water is important in a wide array of environmental studies, and these parameters are considered in international legislation on water quality. This technique is commonly applied as a screening method in radioecology, environmental monitoring, industrial applications, etc. Measuring gross alpha and beta emitters using the iSolo alpha/beta counting system is an adequate nuclear technique for assessing radioactivity levels in natural and waste water samples due to its simplicity and low cost compared with other methods. Twelve water samples (six samples of commercially available bottled drinking water and six samples of industrial waste water) were measured by standard method EPA 900.0 using the gas-less, firmware-based, single-sample, manual iSolo alpha/beta counter (Model: SOLO300G) with a solid-state silicon PIPS detector. Am-241 and Sr-90/Y-90 calibration standards were used to calibrate the detector. The minimum detectable activities are 2.32 mBq/L and 406 mBq/L for alpha and beta activity, respectively. Each of the 2 L water samples was evaporated at low heat to a small volume, transferred evenly (for homogenization) into a 50 mm stainless steel counting planchet, and heated under an IR lamp until a constant-weight residue was obtained. The samples were then counted for gross alpha and beta. Sample density on the planchet area was maintained below 5 mg/cm². Large quantities of solid waste, sludge and waste water are generated every year by various industries. This water can be reused for different applications. Therefore, implementing water treatment plants and measuring the water quality parameters of industrial waste water discharges before release into the environment is very important. This waste may contain different types of pollutants, including radioactive substances. All the measured waste water samples had gross alpha and beta activities lower than the maximum tolerance limits for the discharge of industrial waste into inland surface water, i.e. 10⁻⁹ µCi/mL and 10⁻⁸ µCi/mL for gross alpha and beta respectively (National Environmental Act, No. 47 of 1980), according to the Extraordinary Gazette of the Democratic Socialist Republic of Sri Lanka of February 2008. The measured water samples were below the recommended radioactivity levels and do not pose any radiological hazard when released into the environment. Drinking water is an essential requirement of life. All the drinking water samples were below the permissible levels of 0.5 Bq/L for gross alpha activity and 1 Bq/L for gross beta activity proposed by the World Health Organization in 2011; therefore, the water is acceptable for human consumption without any further clarification with respect to its radioactivity. As these screening levels are very low, the individual dose criterion (IDC) of 0.1 mSv y⁻¹ would usually not be exceeded. The IDC is a criterion for evaluating health risks from long-term exposure to radionuclides in drinking water, and the recommended level of 0.1 mSv/y represents a very low level of health risk. This monitoring work will be continued for environmental protection purposes. Keywords: drinking water, gross alpha, gross beta, waste water
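As a rough illustration of the arithmetic behind gross alpha/beta reporting with such a counter, the sketch below converts raw counts into an activity concentration and computes a Currie-type minimum detectable activity. The counting times, efficiencies and background counts are assumed example values, not the laboratory's actual EPA 900.0 calibration, so the numbers will differ from the MDAs quoted above.

```python
# Sketch of the activity-concentration and MDA arithmetic typically used with
# gross alpha/beta counting (EPA 900.0 style). All numeric inputs are assumed examples.
import math

def activity_conc(gross_counts, bkg_counts, t_sample, t_bkg, efficiency, volume_l):
    """Activity concentration in Bq/L from raw counter output."""
    net_rate = gross_counts / t_sample - bkg_counts / t_bkg   # counts per second
    return net_rate / (efficiency * volume_l)

def mda_bq_per_l(bkg_counts, t_bkg, t_sample, efficiency, volume_l):
    """Currie-type minimum detectable activity (approx. 95% confidence)."""
    bkg_in_sample_time = bkg_counts * t_sample / t_bkg        # expected background counts
    ld = 2.71 + 4.65 * math.sqrt(bkg_in_sample_time)          # detection limit in counts
    return ld / (t_sample * efficiency * volume_l)

# assumed example: 2 L sample residue counted for 3600 s in the alpha channel
print("alpha activity: %.4f Bq/L" %
      activity_conc(gross_counts=45, bkg_counts=10, t_sample=3600, t_bkg=3600,
                    efficiency=0.25, volume_l=2.0))
print("alpha MDA:      %.4f Bq/L" %
      mda_bq_per_l(bkg_counts=10, t_bkg=3600, t_sample=3600,
                   efficiency=0.25, volume_l=2.0))
```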
Procedia PDF Downloads 198
440 The Duty of Sea Carrier to Transship the Cargo in Case of Vessel Breakdown
Authors: Mojtaba Eshraghi Arani
Abstract:
Having concluded the contract for carriage of cargo with the shipper (through a bill of lading or charterparty), the carrier must transport the cargo from the loading port to the port of discharge and deliver it to the consignee. Unless otherwise agreed in the contract, the carrier must avoid any deviation, transfer of cargo to another vessel, or unreasonable stoppage of carriage in transit. However, the vessel might break down in transit for any reason and become unable to continue its voyage to the port of discharge. This is a frequent incident in the carriage of goods by sea, and it leads to important disputes between the carrier/owner and the shipper/charterer (hereinafter called “cargo interests”). It is a generally accepted rule that in such an event, the carrier/owner must repair the vessel, after which it will continue its voyage to the destination port. A dispute arises where temporary repair of the vessel cannot be done within a short or reasonable term. There are two options for the contract parties in such a case. First, the carrier/owner is entitled to repair the vessel with the cargo kept on board or discharged in the port of refuge, and the cargo interests must wait, however long it takes, until the breakdown is rectified. Second, the carrier/owner will be responsible for chartering another vessel and transferring the entire cargo to the substitute vessel. In fact, the main question revolves around the duty of the carrier/owner to transfer the cargo to another vessel. Such an operation, which is called “trans-shipment” or “transhipment” (in the oil industry it is usually called “ship-to-ship” or “STS”), needs to be done carefully and with due diligence. The transshipment operation differs for various cargoes, as each cargo requires its own suitable equipment for transfer to another vessel, so the operation is often costly. Moreover, there is a considerable risk of collision between the two vessels, particularly with bulk carriers. Bulk cargo is also exposed to shortage and partial loss in the process of transshipment, especially during bad weather. For tankers carrying oil and petrochemical products, transshipment is very likely to be accompanied by sea pollution. On the grounds of these consequences, owners are afraid of being held responsible for such an operation and are reluctant to perform it when disputes arise. The main argument they raise is that no regulation has placed such a duty upon them, so any such operation must be done under the auspices of the cargo interests and all costs must be borne by the cargo interests themselves. Unfortunately, not only the international conventions, including the Hague Rules, Hague-Visby Rules, Hamburg Rules and Rotterdam Rules, but also most domestic laws are silent in this regard. The doctrine has yet to analyse the issue, and no legal research was found in this regard. A qualitative, interpretive method has been used in this paper; the data sources are regulations and cases. It is argued in this article that the paramount rule in maritime law is “the accomplishment of the voyage” by the carrier/owner, in view of which, if the voyage can only be completed by transshipment, the carrier/owner will be responsible for carrying out this operation. The duty of the carrier/owner to apply “due diligence” strengthens this reasoning. Any and all costs and expenses will also be on the account of the owner/carrier, unless the incident is attributable to a cause arising from the cargo interests’ negligence. Keywords: cargo, STS, transshipment, vessel, voyage
Procedia PDF Downloads 121
439 Engineering Packaging for a Sustainable Food Chain
Authors: Ezekiel Olukayode Akintunde
Abstract:
Inadequate methods persist at all levels of food supply in the global food industry, and these inadequacies have led to vast wastage of food. Hence, there is a need to curb this wastage, which in turn strains natural resources, water resources, and energy, in order to avoid negative impacts on the climate and the environment. Multifaceted engineering packaging approaches are needed for a sustainable food chain, encompassing active packaging, intelligent packaging, new packaging materials, and a sustainable packaging system. Packaging can be regarded as an indispensable component in solving major problems of sustainable food consumption globally, namely controlling the environmental impact of packaged food. Creative innovation will help ensure that packaged foods are free from food-borne diseases and chemical pollution. This paper evaluates the key shortcomings that must be addressed by innovative food packaging to ensure a safe, natural environment that will preserve energy and sustain water resources. Certain solutions, including fabricating microbial biodegradable compounds/polymers from agro-food waste remnants, appear to be a promising path toward a strong and innovative waste-based food packaging system. Over the years, depletion of petroleum reserves has brought about the emergence of biodegradable polymers as a proper replacement for traditional plastics; moreover, the increase in the production of traditional plastics has raised serious environmental concerns. Biodegradable polymers have proven to be biocompatible and can also be processed for other useful applications. Therefore, this study showcases a workable guiding framework for designing a sustainable food packaging system that will not endanger present society and that will preserve natural water resources. Various assessment methods will be deployed at different stages of the packaging design to enhance the package's sustainability, and every decision must be supported by stage-specific methods that allow for corrective measures throughout the design cycle, together with a basic performance appraisal of packaging innovations. Food wastage can have harmful environmental impacts, and ethical practices must be adopted to reduce food loss at home. An examination in West Africa quantified preventable food wastage over the entire food value chain at almost 180 kg per person per year, 35% of which originated at the household level. Many reported food losses, which occur at the harvesting, storage, transportation, and processing stages, are not preventable and have little environmental impact because such wastage can be used for feed. Other surveys have shown that 15%-20% of household food losses can be traced to food packaging. Therefore, innovative packaging systems that extend shelf-life can lessen the environmental effect of food wastage by lowering food loss in the distribution chain and at the household level. Keywords: food packaging, biodegradable polymer, intelligent packaging, shelf-life
Procedia PDF Downloads 58
438 Design, Control and Implementation of 300Wp Single Phase Photovoltaic Micro Inverter for Village Nano Grid Application
Authors: Ramesh P., Aby Joseph
Abstract:
Micro inverters provide a module-embedded solution for harvesting energy from small-scale solar photovoltaic (PV) panels. In addition to higher modularity and reliability (25 years of life), the micro inverter has inherent advantages such as avoiding long DC cables, eliminating module mismatch losses, minimizing partial shading effects, and improving safety and installation flexibility. Due to these benefits, solar PV micro inverter technology is becoming more widespread in village nano grid applications, ensuring grid independence for rural communities and areas without access to electricity. While the primary objective of this paper is to discuss the problems related to rural electrification, the concept can also be extended to urban installations with grid connectivity. This work presents a comprehensive analysis of the power circuit design, control methodologies and prototyping of a 300 Wₚ single-phase PV micro inverter. The paper investigates two different topologies for PV micro inverters: on the one hand, a single-stage flyback/forward micro inverter configuration, and on the other, a double-stage configuration comprising a DC-DC converter and an H-bridge DC-AC inverter. This work covers power decoupling techniques that reduce the input filter capacitor size needed to buffer the double-line-frequency (100 Hz) ripple energy and eliminate the use of electrolytic capacitors. The double-line oscillation reflected back to the PV module affects Maximum Power Point Tracking (MPPT) performance, and the grid current becomes distorted. To mitigate this issue, an independent MPPT control algorithm is developed in this work to reject the propagation of the double-line ripple to the PV side, improving MPPT performance, and to the grid side, improving current quality. The power hardware accepts a wide input voltage variation and consists of suitably rated MOSFET switches, galvanically isolated gate drivers, high-frequency magnetics and long-lifespan film capacitors. The digital controller hardware platform, with its external peripheral interface, is developed using the floating-point TMS320F2806x microcontroller from Texas Instruments. The firmware governing the operation of the PV micro inverter is written in C and was developed using the Code Composer Studio Integrated Development Environment (IDE). In this work, prototype hardware for the single-phase PV micro inverter with the double-stage configuration was developed, and a comparative analysis of the above-mentioned configurations with experimental results is presented. Keywords: double line oscillation, micro inverter, MPPT, nano grid, power decoupling
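A simple way to picture the ripple-immune MPPT idea described above is the simulation sketch below: a perturb-and-observe tracker whose power measurement is averaged over exactly one 100 Hz ripple period before each decision, so the double-line oscillation does not masquerade as the effect of the voltage perturbation. The P-V curve, ripple amplitude and step sizes are assumed values, and the sketch is written in Python for readability; the actual firmware is written in C for the TMS320F2806x and is not reproduced here.

```python
# Ripple-immune perturb-and-observe MPPT sketch. PV curve and parameters are assumed.
import math

V_OC, I_SC = 44.0, 9.2                     # assumed module open-circuit V / short-circuit I

def pv_power(v):
    """Very rough exponential-knee P-V curve for illustration only."""
    if v <= 0 or v >= V_OC:
        return 0.0
    i = I_SC * (1.0 - math.exp((v - V_OC) / 2.5))
    return v * i

def mppt_po(cycles=200, v_ref=30.0, step=0.2, ripple=1.5):
    """Perturb & observe; each decision uses the mean power over one 10 ms ripple period."""
    f_ripple, f_sample = 100.0, 10000.0    # Hz
    n = int(f_sample / f_ripple)           # samples per 100 Hz ripple period
    p_prev, direction = 0.0, +1
    for _ in range(cycles):
        # average instantaneous power over exactly one ripple period
        p_avg = sum(pv_power(v_ref + ripple * math.sin(2 * math.pi * f_ripple * j / f_sample))
                    for j in range(n)) / n
        if p_avg < p_prev:                 # last perturbation reduced power -> reverse
            direction = -direction
        v_ref += direction * step
        p_prev = p_avg
    return v_ref, p_prev

v_mpp, p_mpp = mppt_po()
print(f"converged operating point: {v_mpp:.1f} V, {p_mpp:.0f} W")
```

Averaging over a full ripple period acts as a notch at 100 Hz and its harmonics, which is one plausible way to realise the "independent MPPT" behaviour the abstract describes; the real controller may use a different filter structure.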
Procedia PDF Downloads 136
437 Theorizing Optimal Use of Numbers and Anecdotes: The Science of Storytelling in Newsrooms
Authors: Hai L. Tran
Abstract:
When covering events and issues, the news media often employ both personal accounts and facts and figures. However, the use of numbers and narratives in the newsroom mostly proceeds through trial and error. There is a demonstrated need for the news industry to better understand the specific effects of storytelling and data-driven reporting on the audience, as well as the explanatory factors driving such effects. In the academic world, anecdotal evidence and statistical evidence have been studied in a mutually exclusive manner. Existing research tends to treat the pertinent effects as though the use of one form precludes the other and as if a tradeoff is required. Meanwhile, narratives and statistical facts are often combined in various communication contexts, especially in news presentations. There is value in reconceptualizing and theorizing about both the relative and the collective impacts of numbers and narratives, as well as the mechanisms underlying such effects. The current undertaking seeks to link theory to practice by providing a complete picture of how and why people are influenced by information conveyed through quantitative and qualitative accounts. Specifically, cognitive-experiential theory is invoked to argue that humans employ two distinct systems to process information. The rational system processes logical evidence through effortful, analytical cognitions, which are affect-free. Meanwhile, the experiential system is intuitive, rapid, automatic, and holistic, demanding minimal cognitive resources and relating to the experience of affect. In certain situations, one system might dominate the other, but the rational and experiential modes of processing operate in parallel and at the same time. As such, anecdotes and quantified facts affect audience response differently, and a combination of data and narratives is more effective than either form of evidence alone. In addition, the present study identifies several media variables and human factors driving the effects of statistics and anecdotes. An integrative model is proposed to explain how message characteristics (modality, vividness, salience, congruency, position) and individual differences (involvement, numeracy skills, cognitive resources, cultural orientation) impact selective exposure, which in turn activates pertinent modes of processing and thereby induces corresponding responses. The present study represents a step toward bridging theoretical frameworks from various disciplines to better understand the specific effects and the conditions under which the use of anecdotal evidence and/or statistical evidence enhances or undermines information processing. In addition to its theoretical contributions, this research helps inform news professionals about the benefits and pitfalls of incorporating quantitative and qualitative accounts in reporting. It proposes a typology of possible scenarios and appropriate strategies for journalists to use when presenting news with anecdotes and numbers. Keywords: data, narrative, number, anecdote, storytelling, news
Procedia PDF Downloads 79