Search results for: low magnetic field sensor
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10357


97 Encapsulated Bioflavonoids: Nanotechnology Driven Food Waste Utilization

Authors: Niharika Kaushal, Minni Singh

Abstract:

Citrus fruits fall into the category of commercially grown fruits that constitute an excellent repository of phytochemicals with health-promoting properties. When fruits of the citrus family are processed by industry, tons of agricultural by-products are produced in the form of peels, pulp and seeds, which normally have no further use and are commonly discarded. Such residues are nevertheless of paramount importance due to their richness in valuable compounds; agro-waste is therefore considered a valuable bioresource for various purposes in the food sector. A range of biological properties, including anti-oxidative, anti-cancer, anti-inflammatory, anti-allergenic and anti-aging activity, have been reported for these bioactive compounds. Taking advantage of these inexpensive residual sources requires special attention to the extraction of the bioactive compounds. Mandarin (Citrus nobilis × Citrus deliciosa) is a potential source of bioflavonoids with antioxidant properties and is increasingly regarded as a functional food. Despite these benefits, flavonoids face the barrier of pre-systemic metabolism in gastric fluid, which impedes their effectiveness; colloidal delivery systems can overcome this barrier. This study involved the extraction and identification of key flavonoids from mandarin biomass. Using a green chemistry approach, supercritical fluid extraction at 330 bar and 40 °C with 10% ethanol as co-solvent was employed for extraction, and the flavonoids were identified by mass spectrometry. To address the aforementioned limitation, the obtained extract was encapsulated in a poly(lactic-co-glycolic acid) (PLGA) matrix using a solvent evaporation method. Additionally, the antioxidant potential was evaluated by the 2,2-diphenyl-1-picrylhydrazyl (DPPH) assay, and the release pattern of the flavonoids over time was observed using simulated gastrointestinal fluids.
From the results, the total flavonoids extracted from the mandarin biomass were estimated at 47.3 ± 1.06 mg/ml rutin equivalents. Notably, the polymethoxyflavones (PMFs) tangeretin and nobiletin were identified in the extract, followed by hesperetin and naringin. The designed flavonoid-PLGA nanoparticles exhibited particle sizes between 200 and 250 nm. In addition, the bioengineered nanoparticles had a high entrapment efficiency of nearly 80.0% and remained stable for more than a year. The flavonoid nanoparticles showed excellent antioxidant activity, with an IC50 of 0.55 μg/ml. Morphological studies by field-emission scanning electron microscopy (FE-SEM) revealed the smooth, spherical shape of the nanoparticles. Simulated gastrointestinal studies of the free extract and the nanoencapsulated form revealed degradation of nearly half of the flavonoids under harsh acidic conditions in the case of the free extract, whereas the encapsulated flavonoids exhibited sustained release, suggesting that polymeric encapsulates are efficient carriers of flavonoids. Such technology-driven, biomass-derived products thus form a basis for developing functional foods with improved therapeutic potential and antioxidant properties, and citrus processing waste can be regarded as a new, high-value resource whose utilization deserves promotion.
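Two of the figures reported above, the ~80% entrapment efficiency and the DPPH IC50, rest on standard formulas. The sketch below is not the authors' code and uses hypothetical numbers, purely to illustrate how such values are computed:

```python
# Standard formulas behind two reported measures; all numbers here are
# hypothetical, for illustration only (not the study's raw data).

def entrapment_efficiency(total_mg: float, free_mg: float) -> float:
    """EE% = (total flavonoid - unencapsulated flavonoid) / total * 100."""
    return (total_mg - free_mg) / total_mg * 100

def dpph_inhibition(abs_control: float, abs_sample: float) -> float:
    """% DPPH radical scavenging from absorbance readings (typically 517 nm)."""
    return (abs_control - abs_sample) / abs_control * 100

print(round(entrapment_efficiency(10.0, 2.0), 1))  # 80.0, comparable to the ~80% reported
print(round(dpph_inhibition(0.90, 0.45), 1))       # 50.0; the concentration giving 50% is the IC50
```

The IC50 itself is then read off (or interpolated) from inhibition values measured over a concentration series.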

Keywords: citrus, agrowaste, flavonoids, nanoparticles

Procedia PDF Downloads 129
96 Living in the Edge: Crisis in Indian Tea Industry and Social Deprivation of Tea Garden Workers in Dooars Region of India

Authors: Saraswati Kerketta

Abstract:

The tea industry is one of the oldest organised sectors of India, employing roughly 1.5 million people directly. Over the last decade, the Indian tea industry, especially in the northern region, has been experiencing its worst crisis of the post-independence period. For many reasons, tea prices have shown a steady decline. The workers are paid one of the lowest wages in the tea industry worldwide (US$1.5 a day), below the UN's US$2-a-day threshold for extreme poverty. The workers rely on additional benefits from the plantation, including food, housing and medical facilities. These have been an effective means of enslaving generations of labourers to the owners. There is hardly any change in the tea estates, where the owners determine the fate of the workers. When a tea garden is abandoned or closed, all these facilities disappear immediately. The workers are descendants of tribes from central India, also known as 'tea tribes'. Alienated from their native place, geographical and social isolation has compounded the vulnerability of these people. With the economy of the region totally dependent on tea, closures have meant absolute unemployment for the workers of these tea gardens. With no other livelihood and no land to grow food, thousands of workers have faced hunger and starvation. The Plantation Labour Act, which is meant to ensure decent working and living conditions, is violated continuously. The labourers are forced to migrate and are exposed to the risk of human trafficking. Those left behind suffer from starvation, malnutrition and disease. Conditions in the sick tea plantations are little better: wages are not paid regularly, subsidised food and fuel are not supplied properly, and health care facilities are in a very bad state. Objectives: • To study the socio-cultural and demographic characteristics of the tea garden labourers in the study area. • To examine the social situation of workers in sick estates in the Dooars region.
• To assess the magnitude of deprivation and the impact of the economic crisis on abandoned and closed tea estates in the region. Data base: The study is based on data collected from a field survey. Methods: Quantitative: cross-tabulation and regression analysis. Qualitative: household survey, focused group discussions and in-depth interviews of key informants. Findings: Purchasing power has declined over the last three decades. There has been a manifold increase in migration. Males migrate long distances towards central, western and southern India; females and children migrate both long and short distances. No one reported migrating back to the place of origin of their ancestors. Migrant males work mostly as construction labourers and factory workers, whereas females and children work as domestic help and construction labourers. In about 37 cases, either the migrants had not contacted their families in the last six months or were untraceable. Families with a single earning member are more likely to migrate. The burden of disease and the duration of sickness are closely related to the abandonment and closure of plantations. Death tolls rise about 1.5 times in sick tea gardens and three times in closed tea estates. Sixty per cent of the people are malnourished in the sick tea gardens, and more than eighty-five per cent in abandoned and sick tea gardens.

Keywords: migration, trafficking, starvation death, tea garden workers

Procedia PDF Downloads 383
95 Monte Carlo Risk Analysis of a Carbon Abatement Technology

Authors: Hameed Rukayat Opeyemi, Pericles Pilidis, Pagone Emanuele

Abstract:

Climate change represents one of the single most challenging problems facing the world today. According to the National Oceanic and Atmospheric Administration (NOAA), atmospheric temperature has risen almost 25% since 1958, Arctic sea ice has shrunk 40% since 1959, and global sea levels have risen more than 5.5 cm since 1990. Power plants are major culprits of GHG emissions to the atmosphere. Several technologies have been proposed to reduce the amount of GHG emitted to the atmosphere from power plants, one of which is the less-researched advanced zero emission power plant. The advanced zero emission power plant makes use of a mixed conductive membrane (MCM) reactor, also known as an oxygen transfer membrane (OTM), for oxygen transfer. The MCM employs a membrane separation process, which was first introduced in 1899, when Walther Hermann Nernst investigated electric currents between metals and solutions. He found that when a dense ceramic is heated, a current of oxygen molecules moves through it. In a bid to curb the amount of GHG emitted to the atmosphere, the membrane separation process was applied to the field of power engineering in the low-carbon cycle known as the advanced zero emission power plant (AZEP) cycle. The AZEP cycle was originally invented by Norsk Hydro of Norway and ABB Alstom Power (now known as Demag Delaval Industrial Turbomachinery AB) of Sweden. The AZEP cycle has drawn a lot of attention because of its ability to capture ~100% of CO2, a cost reduction of about 30-50% compared to other carbon abatement technologies, an efficiency penalty smaller than that of its counterparts, and almost zero NOx emissions due to the very low nitrogen concentrations in the working fluid. The advanced zero emission power plant differs from a conventional gas turbine in that its combustor is substituted with the mixed conductive membrane reactor (MCM reactor).
The MCM reactor is made up of the combustor, the low temperature heat exchanger (LTHX, referred to by some authors as the air pre-heater), the mixed conductive membrane responsible for oxygen transfer, the high temperature heat exchanger and, in some layouts, the bleed gas heat exchanger. Air is taken in by the compressor and compressed to a temperature of about 723 K and a pressure of 2 MPa. The membrane area needed for oxygen transfer is reduced by raising the temperature of 90% of the air using the LTHX; the higher temperature also facilitates oxygen transfer through the membrane. The air stream enters the LTHX through the transition duct leading to its inlet. The temperature of the air stream is then increased to about 1150 K, depending on the design point specification of the plant and the efficiency of the heat exchanging system. The amount of oxygen transported through the membrane is directly proportional to the temperature of the air passing through it. The AZEP cycle model was developed in Fortran, and the economic analysis was conducted in Excel and MATLAB, followed by an optimization case study. This paper presents a Monte Carlo risk analysis of the techno-economics of four possible layouts of the AZEP cycle: the simple bleed gas heat exchange layout (100% CO2 capture), the bleed gas heat exchanger layout with flue gas turbine (100% CO2 capture), the pre-expansion reheating (sequential burning) layout, AZEP 85% (85% CO2 capture), and the pre-expansion reheating (sequential burning) layout with flue gas turbine, AZEP 85% (85% CO2 capture).
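A Monte Carlo risk analysis of this kind propagates uncertainty in economic inputs (fuel price, cycle efficiency, capital cost) through a cost model by repeated random sampling and then reads risk off the output distribution. The sketch below is illustrative only: the distributions, parameter values and toy cost model are all assumptions, since the abstract does not give the paper's actual model.

```python
# Minimal Monte Carlo risk-analysis sketch for a power-plant layout.
# All distributions and the toy economics are assumed, not the paper's model.
import random

def npv_sample(rng: random.Random) -> float:
    """Draw one net-value sample with uncertain fuel cost, efficiency and capex."""
    fuel_cost = rng.gauss(6.0, 1.0)          # $/GJ, assumed normal
    efficiency = rng.uniform(0.44, 0.50)     # cycle efficiency, assumed range
    capex = rng.triangular(900, 1300, 1100)  # $/kW, assumed triangular(low, high, mode)
    revenue = 2000.0 * efficiency            # toy revenue model
    return revenue - capex / 10.0 - fuel_cost * 50.0

def simulate(n: int = 10_000, seed: int = 42) -> tuple[float, float]:
    rng = random.Random(seed)
    samples = sorted(npv_sample(rng) for _ in range(n))
    mean = sum(samples) / n
    p5 = samples[int(0.05 * n)]              # 5th percentile = downside risk measure
    return mean, p5

mean_npv, p5_npv = simulate()
print(f"mean value ~ {mean_npv:.1f}, 5% worst case ~ {p5_npv:.1f}")
```

Comparing the mean against a low percentile (or the probability of a negative outcome) is what lets the four layouts be ranked by risk rather than by point estimates alone.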

Keywords: gas turbine, global warming, greenhouse gases, power plants

Procedia PDF Downloads 472
94 Design and Implementation of an Affordable Electronic Medical Records in a Rural Healthcare Setting: A Qualitative Intrinsic Phenomenon Case Study

Authors: Nitika Sharma, Yogesh Jain

Abstract:

Introduction: An efficient information system helps improve service delivery and provides the foundation for the policy and regulation of the other building blocks of a health system. Health care organizations require the integrated working of their various sub-systems. Efficient EMR software boosts teamwork amongst these sub-systems, thereby improving service delivery. Although there has been a huge impetus for EMR under the Digital India initiative, it has still not been mandated in India and is generally implemented only in well-funded public or private healthcare organizations. Objective: The study was conducted to understand the factors that lead to the successful adoption of an affordable EMR in a low-level healthcare organization, to understand the design of the EMR, and to address the solutions to the challenges faced in its adoption. Methodology: The study was conducted in a non-profit registered healthcare organization that has been providing healthcare facilities to more than 2500 villages, including certain areas that are difficult to access. The data were collected with the help of field notes, in-depth interviews and participant observation. A total of 16 participants using the EMR from different departments were enrolled via a purposive sampling technique. The participants included in the study had been working in the organization since before the implementation of the EMR system. The study was conducted over one month, from 25 June to 20 July 2018. Ethical approval was obtained from the institute, along with the prior consent of the participants. Data analysis: A document of more than 4000 words was obtained after transcribing and translating the respondents' answers. It was then analyzed by focused coding, a line-by-line review of the transcripts underlining words, phrases or sentences that might suggest themes, followed by thematic narrative analysis.
Results: The answers were thematically grouped under four headings: 1. governance of the organization, 2. architecture and design of the software, 3. features of the software, and 4. challenges faced in adoption and the solutions to address them. It was inferred that the successful implementation was attributable to the easy, comprehensive design of the system, which has facilitated not only easy data storage and retrieval but also a decision support system for the staff. Portability has led to increased acceptance by physicians. The proper division of labor, increased staff efficiency, the incorporation of auto-correction features and the facilitation of task shifting have led to increased acceptance amongst users of the various departments. Geographical inhibitions, low computer literacy and high patient load were the major challenges faced during implementation. Despite the dual efforts of both the architects and the administrators to combat these challenges, the organization still faces certain ongoing ones. Conclusion: Whenever a new technology is adopted, there are innovators, early adopters, late adopters and laggards, and the same pattern was followed in the adoption of this software. The challenges were overcome with the joint efforts of organization administrators and users. This case study thereby provides a framework for implementing similar systems in the public sector of countries struggling to digitize healthcare amid a crunch of human and financial resources.

Keywords: EMR, healthcare technology, e-health, EHR

Procedia PDF Downloads 105
93 The Politics of Identity and Retributive Genocidal Massacre against Chena Amhara under International Humanitarian Law

Authors: Gashaw Sisay Zenebe

Abstract:

The Northern Ethiopian conflict that broke out on 4 November 2020 between the central government and the TPLF caused destruction beyond imagination in all respects; millions of people have been killed, including civilians, mainly women and children. Civilians have been indiscriminately attacked simply because of their ethnic or religious identity. The warring parties committed serious crimes of international concern contrary to International Humanitarian Law (IHL). The House of People's Representatives (HPR) declared that the terrorist Tigrean Defense Force (TDF), encompassing all segments of its people, waged war against North Gondar through human flooding. On 30 August 2021, after midnight, the TDF launched a surprise attack on the Chena people, who had been drinking and were deeply asleep owing to the annual festivity. Unlike in the lowlands, however, the ENDF joined with the local people to fight the TDF in these highland areas. This research examines identity politics and the consequent genocidal massacre at Chena, including the human and physical destruction that occurred as a result of the armed conflict. As such, the study could benefit international entities by helping them develop a better understanding of what happened in Chena and by triggering interest in ensuring accountability and the enforcement of IHL in the future. Preserving fresh evidence will also serve as a starting point on the road to achieving justice, either nationally or internationally. To evaluate the Chena case against IHL rules, a combination of qualitative and doctrinal research methodology has been employed. The study basically follows a unique-case sampling design, using primary data tools such as observation, interviews, key-informant interviews, FGDs and battlefield notes.
To supplement these, secondary sources, including books, journal articles, domestic laws, international conventions, reports and media broadcasts, were used to give meaning to what happened on the ground in light of international law. The study found that the war was waged to separate Tigray from Ethiopia. While military operations were undertaken to achieve this goal, mass killings, genocidal acts and war crimes were committed at Chena and nearby sites in the Dabat district of North Gondar. Hundreds of people lost their lives to the brutalities of mass killings, hundreds were subjected to forcible disappearance, and tens of thousands were forced into displacement. Furthermore, harsh beatings, forced labor, slavery, torture, rape and gang rape have been reported, and people were generally subjected to cruel, inhuman and degrading treatment and punishment. Uniquely, animals were killed indiscriminately and completely, making the environment unsafe for human survival because of pollution, bad smells and consequent diseases such as cholera, flu and diarrhea. In addition to the TDF's actions, ENDF shelling caused destruction to farmers' houses and claimed lives. Measured against humanitarian principles, acts that can establish MACs and war crimes were perpetrated. Generally, the war in this area has shown an absolute disrespect for the norms of international law.

Keywords: genocide, war crimes, Tigray Defense Force, Chena, IHL

Procedia PDF Downloads 71
92 Magnesium Nanoparticles for Photothermal Therapy

Authors: E. Locatelli, I. Monaco, R. C. Martin, Y. Li, R. Pini, M. Chiariello, M. Comes Franchini

Abstract:

Despite the many advantages of nanomaterials in the field of nanomedicine, increasing concerns have been expressed about their potential adverse effects on human health. There is an urgent need for novel green strategies toward materials with enhanced biocompatibility, made using safe reagents. Photothermal ablation therapy, which exploits a localized heat increase of a few degrees to kill cancer cells, has recently emerged as a non-invasive and highly efficient therapy against various cancer types; however, new agents able to generate hyperthermia when irradiated are needed, and they must be precisely biocompatible in order to avoid damage to healthy tissues and prevent toxicity. Recently, there has been increasing interest in magnesium as a biomaterial: it is the fourth most abundant cation in the human body and is essential for human metabolism. However, magnesium nanoparticles (Mg NPs) have seen limited diffusion due to the high reduction potential of magnesium cations, which makes NP synthesis challenging. Herein, we report the synthesis of Mg NPs and their surface functionalization to obtain a stable, biocompatible nanomaterial suitable for photothermal ablation therapy against cancer. We synthesized the Mg crystals by reducing MgCl2 with metallic lithium, exploiting naphthalene as an electron carrier: the lithium-naphthalene complex acts as the real reducing agent. Firstly, the nanocrystal particles were coated with the ligand 12-ethoxy ester dodecanehydroxamic acid and then entrapped into water-dispersible polymeric micelles (PMs) made of the FDA-approved PLGA-b-PEG-COOH copolymer using the oil-in-water emulsion technique. Subsequently, we developed a more straightforward methodology by introducing chitosan, a highly biocompatible natural product, at the beginning of the process, simultaneously with the lithium-naphthalene complex, thus obtaining a one-pot procedure for the formation and surface modification of Mg NPs.
The obtained Mg NPs were purified and fully characterized, showing diameters in the range of 50-300 nm. Notably, when coated with chitosan, the particles remained stable as a dry powder for more than 10 months. We demonstrated the possibility of generating a temperature rise of a few to several degrees once the Mg NPs were illuminated with an 810 nm diode laser operating in continuous-wave mode: the temperature rise was significant (0-15 °C) and concentration dependent. We then investigated the potential cytotoxicity of the Mg NPs using HN13 epithelial cells, derived from a head and neck squamous cell carcinoma, and the Hepa1-6 cell line, derived from a hepatocellular carcinoma; very low toxicity was observed for both nanosystems. Finally, in vivo photothermal therapy was performed on Hepa1-6 tumor-bearing xenograft mice: the animals were treated with chitosan-coated Mg NPs and showed no sign of suffering after the injection. After 12 hours, the tumor was exposed to near-infrared laser light. The results clearly showed extensive damage to the tumor tissue after only 2 minutes of laser irradiation at 3 W cm-1, while no damage was observed when the tumor was treated with the laser and saline alone in the control group. Despite the lower photothermal efficiency of Mg with respect to Au NPs, we consider Mg NPs a promising, safe and green candidate for future clinical translation.

Keywords: chitosan, magnesium nanoparticles, nanomedicine, photothermal therapy

Procedia PDF Downloads 270
91 Distributed Listening in Intensive Care: Nurses’ Collective Alarm Responses Unravelled through Auditory Spatiotemporal Trajectories

Authors: Michael Sonne Kristensen, Frank Loesche, James Foster, Elif Ozcan, Judy Edworthy

Abstract:

Auditory alarms play an integral role in intensive care nurses' daily work. Most medical devices in the intensive care unit (ICU) are designed to produce alarm sounds in order to make nurses aware of immediate or prospective safety risks. The utilisation of sound as a carrier of crucial patient information is highly dependent on nurses' presence, both physical and mental. For ICU nurses, especially those who work with stationary alarm devices at the patient bed space, it is a challenge to display 'appropriate' alarm responses at all times, as they have to navigate with great flexibility in a complex work environment. While primarily responsible for a small number of allocated patients, they are often required to engage with other nurses' patients, relatives and colleagues at different locations inside and outside the unit. This work explores the social strategies used by a team of nurses to comprehend and react to the information conveyed by the alarms in the ICU. Two main research questions guide the study: To what extent do alarms from a patient bed space reach the relevant responsible nurse by direct auditory exposure? By which means do responsible nurses get informed about their patients' alarms when not directly exposed to them? A comprehensive video-ethnographic field study was carried out to capture and evaluate alarm-related events in an ICU. The study involved close collaboration with four nurses who wore eye-level cameras and ear-level binaural audio recorders during several work shifts. At all times, the entire unit was monitored by multiple video and audio recorders. From a data set of hundreds of hours of recorded material, information about the nurses' location, social interaction and alarm exposure at any point in time was coded in a multi-channel replay interface.
The data show that responsible nurses' direct exposure to and awareness of the alarms of their allocated patients vary significantly depending on workload, social relationships and the location of the patient's bed space. Distributed listening is deliberately employed by the nursing team as a social strategy to respond adequately to alarms, but the patterns of information flow prompted by alarm-related events are not uniform. The Auditory Spatiotemporal Trajectory (AST) is proposed as a methodological label for the integration of temporal, spatial and auditory load information. As a mixed-methods metric, it provides tangible evidence of how nurses' individual alarm-related experiences differ from one another and from stationary points in the ICU. Furthermore, it is used to demonstrate how alarm-related information reaches the individual nurse through principles of social and distributed cognition, and how that information relates to the actual alarm event. It thereby bridges a long-standing gap in the literature on medical alarm utilisation between, on the one hand, initiatives to measure objective data about the medical sound environment without consideration of any human experience and, on the other hand, initiatives to study subjective experiences of the medical sound environment without detailed evidence of its objective characteristics.

Keywords: auditory spatiotemporal trajectory, medical alarms, social cognition, video-ethnography

Procedia PDF Downloads 190
90 The Use of Artificial Intelligence in the Context of a Space Traffic Management System: Legal Aspects

Authors: George Kyriakopoulos, Photini Pazartzis, Anthi Koskina, Crystalie Bourcha

Abstract:

The need to secure safe access to and return from outer space, as well as to ensure the viability of outer space operations, keeps alive the debate over organizing space traffic through a Space Traffic Management (STM) system. The proliferation of outer space activities in recent years, together with the dynamic emergence of the private sector, has gradually produced a diverse universe of actors operating in outer space. These developments have had an increasingly adverse impact on outer space sustainability, as the growing amount of space debris clearly demonstrates. This landscape poses considerable threats to the outer space environment and its operators, which need to be addressed by a combination of scientific-technological measures and regulatory interventions. In this context, recourse to recent technological advancements, in particular Artificial Intelligence (AI) and machine learning systems, could achieve exponential results in promoting space traffic management with respect to collision avoidance as well as launch and re-entry procedures and phases. New technologies can support the prospects of a successful space traffic management system at an international scale by enabling, inter alia, timely, accurate and analytical processing of large data sets and rapid decision-making, more precise space debris identification and tracking, and overall minimization of collision risks and reduction of operational costs. Moreover, a significant part of space activities (i.e., the launch and/or re-entry phases) takes place in airspace rather than in outer space, hence the overall discussion also involves the technically and legally highly developed international (and national) Air Traffic Management (ATM) system. Nonetheless, from a regulatory perspective, the use of AI for the purposes of space traffic management raises implications that merit particular attention.
Key issues in this regard include the delimitation of AI-based activities as space activities, the designation of the applicable legal regime (international space or air law, national law), the assessment of the nature and extent of international legal obligations regarding space traffic coordination, and the appropriate liability regime applicable to AI-based technologies when operating for space traffic coordination, taking into particular consideration the dense regulatory developments at the EU level. In addition, the prospects of institutionalizing international cooperation and promoting an international governance system, together with the challenges of establishing a comprehensive international STM regime, are revisited in the light of the intervention of AI technologies. This paper aims to examine the regulatory implications of the use of AI technology in the context of space traffic management operations and its key correlated concepts (SSA, space debris mitigation), drawing in particular on international and regional considerations in the field of STM (e.g., UNCOPUOS, the International Academy of Astronautics and the European Space Agency, among other actors), the promising advancements of the EU approach to AI regulation and, last but not least, national approaches regarding the use of AI in the context of space traffic management. Acknowledgment: The present work was co-funded by the European Union and Greek national funds through the Operational Program "Human Resources Development, Education and Lifelong Learning" (NSRF 2014-2020), under the call "Supporting Researchers with an Emphasis on Young Researchers – Cycle B" (MIS: 5048145).

Keywords: artificial intelligence, space traffic management, space situational awareness, space debris

Procedia PDF Downloads 258
89 Optimizing Solids Control and Cuttings Dewatering for Water-Powered Percussive Drilling in Mineral Exploration

Authors: S. J. Addinell, A. F. Grabsch, P. D. Fawell, B. Evans

Abstract:

The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled-tubing-based greenfields mineral exploration drilling system utilising down-hole water-powered percussive drill tooling. This new drilling system aims to significantly reduce the costs associated with identifying mineral resource deposits beneath deep, barren cover. The system has shown superior rates of penetration in water-rich, hard rock formations at depths exceeding 500 metres. With fluid flow rates of up to 120 litres per minute at 200 bar operating pressure required to energise the bottom-hole tooling, excessive quantities of high-quality drilling fluid (water) would be needed for a prolonged drilling campaign. As a result, drilling fluid recovery and recycling have been identified as necessary to minimise costs and logistical effort. While the majority of the cuttings report as coarse particles, a significant fines fraction is typically also present. To maximise tool life, the percussive bottom-hole assembly requires high-quality fluid with minimal solids loading; any recycled fluid needs a solids cut point below 40 microns and a concentration of less than 400 ppm before it can be used to re-energise the system. This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process shows a strong power-law relationship in the particle size distributions. These data are critical in optimising solids control strategies and cuttings dewatering techniques. The optimisation of deployable solids control equipment is discussed, along with how the required centrate clarity was achieved in the presence of pyrite-rich metasediment cuttings.
Key results were the successful pre-aggregation of fines through the selection and use of high-molecular-weight anionic polyacrylamide flocculants, and the techniques developed for optimal dosing prior to scroll decanter centrifugation, thus keeping sub-40-micron solids loading within the prescribed limits. Experiments on maximising fines capture in the presence of thixotropic drilling fluid additives (e.g. xanthan gum and other biopolymers) are also discussed. As no core is produced during drilling, it is intended that the particle-laden returned drilling fluid be used for top-of-hole geochemical and mineralogical assessment. A discussion is therefore presented on the biasing and latency of cuttings representativity introduced by dewatering techniques, as well as the resulting detrimental effects on depth fidelity and accuracy. Data on the biasing of geochemical signatures by particle size distributions are presented, showing that, depending on the solids control and dewatering techniques used, sampling can have an unwanted influence on top-of-hole analysis. Strategies to overcome these effects and improve sample quality are proposed. Successful solids control and cuttings dewatering for water-powered percussive drilling are presented, contributing to the advancement of coiled-tubing-based greenfields mineral exploration.
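A power-law particle size distribution of the kind reported above is typically confirmed by fitting a straight line on log-log axes. The sketch below is illustrative, using stdlib-only least squares on synthetic data with a known exponent (the paper's actual data and exponent are not given in the abstract).

```python
# Fit count = a * size^b by least squares on log-transformed data.
# The data below is synthetic (exact exponent -2), purely to show the method.
import math

def fit_power_law(sizes_um, counts):
    """Least-squares fit of log(count) = log(a) + b * log(size); returns (a, b)."""
    xs = [math.log(s) for s in sizes_um]
    ys = [math.log(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

sizes = [10, 20, 40, 80, 160]             # microns
counts = [1e6 / s ** 2 for s in sizes]    # exact power law with exponent -2
a, b = fit_power_law(sizes, counts)
print(round(b, 3))  # -2.0
```

With real sieve or laser-diffraction data the fitted exponent characterises the fines fraction, which is what drives the flocculant dosing and the sub-40-micron cut-point requirement discussed above.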

Keywords: cuttings, dewatering, flocculation, percussive drilling, solids control

Procedia PDF Downloads 248
88 Preliminary Results on Marine Debris Classification on the Island of Mykonos (Greece) via Coastal and Underwater Clean-Up over 2016-20: A Successful Case of Recycling Plastics into Useful Daily Items

Authors: Eleni Akritopoulou, Katerina Topouzoglou

Abstract:

Over the last 20 years, marine debris has been identified as one of the main sources of marine pollution caused by anthropogenic activities. Plastic has reached the farthest marine areas of the planet, affecting all marine trophic levels, from the recently discovered amphipod Eurythenes plasticus inhabiting the Mariana Trench to large cetaceans, marine reptiles and sea birds, causing immunodeficiency disorders, deteriorating health and death over time. For the period 2016-20, in the framework of the national initiative “Keep Aegean Blue”, the All for Blue team has been collecting marine debris (coastline and underwater) from eight Greek islands, following a modified in situ MEDSEALITTER monitoring protocol. After collection, marine debris was weighed, sorted and categorised according to material: plastic (PL), glass (G), metal (M), wood (W), rubber (R), cloth (CL), paper (P), mixed (MX). The goals of the project included the documentation of marine debris sources, human trends, waste management and public marine environmental awareness. Waste management focused on recycling plastics and utilising them in useful daily products. This research focuses on the island of Mykonos due to its continuous touristic activity and the lack of scientific information. In total, a field work area of 1,832,856 m² was cleaned up, yielding 5,092 kg of marine debris. The preliminary results indicated PL as the main source of marine debris (62.8%), followed by M (15.5%), G (13.2%) and MX (2.8%). The main items found were fishing tools (lines, nets), disposable cutlery, cups and straws, cigarette butts, flip flops and other items such as plastic boat compartments. In collaboration with a local company for plastic management and the Circular Economy and Eco Innovation Institute (Sweden), all plastic debris was recycled. 
A granulation process was applied, transforming the plastic into building materials used for refugees’ houses, litter bins bought by municipalities and schools, and other items such as shower components. In terms of volunteering and attendance at public awareness seminars, interest rose by 63% across different age ranges and professions. Although the research is fairly new for Mykonos island and logistics issues potentially affected systematic sampling, it appears that plastic debris is the main littering source, attributable possibly to the intense touristic activity on the island all year round. Marine environmental awareness activities, moreover, proved to be an effective tool in shaping public perception of marine debris and altering the daily habits of local society. Since the beginning of this project, three new local environmental teams have been formed against marine pollution, supported by the local authorities and stakeholders. The continuous need and requests for products made from recycled marine debris appear to be socio-economically beneficial to the local community, and actions are being taken to expand the project nationally. Finally, as this is an ongoing project and new scientific information is still being collected, further funding and research are needed.

Keywords: Greece, marine debris, marine environmental awareness, Mykonos island, plastics debris, plastic granulation, recycled plastic, tourism, waste management

Procedia PDF Downloads 110
87 How the Writer Tells the Story Should Be the Primary Concern rather than Who Can Write about Whom: The Limits of Cultural Appropriation Vis-à-Vis The Ethics of Narrative Empathy

Authors: Alexandra Cheira

Abstract:

Cultural appropriation has been theorised as a form of colonialism in which members of a dominant culture reduce cultural elements that are deeply meaningful to a minority culture to the category of the “exotic other” since they do not experience the oppression and discriminations faced by members of the minority culture. Yet, in the particular case of literature, writers such as Lionel Shriver and Bernardine Evaristo have argued that authors from a cultural majority have a right to write in the voice of someone from a cultural minority, hence attacking the idea that this is a form of cultural appropriation. By definition, Shriver and Evaristo claim, writers are supposed to write beyond their own culture, gender, class, and/or race. In this light, this paper discusses the limits of cultural appropriation vis-à-vis the ethics of narrative empathy by addressing the mixed critical reception of Kathryn Stockett’s The Help (2009) and Jeanine Cummins’s American Dirt (2020). In fact, both novels were acclaimed as global eye-openers regarding the struggles of, respectively, Mexican migrants and African American maids. At the same time, both novelists have been accused of cultural appropriation by telling a story that is not theirs to tell, given the fact that they are white women telling these stories in what critics have argued is really an American voice telling a story to American readers. These claims will be investigated within the framework of Edward Said’s foundational examination of Orientalism in the field of postcolonial studies as a Western style for authoritatively restructuring the Orient. This means that Orientalist stereotypes regarding Eastern cultures have implicitly validated colonial and imperial pursuits, in the specific context of literary representations of African American and Mexican cultures by white writers. 
At the same time, the conflicted reception of American Dirt and The Help will be examined within the critical framework of narrative empathy as theorised by Suzanne Keen. Hence, there will be a particular focus on the way a reader’s heated perception that the author’s perspective is purely dishonest can result from a friction between an author’s intention and a reader’s experience of narrative empathy, while a shared sense of empathy between authors and readers can be a rousing momentum to move beyond literary response to social action. Finally, in order to assess that “the key question should not be who can write about whom, but how the writer tells the story”, the recent controversy surrounding Dutch author Marieke Lucas Rijneveld’s decision to withdraw from translating American poet Amanda Gorman’s work into Dutch will be duly investigated. In fact, Rijneveld stepped down after journalist and activist Janice Deul criticised Dutch publisher Meulenhoff for choosing a translator who was not also Black, despite the fact that 22-year-old Gorman had selected the 29-year-old Rijneveld herself, as a fellow young writer who had likewise come to fame early on in life. In this light, the critical argument that the controversial reception of The Help reveals as much about US race relations in the early twenty-first century as about the complex literary transactions between individual readers and the novel itself will also be discussed in the extended context of American Dirt and white author Marieke Rijneveld’s withdrawal from the projected translation of Black poet Amanda Gorman.

Keywords: cultural appropriation, cultural stereotypes, narrative empathy, race relations

Procedia PDF Downloads 70
86 Harnessing the Benefits and Mitigating the Challenges of Neurosensitivity for Learners: A Mixed Methods Study

Authors: Kaaryn Cater

Abstract:

People vary in how they perceive, process, and react to internal, external, social, and emotional environmental factors; some are more sensitive than others. Highly sensitive people have a more reactive nervous system and are more impacted by both positive and negative environmental conditions (Differential Susceptibility). Further, some sensitive individuals are disproportionately able to benefit from positive and supportive environments without necessarily suffering negative impacts in less supportive environments (Vantage Sensitivity). Environmental sensitivity is underpinned by physiological, genetic, and personality/temperamental factors, and the phenotypic expression of high sensitivity is Sensory Processing Sensitivity. The hallmarks of Sensory Processing Sensitivity are deep cognitive processing, emotional reactivity, high levels of empathy, noticing environmental subtleties, a tendency to observe new and novel situations, and a propensity to become overwhelmed when over-stimulated. Several educational advantages associated with high sensitivity include creativity, enhanced memory, divergent thinking, giftedness, and metacognitive monitoring. High sensitivity can also lead to some educational challenges, particularly managing multiple conflicting demands and negotiating low sensory thresholds. A mixed methods study was undertaken. In the first quantitative study, participants completed the Perceived Success in Study Survey (PSISS) and the Highly Sensitive Person Scale (HSPS-12). Inclusion criteria were current or previous postsecondary education experience. The survey was presented on social media, and snowball recruitment was employed (n=365). The data were uploaded to the Statistical Package for the Social Sciences (SPSS) v26, and descriptive statistics found normal distribution. 
T-tests and analysis of variance (ANOVA) calculations found no difference in the responses of demographic groups, and Principal Components Analysis and post hoc Tukey calculations identified positive associations between high sensitivity and three of the five PSISS factors. Further ANOVA calculations found positive associations between the PSISS and two of the three sensitivity subscales. This study included a response field to register interest in further research. Respondents who scored in the 70th percentile on the HSPS-12 were invited to participate in a semi-structured interview. Thirteen interviews were conducted remotely (12 female). Reflexive inductive thematic analysis was employed to analyse data, and a descriptive approach was employed to present data reflective of participant experience. The results of this study found that highly sensitive students prioritize work-life balance; employ a range of practical metacognitive study and self-care strategies; value independent learning; connect with learning that is meaningful; and are bothered by aspects of the physical learning environment, including lighting, noise, and indoor environmental pollutants. There is a dearth of research investigating sensitivity in the educational context, and these studies highlight the need to promote widespread education sector awareness of environmental sensitivity, and the need to include sensitivity in sector and institutional diversity and inclusion initiatives.
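The group comparisons described above rest on the one-way ANOVA F statistic: the between-group mean square divided by the within-group mean square. A minimal sketch with hypothetical scores (not the study's data):

```python
import numpy as np

def one_way_anova_f(*groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square, for comparing several groups' scores."""
    data = np.concatenate(groups)
    grand_mean = data.mean()
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = data.size - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical PSISS-style scores for three demographic groups
g1 = np.array([1.0, 2.0, 3.0])
g2 = np.array([2.0, 3.0, 4.0])
g3 = np.array([5.0, 6.0, 7.0])
F = one_way_anova_f(g1, g2, g3)
```

A large F relative to the F distribution with (df_between, df_within) degrees of freedom would indicate a group difference; the study's null result corresponds to F values near 1.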

Keywords: differential susceptibility, highly sensitive person, learning, neurosensitivity, sensory processing sensitivity, vantage sensitivity

Procedia PDF Downloads 65
85 Posts by Influencers Promoting Water Saving: The Impact of Distance and the Perception of Effectiveness on Behavior

Authors: Sancho-Esper Franco, Rodríguez Sánchez Carla, Sánchez Carolina, Orús-Sanclemente Carlos

Abstract:

Water scarcity is a reality that affects many regions of the world and is aggravated by climate change and population growth. Saving water has become an urgent need to ensure the sustainability of the planet and the survival of many communities, where youth and social networks play a key role in promoting responsible practices and adopting habits that contribute to environmental preservation. This study analyzes the persuasion capacity of messages designed to promote pro-environmental behaviors among youth. Specifically, it studies how the perceived effectiveness of the response (personal response efficacy) and the perceived distance from the source of the message influence the water-saving behavior of the audience. To do so, two communication frameworks are combined. The first is Construal Level Theory, based on the concept of "psychological distance": people, objects, or events can be perceived as psychologically near or far, and this subjective distance (i.e., social, temporal, or spatial) determines their attitudes, emotions, and actions. This research focuses on the spatial and social distance generated by cultural differences between influencers and their audience, to understand how cultural distance can influence the persuasiveness of a message. Research on the effects of psychological distance between influencers and followers in the pro-environmental field is very limited, yet it is relevant because people can learn specific behaviors suggested by opinion leaders such as influencers on social networks. The second framework draws on approaches to behavioral change suggesting that the perceived efficacy of a behavior can explain individual pro-environmental actions. 
People will be more likely to adopt a new behavior if they perceive that they are capable of performing it (efficacy belief) and that their behavior will effectively contribute to solving that problem (personal response efficacy). It is also important to study the different actors (social and individual) that are perceived as responsible for addressing environmental problems. Specifically, we analyze to what extent the belief that an individual’s water-saving actions are effective in solving the problem can influence water-saving behavior, since this perceived individual effectiveness increases people's sense of obligation and responsibility towards the problem. However, in this regard, empirical evidence presents mixed results. Our study addresses the call for experimental studies manipulating different subtypes of response effectiveness to generate robust causal evidence. Based on all the above, this research analyzes whether cultural distance (local vs. international influencer) and the perceived effectiveness of the behavior (individual vs. collective personal response efficacy) affect the actual behavior and the intention to conserve water of social network users. A 2 (local vs. international influencer) x 2 (individual vs. collective response efficacy) experiment is designed and estimated. The results show that a message from a local influencer appealing to individual responsibility exerts greater influence on intention and actual water-saving behavior, given the cultural closeness between influencer and follower, and the appeal to individual responsibility increases the feeling of obligation to participate in pro-environmental actions. These results offer important implications for social marketing campaigns that seek to promote water conservation.
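In a 2 x 2 between-subjects design of this kind, the two main effects and the interaction can be read off the four cell means. A sketch with purely illustrative numbers (the study's actual means are not reported here):

```python
import numpy as np

# Hypothetical cell means for water-saving intention (1-7 scale) in a
# 2 (influencer: local vs international) x 2 (appeal: individual vs
# collective) between-subjects design. Rows: influencer; columns: appeal.
means = np.array([[5.8, 4.9],    # local:         individual, collective
                  [4.6, 4.4]])   # international: individual, collective

# Main effect of influencer: local minus international, averaged over appeal
main_influencer = means[0].mean() - means[1].mean()

# Main effect of appeal: individual minus collective, averaged over influencer
main_appeal = means[:, 0].mean() - means[:, 1].mean()

# Interaction: does the appeal effect differ between influencer types?
interaction = (means[0, 0] - means[0, 1]) - (means[1, 0] - means[1, 1])
```

A positive interaction of this form would mirror the reported result: the individual-responsibility appeal helps more when it comes from the culturally closer, local influencer.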

Keywords: social marketing, influencer, message framing, experiment, personal response efficacy, water saving

Procedia PDF Downloads 62
84 Colloid-Based Biodetection at Aqueous Electrical Interfaces Using Fluidic Dielectrophoresis

Authors: Francesca Crivellari, Nicholas Mavrogiannis, Zachary Gagnon

Abstract:

Portable diagnostic methods have become increasingly important for a number of different purposes: point-of-care screening in developing nations, environmental contamination studies, bio/chemical warfare agent detection, and end-user use for commercial health monitoring. The cheapest and most portable methods currently available are paper-based – lateral flow and dipstick methods are widely available in drug stores for use in pregnancy detection and blood glucose monitoring. These tests are successful because they are cheap to produce, easy to use, and require minimally invasive sampling. While adequate for their intended uses, in the realm of blood-borne pathogens and numerous cancers, these paper-based methods become unreliable, as they lack the nM/pM sensitivity currently achieved by clinical diagnostic methods. Clinical diagnostics, however, utilize techniques involving surface plasmon resonance (SPR) and enzyme-linked immunosorbent assays (ELISAs), which are expensive and unfeasible in terms of portability. To develop a better, competitive biosensor, we must reduce the cost of one, or increase the sensitivity of the other. Electric fields are commonly utilized in microfluidic devices to manipulate particles, biomolecules, and cells. Applications in this area, however, are primarily limited to interfaces formed between immiscible fluids. Miscible, liquid-liquid interfaces are common in microfluidic devices, and are easily reproduced with simple geometries. Here, we demonstrate the use of electric fields at liquid-liquid electrical interfaces, known as fluidic dielectrophoresis (fDEP), for biodetection in a microfluidic device. In this work, we apply an AC electric field across concurrent laminar streams with differing conductivities and permittivities to polarize the interface and induce a discernible, near-immediate, frequency-dependent interfacial tilt. 
We design this aqueous electrical interface, which becomes the biosensing “substrate,” to be intelligent – it “moves” only when a target of interest is present. This motion requires neither labels nor expensive electrical equipment, so the biosensor is inexpensive and portable, yet still capable of sensitive detection. Nanoparticles, due to their high surface-area-to-volume ratio, are often incorporated to enhance detection capabilities of schemes like SPR and fluorimetric assays. Most studies currently investigate binding at an immobilized solid-liquid or solid-gas interface, where particles are adsorbed onto a planar surface, functionalized with a receptor to create a reactive substrate, and subsequently flushed with a fluid or gas with the relevant analyte. These typically involve many preparation and rinsing steps, and are susceptible to surface fouling. Our microfluidic device is continuously flowing and renewing the “substrate,” and is thus not subject to fouling. In this work, we demonstrate the ability to electrokinetically detect biomolecules binding to functionalized nanoparticles at liquid-liquid interfaces using fDEP. In biotin-streptavidin experiments, we report binding detection limits on the order of 1-10 pM, without amplifying signals or concentrating samples. We also demonstrate the ability to detect this interfacial motion, and thus the presence of binding, using impedance spectroscopy, allowing this scheme to become non-optical, in addition to being label-free.

Keywords: biodetection, dielectrophoresis, microfluidics, nanoparticles

Procedia PDF Downloads 388
83 Examining Three Psychosocial Factors of Tax Compliance in Self-Employed Individuals using the Mindspace Framework - Evidence from Australia and Pakistan

Authors: Amna Tariq Shah

Abstract:

Amid the pandemic, the contemporary landscape has experienced accelerated growth in small business activities and an expanding digital marketplace, further exacerbating the issue of non-compliance among self-employed individuals through aggressive tax planning and evasion. This research seeks to address these challenges by developing strategic tax policies that promote voluntary compliance and improve taxpayer facilitation. The study employs the innovative MINDSPACE framework to examine three psychosocial factors—tax communication, tax literacy, and shaming—to optimize policy responses, address administrative shortcomings, and ensure adequate revenue collection for public goods and services. Preliminary findings suggest that incomprehensible communication from tax authorities drives individuals to seek alternative, potentially biased sources of tax information, thereby exacerbating non-compliance. Furthermore, the study reveals low tax literacy among Australian and Pakistani respondents, with many struggling to navigate complex tax processes and comprehend tax laws. Consequently, policy recommendations include simplifying tax return filing and enhancing pre-populated tax returns. In terms of shaming, the research indicates that Australians, being an individualistic society, may not respond well to shaming techniques due to privacy concerns. In contrast, Pakistanis, as a collectivistic society, may be more receptive to naming and shaming approaches. The study employs a mixed-method approach, utilizing interviews and surveys to analyze the issue in both jurisdictions. The use of mixed methods allows for a more comprehensive understanding of tax compliance behavior, combining the depth of qualitative insights with the generalizability of quantitative data, ultimately leading to more robust and well-informed policy recommendations. 
By examining evidence from opposite jurisdictions, namely a developed country (Australia) and a developing country (Pakistan), the study's applicability is enhanced, providing perspectives from two disparate contexts that offer insights from opposite ends of the economic, cultural, and social spectra. The non-comparative case study methodology offers valuable insights into human behavior, which can be applied to other jurisdictions as well. The application of the MINDSPACE framework in this research is particularly significant, as it introduces a novel approach to tax compliance behavior analysis. By integrating insights from behavioral economics, the framework enables a comprehensive understanding of the psychological and social factors influencing taxpayer decision-making, facilitating the development of targeted and effective policy interventions. This research carries substantial importance as it addresses critical challenges in tax compliance and administration, with far-reaching implications for revenue collection and the provision of public goods and services. By investigating the psychosocial factors that influence taxpayer behavior and utilizing the MINDSPACE framework, the study contributes invaluable insights to the field of tax policy. These insights can inform policymakers and tax administrators in developing more effective tax policies that enhance taxpayer facilitation, address administrative obstacles, promote a more equitable and efficient tax system, and foster voluntary compliance, ultimately strengthening the financial foundation of governments and communities.

Keywords: individual tax compliance behavior, psychosocial factors, tax non-compliance, tax policy

Procedia PDF Downloads 75
82 Fuzzy Multi-Objective Approach for Emergency Location Transportation Problem

Authors: Bidzina Matsaberidze, Anna Sikharulidze, Gia Sirbiladze, Bezhan Ghvaberidze

Abstract:

In the modern world, emergency management decision support systems are actively used by state organizations concerned with extreme and abnormal events; such systems provide optimal and safe management of the supplies needed for civil and military facilities in geographical areas affected by disasters, earthquakes, fires and other accidents, weapons of mass destruction, terrorist attacks, etc. Obviously, these kinds of extreme events cause significant losses and damage to infrastructure. In such cases, the use of intelligent support technologies is very important for quick and optimal location-transportation of emergency services in order to avoid new losses caused by these events. Timely servicing from emergency service centers to the affected disaster regions (response phase) is a key task of the emergency management system, and scientific research in this field occupies an important place in decision-making problems. Our goal was to create an expert knowledge-based intelligent support system, which will serve as an assistant tool to provide optimal solutions for the above-mentioned problem. The inputs to the mathematical model of the system are objective data, as well as expert evaluations. The outputs of the system are solutions for the Fuzzy Multi-Objective Emergency Location-Transportation Problem (FMOELTP) for disaster regions. The development and testing of the Intelligent Support System were done on the example of an experimental disaster region (a geographical zone of Georgia) generated using simulation modeling. Four objectives are considered in our model. The first objective is to minimize the expectation of the total transportation duration of needed products. The second objective is to minimize the total selection unreliability index of opened humanitarian aid distribution centers (HADCs). The third objective minimizes the number of agents needed to operate the opened HADCs. 
The fourth objective minimizes the non-covered demand over all demand points. Possibility chance constraints and objective constraints were constructed based on objective-subjective data. The FMOELTP was constructed in a static and fuzzy environment, since the decisions to be made are taken immediately after the disaster (within a few hours) with the information available at that moment. It is assumed that the requests for products are estimated by homeland security organizations, or their experts, based upon their experience and their evaluation of the disaster’s seriousness. Estimated transportation times take into account the routing access difficulty of the region and the infrastructure conditions. We propose an epsilon-constraint method for finding exact solutions to the problem. It is proved that this approach generates the exact Pareto front of the multi-objective location-transportation problem addressed. For large problem dimensions, however, the exact method can require long computing times. Thus, we propose an approximate method that imposes a number of stopping criteria on the exact method. For large dimensions of the FMOELTP, an Estimation of Distribution Algorithm (EDA) approach is developed.
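The epsilon-constraint method minimises one objective while bounding another, then sweeps the bound to trace the Pareto front. A toy sketch on a hypothetical HADC-selection instance, far simpler than the FMOELTP itself (sites, costs, and the bi-objective reduction are all illustrative assumptions):

```python
from itertools import combinations

# Toy instance: each candidate HADC contributes a transport duration f1
# and an unreliability index f2; both objectives are to be minimised.
candidates = {"A": (4, 9), "B": (7, 3), "C": (5, 6)}

def all_plans():
    """Enumerate every non-empty set of opened HADCs with its (f1, f2)."""
    sites = list(candidates)
    for r in range(1, len(sites) + 1):
        for combo in combinations(sites, r):
            f1 = sum(candidates[s][0] for s in combo)
            f2 = sum(candidates[s][1] for s in combo)
            yield combo, f1, f2

def epsilon_constraint(eps_grid):
    """Minimise f1 subject to f2 <= eps for each eps in the grid;
    the collected optima trace the Pareto front."""
    front = set()
    for eps in eps_grid:
        feasible = [(f1, f2) for _, f1, f2 in all_plans() if f2 <= eps]
        if feasible:
            front.add(min(feasible))  # tuple order: f1 first
    return sorted(front)

pareto = epsilon_constraint([3, 6, 9])
```

Sweeping eps over a fine enough grid recovers every non-dominated trade-off point, which is the exactness property claimed for the method; the real FMOELTP replaces this brute-force enumeration with a constrained optimization solve at each eps.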

Keywords: epsilon-constraint method, estimation of distribution algorithm, fuzzy multi-objective combinatorial programming problem, fuzzy multi-objective emergency location/transportation problem

Procedia PDF Downloads 321
81 Lessons Learned through a Bicultural Approach to Tsunami Education in Aotearoa New Zealand

Authors: Lucy H. Kaiser, Kate Boersen

Abstract:

Kura Kaupapa Māori (kura) and bilingual schools are primary schools in Aotearoa/New Zealand which operate fully or partially under Māori custom and have curricula developed to include Te Reo Māori and Tikanga Māori (Māori language and cultural practices). These schools were established to support Māori children and their families through reinforcing cultural identity by enabling Māori language and culture to flourish in the field of education. Māori kaupapa (values), Mātauranga Māori (Māori knowledge) and Te Reo are crucial considerations for the development of educational resources developed for kura, bilingual and mainstream schools. The inclusion of hazard risk in education has become an important issue in New Zealand due to the vulnerability of communities to a plethora of different hazards. Māori have an extensive knowledge of their local area and the history of hazards which is often not appropriately recognised within mainstream hazard education resources. Researchers from the Joint Centre for Disaster Research, Massey University and East Coast LAB (Life at the Boundary) in Napier were funded to collaboratively develop a toolkit of tsunami risk reduction activities with schools located in Hawke’s Bay’s tsunami evacuation zones. A Māori-led bicultural approach to developing and running the education activities was taken, focusing on creating culturally and locally relevant materials for students and schools as well as giving students a proactive role in making their communities better prepared for a tsunami event. The community-based participatory research is Māori-centred, framed by qualitative and Kaupapa Māori research methodologies, and utilises a range of data collection methods including interviews, focus groups and surveys. Māori participants, stakeholders and the researchers collaborated through the duration of the project to ensure the programme would align with the wider school curricula and kaupapa values. 
The education programme applied a tuakana/teina, Māori teaching and learning approach in which high school aged students (tuakana) developed tsunami preparedness activities to run with primary school students (teina). At the end of the education programme, high school students were asked to reflect on their participation, what they had learned and what they had enjoyed during the activities. This paper draws on lessons learned throughout this research project. As an exemplar, retaining a bicultural and bilingual perspective resulted in a more inclusive project as there was variability across the students’ levels of confidence using Te Reo and Māori knowledge and cultural frameworks. Providing a range of different learning and experiential activities including waiata (Māori songs), pūrākau (traditional stories) and games was important to ensure students had the opportunity to participate and contribute using a range of different approaches that were appropriate to their individual learning needs. Inclusion of teachers in facilitation also proved beneficial in assisting classroom behavioral management. Lessons were framed by the tikanga and kawa (protocols) of the school to maintain cultural safety for the researchers and the students. Finally, the tuakana/teina component of the education activities became the crux of the programme, demonstrating a path for Rangatahi to support their whānau and communities through facilitating disaster preparedness, risk reduction and resilience.

Keywords: school safety, indigenous, disaster preparedness, children, education, tsunami

Procedia PDF Downloads 122
80 Long-Term Tillage, Lime Matter and Cover Crop Effects under Heavy Soil Conditions in Northern Lithuania

Authors: Aleksandras Velykis, Antanas Satkus

Abstract:

Clay loam and clay soils are typical for northern Lithuania. These soils are susceptible to physical degradation in the case of intensive use of heavy machinery for field operations. However, clayey soils, having inherently poor physical properties, require more intensive tillage to maintain a proper physical condition for grown crops. Therefore, not only is the choice of a suitable tillage system very important for these soils in the region, but the search for additional supplementary measures is also essential for maintaining a good soil physical state. Research objective: To evaluate the long-term effects of different-intensity tillage, as well as its combinations with supplementary agronomic practices, on the improvement of soil physical conditions and environmental sustainability. The experiment examined the influence of deep and shallow ploughing, ploughless tillage, combinations of ploughless tillage with incorporation of lime sludge and cover crop for green manure, and application of the same cover crop for mulch without autumn tillage, under spring and winter crop growing conditions on clay loam (27% clay, 50% silt, 23% sand) Endocalcaric Endogleyic Cambisol. Methods: The indicators characterizing the impact of the investigated measures were determined using the following methods and devices: Soil dry bulk density – by Eijkelkamp cylinder (100 cm3), soil water content – by weighing, soil structure – by Retsch sieve shaker, aggregate stability – by Eijkelkamp wet sieving apparatus, soil mineral nitrogen – in 1 N KCL extract using a colorimetric method. Results: Clay loam soil physical state (dry bulk density, structure, aggregate stability, water content) depends on the tillage system and its combination with the additional practices used. Application of cover crop winter mulch without tillage in autumn, ploughless tillage and shallow ploughing causes the compaction of the bottom (15-25 cm) topsoil layer. 
However, due to ploughless tillage, the soil dry bulk density in the subsoil (25-35 cm) layer is lower compared to deep ploughing. Soil structure in the upper (0-15 cm) topsoil layer and in the seedbed (0-5 cm) prepared for spring crops is usually worse when applying ploughless tillage or cover crop mulch without autumn tillage. Application of lime sludge under ploughless tillage conditions helped to avoid the compaction and structure worsening in the upper topsoil layer, as well as to increase aggregate stability. Application of reduced tillage increased soil water content in the upper topsoil layer directly after spring crop sowing. However, under reduced tillage, the water content throughout the topsoil markedly decreased during prolonged dry periods. Combining reduced tillage with a cover crop for green manure and winter mulch is significant for preserving the environment: such application of cover crops reduces the leaching of mineral nitrogen into the deeper soil layers and environmental pollution. This work was supported by the National Science Program ‘The effect of long-term, different-intensity management of resources on the soils of different genesis and on other components of the agro-ecosystems’ [grant number SIT-9/2015] funded by the Research Council of Lithuania.

Keywords: clay loam, endocalcaric endogleyic cambisol, mineral nitrogen, physical state

Procedia PDF Downloads 226
79 Dynamic Facades: A Literature Review on Double-Skin Façade with Lightweight Materials

Authors: Victor Mantilla, Romeu Vicente, António Figueiredo, Victor Ferreira, Sandra Sorte

Abstract:

Integrating dynamic facades into contemporary building design is shaping a new era of energy efficiency and user comfort. These innovative facades, often built with lightweight construction systems and materials, offer a responsive and adaptive answer to the dynamic behavior of the outdoor climate. In regions characterized by high daily temperature fluctuations, the ability to adapt to environmental changes is therefore of paramount importance and a challenge. This paper presents a thorough review of the state of the art on double-skin facades (DSF), focusing on lightweight solutions for the external envelope. Dynamic facades featuring elements such as movable shading devices, phase change materials, and advanced control systems have revolutionized the built environment and offer a promising path for reducing energy consumption while enhancing occupant well-being. Lightweight construction systems are increasingly becoming the choice for these facade solutions, offering benefits such as reduced structural loads and less construction waste, thereby improving overall sustainability. However, the performance of dynamic facades based on low-thermal-inertia solutions in climatic contexts with high thermal amplitude still needs research, since their ability to adapt translates into variability and manipulation of the thermal transmittance coefficient (U-value). Emerging technologies can enable such dynamic thermal behavior through innovative materials, changes in geometry, and control strategies that optimize facade performance. These innovations allow a facade system to respond to shifting outdoor temperature, relative humidity, wind, and solar radiation conditions, ensuring that energy efficiency and occupant comfort are met together.
This review addresses the possible configurations of double-skin facades, particularly their responsiveness to seasonal temperature variations, with a specific focus on the challenges posed by winter and summer conditions. Notably, the design of a dynamic facade is significantly shaped by several pivotal factors, including the choice of materials, geometric considerations, and the implementation of effective monitoring systems. Within the realm of double-skin facades, various configurations are explored, encompassing exhaust-air, supply-air, and thermal-buffer mechanisms. The review places a specific emphasis on the thermal dynamics at play, closely examining the impact of factors such as the color of the facade, the slat angle and dimensions, and the positioning and type of shading devices employed in these innovative architectural structures. This paper synthesizes current research trends in the field, presenting case studies and technological innovations for a comprehensive understanding of the cutting-edge solutions propelling the evolution of building envelopes in the face of climate change, namely double-skin lightweight solutions that create sustainable, adaptable, and responsive building envelopes. As indicated in the review, flexible and lightweight systems have broad applicability across all building sectors, and there is growing recognition that retrofitting existing buildings may emerge as the predominant approach.
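The role of the U-value discussed above can be illustrated with the standard series-resistance relation U = 1/(Rsi + ΣR + Rse); the sketch below uses entirely hypothetical layer resistances (not values from the reviewed studies) to show how "switching" a double-skin cavity between still and ventilated states changes the envelope's U-value:

```python
# Hypothetical double-skin layer resistances in m^2*K/W; rsi/rse are the
# conventional internal/external surface resistances.

def u_value(layer_resistances, rsi=0.13, rse=0.04):
    """U = 1 / (Rsi + sum of layer resistances + Rse), in W/(m^2*K)."""
    return 1.0 / (rsi + sum(layer_resistances) + rse)

outer_skin = 0.02        # glazed or lightweight outer leaf
inner_wall = 1.50        # insulated lightweight inner leaf
cavity_still = 0.18      # unventilated cavity: still air insulates
cavity_ventilated = 0.0  # strongly ventilated cavity adds ~no resistance

u_closed = u_value([outer_skin, cavity_still, inner_wall])
u_open = u_value([outer_skin, cavity_ventilated, inner_wall])
# closing the cavity lowers U (retains heat); opening it raises U
```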

Keywords: adaptive, control systems, dynamic facades, energy efficiency, responsive, thermal comfort, thermal transmittance

Procedia PDF Downloads 80
78 A Descriptive Study on Water Scarcity as a One Health Challenge among the Osiram Community, Kajiado County, Kenya

Authors: Damiano Omari, Topirian Kerempe, Dibo Sama, Walter Wafula, Sharon Chepkoech, Chrispine Juma, Gilbert Kirui, Simon Mburu, Susan Keino

Abstract:

The One Health concept was officially adopted by international organizations and scholarly bodies in 1984. It aims at combining human, animal, and environmental components to address global health challenges. Through collaborative efforts, optimal health for people, animals, and the environment can be achieved. The One Health approach plays a significant role in the prevention and control of zoonotic diseases; it has also been noted that 75% of newly emerging human infectious diseases are zoonotic. In Kenya, One Health has been embraced and strongly advocated for by One Health Central and Eastern Africa (OHCEA). It was inaugurated on 17 October 2010 at a historic meeting facilitated by USAID, with participants from seven schools of public health and seven faculties of veterinary medicine in Eastern Africa, two American universities (Tufts and the University of Minnesota), and RESPOND project staff. The study was conducted in Loitoktok Sub-County, specifically in the Amboseli Ecosystem. The Amboseli ecosystem covers an area of 5,700 square kilometers and stretches between Mt. Kilimanjaro, the Chyulu Hills, Tsavo West National Park, and the Kenya/Tanzania border. The area is arid to semi-arid and is most suitable for pastoralism, with high potential for wildlife conservation and tourism enterprises. The ecosystem consists of Amboseli National Park, which is surrounded by six group ranches: Kimana, Olgulului, Selengei, Mbirikani, Kuku, and Rombo in Loitoktok District. The Manyatta studied was the Osiram Cultural Manyatta in Mbirikani group ranch. Apart from visiting the Manyatta, we also visited the sub-county hospital, the slaughter slab, the forest service, Kimana market, and Amboseli National Park. The aim of the study was to identify the One Health issues facing the community. This was done by conducting a community needs assessment and prioritization. Different methods were used to collect the qualitative and numerical data.
These included, among others, key informant interviews and focus group discussions. We also guided the community members in drawing their resource map, which helped identify the major resources on their land as well as some of the issues they were facing. Matrix piling, root cause analysis, and force field analysis tools were used to establish the One Health related priority issues facing community members. Skits were also used to present interventions for the major One Health issues to the community. Among the community's prioritized needs were water scarcity and inadequate markets for their beadwork. The group intervened on the various needs of the Manyatta. For water scarcity, we educated the community on water harvesting methods using gutters, as well as proper storage using tanks and earth dams; the community was also encouraged to recycle and conserve water. To improve market access, we educated the community on uploading their products online: a page was opened for them, and uploading the photos was demonstrated. They were also encouraged to be innovative to attract more clients.

Keywords: Amboseli ecosystem, community interventions, community needs assessment and prioritization, one health issues

Procedia PDF Downloads 170
77 Fuzzy Data, Random Drift, and a Theoretical Model for the Sequential Emergence of Religious Capacity in Genus Homo

Authors: Margaret Boone Rappaport, Christopher J. Corbally

Abstract:

The ancient ape ancestral population from which living great ape and human species evolved had demographic features affecting their evolution. The population was large, had great genetic variability, and natural selection was effective at honing adaptations. The emerging populations of chimpanzees and humans were affected more by founder effects and genetic drift because they were smaller. Natural selection did not disappear, but it was not as strong. Consequences of the 'population crash' and the human effective population size are introduced briefly. The history of the ancient apes is written in the genomes of living humans and great apes. The expansion of the brain began before the human line emerged. Coalescence times for some genes are very old – up to several million years, long before Homo sapiens. The mismatch between gene trees and species trees highlights the anthropoid speciation processes, and gives the human genome history a fuzzy, probabilistic quality. However, it suggests traits that might form a foundation for capacities emerging later. A theoretical model is presented in which the genomes of early ape populations provide the substructure for the emergence of religious capacity later, on the human line. The model does not search for religion, but for its foundations. It suggests a course by which an evolutionary line that began with prosimians eventually produced a human species with biologically based religious capacity. The model of the sequential emergence of religious capacity relies on cognitive science, neuroscience, paleoneurology, primate field studies, cognitive archaeology, genomics, and population genetics. It emphasizes five trait types: (1) Documented, positive selection of sensory capabilities on the human line may have favored survival, but also eventually enriched human religious experience.
(2) The bonobo model suggests a possible down-regulation of aggression and increase in tolerance while feeding, as well as paedomorphism – but, in a human species that remains cognitively sharp (unlike the bonobo). The two species emerged from the same ancient ape population, so it is logical to search for shared traits. (3) An up-regulation of emotional sensitivity and compassion seems to have occurred on the human line. This finds support in modern genetic studies. (4) The authors’ published model of morality's emergence in Homo erectus encompasses a cognitively based, decision-making capacity that was hypothetically overtaken, in part, by religious capacity. Together, they produced a strong, variable, biocultural capability to support human sociability. (5) The full flowering of human religious capacity came with the parietal expansion and smaller face (klinorhynchy) found only in Homo sapiens. Details from paleoneurology suggest the stage was set for human theologies. Larger parietal lobes allowed humans to imagine inner spaces, processes, and beings, and, with the frontal lobe, led to the first theologies composed of structured and integrated theories of the relationships between humans and the supernatural. The model leads to the evolution of a small population of African hominins that was ready to emerge with religious capacity when the species Homo sapiens evolved two hundred thousand years ago. By 50-60,000 years ago, when human ancestors left Africa, they were fully enabled.

Keywords: genetic drift, genomics, parietal expansion, religious capacity

Procedia PDF Downloads 341
76 Integrating Evidence Into Health Policy: Navigating Cross-Sector and Interdisciplinary Collaboration

Authors: Tessa Heeren

Abstract:

The following proposal pertains to the complex process of successfully implementing health policies that are based on public health research. A systematic review was conducted by the author and faculty at the Cluj School of Public Health in Romania. The reviewed articles covered a wide range of topics, such as barriers and facilitators to multi-sector collaboration, differences in professional cultures, and systemic obstacles. The reviewed literature identified communication, collaboration, user-friendly dissemination, and documentation of processes in the execution of applied research as important themes for promoting evidence in public health decision-making. This proposal fits the AcademyHealth National Health Policy Conference because it identifies and examines differences between the worlds of research and politics. Implications and new insights for federal and/or state health policy: Recommendations based on the findings of this research include using politically relevant levers to promote research (e.g., campaign donors, lobbies, established parties), modernizing dissemination practices, and reforms that facilitate the involvement of external stakeholders without relying on invitations from individual policy makers. Description of how evidence and/or data was or could be used: The reviewed articles illustrated shortcomings and areas for improvement in policy research processes and collaborative development. In general, the evidence base in the field of integrating research into policy lacks critical details of the actual process of developing evidence-based policy. This lack of logistical detail creates a barrier to potential replication of the collaborative efforts described in studies.
Potential impact of the presentation for health policy: The reviewed articles focused on identifying barriers and facilitators that arise in cross-sector collaboration, rather than on the process and impact of integrating evidence into policy. In addition, the type of evidence used in policy was rarely specified, and widely varying interpretations of the definition of evidence complicated overall conclusions. Background: Using evidence to inform public health decision-making has been proven effective; however, it is not clear how research is applied in practice. Aims: The objective of the current study was to assess the extent to which evidence is used in the public health decision-making process. Methods: To identify eligible studies, seven bibliographic databases (PubMed, Scopus, Cochrane Library, ScienceDirect, Web of Science, ClinicalKey, and Health and Safety Science Abstracts) were screened (search dates: 1990 – September 2015); a general internet search was also conducted. Primary research and systematic reviews about the use of evidence in public health policy in Europe were included. Two reviewers assessed the studies considered for inclusion and extracted data on objective, methods, population, and results. Data were synthesized as a narrative review. Results: Of 2,564 articles initially identified, 2,525 titles and abstracts were screened. Ultimately, 30 articles fit the research criteria by describing how or why evidence is or is not used in public health policy. The majority of included studies involved interviews and surveys (N=17). Study participants were policy makers, health care professionals, researchers, community members, service users, and experts in public health.

Keywords: cross-sector, dissemination, health policy, policy implementation

Procedia PDF Downloads 225
75 A Microwave Heating Model for Endothermic Reaction in the Cement Industry

Authors: Sofia N. Gonçalves, Duarte M. S. Albuquerque, José C. F. Pereira

Abstract:

Microwave technology has been gaining importance in contributing to decarbonization in high-energy-demand industries. Despite the several numerical models presented in the literature, a proper Verification and Validation exercise is still lacking. This is important and required to evaluate the accuracy and adequacy of the physical process model. Another issue is impedance matching, an important mechanism used in microwave experiments to increase electromagnetic efficiency. Such a mechanism is not available in current computational tools and therefore requires an external numerical procedure. A numerical model was implemented to study the continuous processing of limestone with microwave heating. This process requires the material to be heated to a temperature that prompts a highly endothermic reaction. Both a 2D and a 3D model were built in COMSOL Multiphysics to solve the two-way coupling between the Maxwell and energy equations, along with the coupling between the heat transfer phenomena and the endothermic limestone reaction. The 2D model was used to study and evaluate the required numerical procedure and also serves as a benchmark test, allowing other authors to implement impedance matching procedures. To achieve this goal, a controller built in MATLAB was used to continuously match the cavity impedance and predict the required energy for the system, thus successfully avoiding energy inefficiencies. The 3D model reproduces realistic results and therefore supports the main conclusions of this work. Limestone was modeled as a continuous flow under the transport of concentrated species, whose material and kinetic properties were taken from the literature. Verification and Validation of the coupled model was carried out separately from the chemical kinetic model. The chemical kinetic model was found to correctly describe the chosen kinetic equation by comparing numerical results with experimental data.
A solution verification was made for the electromagnetic interface, where second-order and fourth-order accurate schemes were found for linear and quadratic elements, respectively, with numerical uncertainty lower than 0.03%. Regarding the coupled model, it was demonstrated that the numerical error diverges for the heat transfer interface with the mapped mesh. Results showed numerical stability for the triangular mesh, and the numerical uncertainty was less than 0.1%. This study evaluated the influence of limestone velocity, heat transfer, and load on thermal decomposition and overall process efficiency. The velocity and heat transfer coefficient were studied with the 2D model, while different material loads were studied with the 3D model. Both models proved to be highly unstable when solving non-linear temperature distributions. High-velocity flows exhibited a propensity for thermal runaways, and the thermal efficiency tended to stabilize for the higher velocities and higher filling ratios. Microwave efficiency exhibited an optimal velocity for each heat transfer coefficient, pointing out that electromagnetic efficiency is a consequence of energy distribution uniformity. The 3D results indicated inefficient development of the electric field for low filling ratios. Thermal efficiencies higher than 90% were found for the higher loads, and microwave efficiencies up to 75% were accomplished. The 80% filling ratio proved to be the optimal load, with an associated global efficiency of 70%.
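The solution-verification step described above (observed second- and fourth-order accuracy) rests on the standard observed-order-of-accuracy relation for three systematically refined grids; the sketch below uses made-up sample values, not the paper's data:

```python
import math

def observed_order(f_coarse: float, f_medium: float, f_fine: float,
                   r: float = 2.0) -> float:
    """Observed order of accuracy p from three grids refined by a constant
    ratio r:  p = ln((f_coarse - f_medium) / (f_medium - f_fine)) / ln(r)."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

# Made-up solutions converging with second-order behavior: the successive
# differences shrink by r^p = 4 at each refinement (0.12 -> 0.03).
p = observed_order(1.16, 1.04, 1.01)  # ~2.0
```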

Keywords: multiphysics modeling, microwave heating, verification and validation, endothermic reactions modeling, impedance matching, limestone continuous processing

Procedia PDF Downloads 140
74 Charcoal Traditional Production in Portugal: Contribution to the Quantification of Air Pollutant Emissions

Authors: Cátia Gonçalves, Teresa Nunes, Inês Pina, Ana Vicente, C. Alves, Felix Charvet, Daniel Neves, A. Matos

Abstract:

The production of charcoal relies on rudimentary technologies using traditional brick kilns. Charcoal is produced under pyrolysis conditions, i.e., breaking down the chemical structure of biomass at high temperature in the absence of air. The amounts of the pyrolysis products (charcoal, pyroligneous extract, and flue gas) depend on various parameters, including temperature, time, pressure, kiln design, and wood characteristics such as moisture content. This activity is recognized for its inefficiency and high pollution levels, yet it is poorly characterized. It is widely distributed and is a vital economic activity in certain regions of Portugal, playing a relevant role in the management of woody residues. The location of the units determines the biomass used for charcoal production. The Portalegre district, in the Alto Alentejo region (Portugal), is a good example: essentially rural, with a predominant farming, agricultural, and forestry profile, and with significant charcoal production activity. In this district, a recent inventory identified almost 50 charcoal production units, equivalent to more than 450 kilns, of which 80% appear to be in operation. A field campaign was designed with the objective of determining the composition of the emissions released during a charcoal production cycle. A total of 30 samples of particulate matter and 20 gas samples in Tedlar bags were collected. Particulate and gas samplings were performed in parallel, two in the morning and two in the afternoon, alternating the inlet heads (PM₁₀ and PM₂.₅) on the particulate sampler. The gas and particulate samples were collected in the plume, as close as possible to the chimney emission point. The biomass (dry basis) used in the carbonization process was a mixture of cork oak (77 wt.%), holm oak (7 wt.%), stumps (11 wt.%), and charred wood (5 wt.%) from previous carbonization processes.
A cylindrical batch kiln (80 m³), 4.5 m in diameter and 5 m in height, was used in this study. The composition of the gases was determined by gas chromatography, while the particulate samples (PM₁₀, PM₂.₅) were subjected, after prior gravimetric determination, to different analytical techniques (thermo-optical transmission, ion chromatography, HPAE-PAD, and GC-MS after solvent extraction) to study their organic and inorganic constituents. The charcoal production cycle presents widely varying operating conditions, which are reflected in the composition of the gases and particles produced and emitted throughout the process. The concentrations of PM₁₀ and PM₂.₅ in the plume ranged between 0.003 and 0.293 g m⁻³ and between 0.004 and 0.292 g m⁻³, respectively. On average, total carbon accounts for 65% of PM₁₀ and 56% of PM₂.₅, inorganic ions for 2.8% and 2.3%, and sugars for 1.27% and 1.21%, respectively. The organic fraction studied so far includes more than 30 aliphatic compounds and 20 PAHs. The emission factors of particulate matter for charcoal production in the traditional kiln were 33 g/kg (wood, dry basis) for PM₁₀ and 27 g/kg (wood, dry basis) for PM₂.₅. With the data obtained in this study, it is possible to fill the lack of information about the environmental impact of traditional charcoal production in Portugal. Acknowledgment: The authors thank FCT – Portuguese Science Foundation, I.P., and the Ministry of Science, Technology and Higher Education of Portugal for financial support within the scope of the projects CHARCLEAN (PCIF/GVB/0179/2017) and CESAM (UIDP/50017/2020 + UIDB/50017/2020).
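An emission factor of the kind reported above (grams of particulate matter per kilogram of dry wood) follows from a simple mass balance between plume concentration, flue gas volume, and wood charge; the sketch below uses hypothetical figures, chosen only to reproduce the 33 g/kg order of magnitude, not the campaign's measurements:

```python
# EF (g/kg) = mean plume concentration (g/m^3) * total flue gas volume (m^3)
#             / dry wood charge (kg).  All figures below are hypothetical.

def emission_factor_g_per_kg(mean_conc_g_m3: float,
                             flue_gas_volume_m3: float,
                             dry_wood_kg: float) -> float:
    """Pollutant mass emitted per kg of dry wood carbonized."""
    return mean_conc_g_m3 * flue_gas_volume_m3 / dry_wood_kg

ef_pm10 = emission_factor_g_per_kg(0.15, 1.1e6, 5000.0)  # 33.0 g/kg
```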

Keywords: brick kilns, charcoal, emission factors, PAHs, total carbon

Procedia PDF Downloads 142
73 Expanded Polyurethane Foams and Waterborne-Polyurethanes from Vegetable Oils

Authors: A. Cifarelli, L. Boggioni, F. Bertini, L. Magon, M. Pitalieri, S. Losio

Abstract:

Nowadays, growing environmental awareness and the dwindling of fossil resources push the polyurethane (PU) industry towards renewable polymers with a low carbon footprint to replace feedstocks from petroleum sources. The main challenge in this field is replacing high-performance products derived from fossil fuels with novel synthetic polymers derived from 'green monomers'. Bio-polyols from plant oils have attracted significant industrial interest and major attention in scientific research due to their availability and biodegradability. Triglycerides rich in unsaturated fatty acids, such as soybean oil (SBO) and linseed oil (ELO), are particularly interesting because their structures and functionalities can be tuned by chemical modification to obtain polymeric materials with the expected final properties. Unfortunately, their use is still limited by processing or performance problems, because a high functionality, as well as a high OH number, of the polyols results in increased cross-linking density in the resulting PUs. The main aim of this study is to evaluate soy- and linseed-based polyols as precursors of prepolymers for the production of polyurethane foams (PUFs) and waterborne polyurethanes (WPUs) used as coatings. An effective reaction route is employed for its simplicity and economic impact. Indeed, the bio-polyols were synthesized by a two-step method: epoxidation of the double bonds in the vegetable oils and solvent-free ring-opening of the oxirane with organic acids. No organic solvents were used. Acids with different moieties (aliphatic or aromatic) and different hydrocarbon backbone lengths can be used to customize polyols with different functionalities. The ring-opening reaction requires fine tuning of the experimental conditions (time, temperature, molar ratio of carboxylic acid to epoxy groups) to control the acidity value of the end product as well as the amount of residual starting materials.
In addition, a Lewis base catalyst is used to favor the ring-opening reaction of the internal epoxy groups of the epoxidized oil and to minimize the formation of cross-linked structures, in order to achieve less viscous and more processable polyols with narrower polydispersity indices (molecular weight lower than 2000 g mol⁻¹). The functionality of the optimized polyols is tuned from 2 to 4 per molecule. The obtained polyols are characterized by GPC, NMR (¹H, ¹³C), and FT-IR spectroscopy to evaluate molecular masses, molecular mass distributions, microstructures, and linkage pathways. Several polyurethane foams have been prepared by the prepolymer method, blending conventional synthetic polyols with the new bio-polyols from soybean and linseed oils without using organic solvents. The compatibility of these bio-polyols with commercial polyols and diisocyanates is demonstrated. The influence of the bio-polyols on foam morphology (cellular structure, interconnectivity), density, and mechanical and thermal properties has been studied. Moreover, bio-based WPUs have been synthesized by well-established processing technology. In this synthesis, a portion of the commercial polyols is substituted by the new bio-polyols, and the properties of the coatings on leather substrates have been evaluated to determine coating hardness, abrasion resistance, impact resistance, gloss, chemical resistance, flammability, durability, and adhesive strength.
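The interplay of polyol functionality and molecular weight mentioned above is conventionally summarized by the hydroxyl number, OHN = 56100·f/Mn (mg KOH/g); a small sketch with illustrative values within the ranges stated in the abstract (not measured data from this study):

```python
# OH number (mg KOH/g) = 56,100 * functionality / number-average molecular
# weight (g/mol); 56,100 = molar mass of KOH (56.1 g/mol) * 1000.
# The f and Mn values below are illustrative, not measured data.

def hydroxyl_number(functionality: float, mn_g_mol: float) -> float:
    """Hydroxyl number of a polyol in mg KOH per gram of sample."""
    return 56100.0 * functionality / mn_g_mol

ohn = hydroxyl_number(3.0, 1500.0)  # 112.2 mg KOH/g
```

Higher functionality or lower Mn raises the OH number, which is why the abstract links both to increased cross-linking density in the final PU.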

Keywords: bio-polyols, polyurethane foams, solvent free synthesis, waterborne-polyurethanes

Procedia PDF Downloads 130
72 Effect of a Chatbot-Assisted Adoption of Self-Regulated Spaced Practice on Students' Vocabulary Acquisition and Cognitive Load

Authors: Ngoc-Nguyen Nguyen, Hsiu-Ling Chen, Thanh-Truc Lai Huynh

Abstract:

In foreign language learning, vocabulary acquisition has consistently posed challenges to learners, especially those at lower levels. Conventional approaches often fail both to promote vocabulary learning and to ensure engaging experiences. The emergence of mobile learning, particularly the integration of chatbot systems, has offered alternative ways to facilitate this practice. Chatbots have proven effective in educational contexts by offering interactive learning experiences in a constructivist manner, and these tools have attracted attention in the field of mobile-assisted language learning (MALL) in recent years. This research is conducted in an English for Specific Purposes (ESP) course at the A2 level of the CEFR, designed for non-English majors. Participants are first-year Vietnamese university students aged 18 to 20. This quasi-experimental study follows a pretest-posttest control group design over five weeks, with two classes randomly assigned as the experimental and control groups. The experimental group engages in chatbot-assisted spaced practice with SRL components, while the control group uses the same spaced practice without SRL. The two classes are taught by the same lecturer. Data are collected through pre- and post-tests, cognitive load surveys, and semi-structured interviews. The combination of self-regulated learning (SRL) and distributed practice, grounded in the spacing effect, forms the basis of the present study. SRL elements concerning goal setting and strategy planning are integrated into the system. The spaced practice method, similar to those used in widely recognized learning platforms such as Duolingo and Anki flashcards, spreads learning out over multiple sessions. The study's design features quizzes that progressively increase in difficulty, targeting both the Recognition-Recall and Comprehension-Use dimensions for comprehensive vocabulary acquisition.
The mobile-based chatbot system is built using Golang, an open-source programming language developed by Google. It follows a structured flow that guides learners through a series of four quizzes in each week of teacher-led learning. The quizzes start with less cognitively demanding tasks, such as multiple-choice questions, before moving on to more complex exercises. The integration of SRL elements allows students to self-evaluate the difficulty level of vocabulary items, predict the scores they will achieve, and choose appropriate strategies. This research is part one of a two-part project; the initial findings will inform the development of an upgraded chatbot system in part two, where adaptive features responding to the integration of SRL components will be introduced. The research objectives are to assess the effectiveness of the chatbot-assisted approach, based on the combination of spaced practice and SRL, in improving vocabulary acquisition and managing cognitive load, as well as to understand students' perceptions of this learning tool. The insights from this study will contribute to the growing body of research on mobile-assisted language learning and offer practical implications for integrating chatbot systems with spaced practice into educational settings to enhance vocabulary learning.
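The expanding-interval idea behind spaced practice can be expressed as a simple schedule generator. The study's own system is written in Golang; the Python sketch below is only illustrative, and its base interval and growth factor are hypothetical parameters, not the system's actual schedule:

```python
from datetime import date, timedelta

def review_schedule(start: date, sessions: int,
                    base_days: int = 1, factor: float = 2.0) -> list:
    """Expanding spaced-practice schedule: each review interval grows by
    a constant factor (1, 2, 4, 8, ... days with the defaults)."""
    day, interval, schedule = start, float(base_days), []
    for _ in range(sessions):
        day = day + timedelta(days=round(interval))
        schedule.append(day)
        interval *= factor
    return schedule

plan = review_schedule(date(2024, 9, 2), sessions=4)
# reviews fall on days +1, +3, +7, and +15 relative to the start date
```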

Keywords: mobile learning, mobile-assisted language learning, MALL, chatbots, vocabulary learning, spaced practice, spacing effect, self-regulated learning, SRL, self-regulation, EFL, cognitive load

Procedia PDF Downloads 19
71 Neologisms and Word-Formation Processes in Board Game Rulebook Corpus: Preliminary Results

Authors: Athanasios Karasimos, Vasiliki Makri

Abstract:

This research focuses on the design and development of the first text corpus based on board game rulebooks (BGRC), with direct application to the morphological analysis of neologisms and tendencies in word-formation processes. Corpus linguistics is a dynamic field that examines language through the lens of vast collections of texts. These corpora consist of diverse written and spoken materials, ranging from literature and newspapers to transcripts of everyday conversations. By morphologically analyzing such datasets, morphologists can gain valuable insights into how language functions and evolves, as the data reflect the byproducts of inflection, derivation, blending, clipping, compounding, and neology. This entails scrutinizing how words are created, modified, and combined to convey meaning in a corpus of challenging, creative, and straightforward texts that include rules, examples, tutorials, and tips. Board games teach players how to strategize, consider alternatives, and think flexibly, which are critical elements in language learning. Their rulebooks reflect not only their weight (complexity) but also the language properties of each genre and subgenre of these games. Board games are a captivating realm where strategy, competition, and creativity converge. Beyond the excitement of gameplay, board games also spark the art of word creation. Word games, like Scrabble, Codenames, Bananagrams, Wordcraft, Alice in the Wordland, and Once Upon a Time, challenge players to construct words from a pool of letters, thus encouraging linguistic ingenuity and vocabulary expansion. These games foster a love for language, motivating players to unearth obscure words and devise clever combinations.
On the other hand, the designers and creators produce rulebooks in which they convey their joy of discovering the hidden potential of language, igniting the imagination and playing with the beauty of words, making these games a delightful fusion of linguistic exploration and leisurely amusement. In this research, more than 150 rulebooks in English, from all types of modern board games, either language-independent or language-dependent, are used to create the BGRC. A representative sample of each genre (family, party, worker placement, deckbuilding, dice and chance games, strategy, eurogames, thematic, and role-playing, among others) was selected based on the score from BoardGameGeek, the size of the texts, and the level of complexity (weight) of the game. A morphological model with morphological networks, multi-word expressions, and word-creation mechanics based on the complexity of the textual structure, difficulty, and board game category will be presented. In enabling the identification of patterns, trends, and variations in word formation and other morphological processes, this research aspires to take advantage of this creative yet strict text genre so as to (a) give invaluable insight into the morphological creativity and innovation that (re)shape the lexicon of the English language and (b) test morphological theories. Overall, it is shown that corpus linguistics empowers us to explore the intricate tapestry of language, and of morphology in particular, revealing its richness, flexibility, and adaptability in the ever-evolving landscape of human expression.
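One simple way to flag candidate neologisms in a rulebook corpus is to keep word types absent from a reference lexicon; the sketch below uses a tiny hypothetical wordlist and sentence, not the actual BGRC pipeline:

```python
import re

# Tiny stand-in lexicon; a real run would use a full reference wordlist.
LEXICON = {"each", "player", "draws", "two", "cards", "from", "the", "deck"}

def candidate_neologisms(text: str, lexicon: set) -> list:
    """Return word types in `text` absent from `lexicon`, in order of
    first appearance; candidates for manual morphological analysis."""
    candidates, seen = [], set()
    for tok in re.findall(r"[a-z]+", text.lower()):
        if tok not in lexicon and tok not in seen:
            seen.add(tok)
            candidates.append(tok)
    return candidates

rule = "Each player draws two cards from the Wordcraft deck."
cands = candidate_neologisms(rule, LEXICON)  # ['wordcraft']
```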

Keywords: board game rulebooks, corpus design, morphological innovations, neologisms, word-formation processes

Procedia PDF Downloads 99
70 Black-Box-Optimization Approach for High Precision Multi-Axes Forward-Feed Design

Authors: Sebastian Kehne, Alexander Epple, Werner Herfs

Abstract:

A new method for the optimal selection of components for multi-axes forward-feed drive systems is proposed, in which the choice of motors, gear boxes, and ball screw drives is optimized. Essential here is the synchronization of the electrical and mechanical frequency behavior of all axes, because even advanced controls (like H∞ controls) can only control a small part of the mechanical modes, namely only those of observable and controllable states whose value can be derived from the positions of external linear length measurement systems and/or rotary encoders on the motor or gear box shafts. A further problem is the unknown processing forces, such as cutting forces in machine tools during normal operation, which make estimation and control via an observer even more difficult. To start with, the open-source Modelica Feed Drive Library, developed at the Laboratory for Machine Tools and Production Engineering (WZL), is extended from single-axis to multi-axes design. It is capable of simulating the mechanical, electrical, and thermal behavior of permanent magnet synchronous machines with inverters, different gear boxes, and ball screw drives in a mechanical system. To keep the calculation time down, analytical equations are used for the field- and torque-producing equivalent circuit, heat dissipation, and mechanical torque at the shaft. As a first step, a small machine tool with a working area of 635 x 315 x 420 mm is taken apart, and the mechanical transfer behavior is measured with an impulse hammer and acceleration sensors. From the frequency transfer functions, a mechanical finite element model is built up, which is reduced with substructure coupling to a mass-damper system that models the most important modes of the axes. This model is implemented with the Modelica Feed Drive Library and validated by further relative measurements between machine table and spindle holder with a piezo actuator and acceleration sensors. 
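The impulse-hammer measurement above yields frequency transfer functions from force and acceleration records. A minimal sketch of the standard single-record H1 estimator (the function name and the toy FIR system are assumptions for illustration; in practice several averaged hammer taps and windowing would be used):

```python
import numpy as np

def h1_frf(force, response, n_fft=None):
    """H1 estimator of the frequency response function from an
    impulse-hammer force signal and an accelerometer response:
    H1(f) = S_fx(f) / S_ff(f) (cross-spectrum over auto-spectrum)."""
    n = n_fft or len(force)
    F = np.fft.rfft(force, n)
    X = np.fft.rfft(response, n)
    return (np.conj(F) * X) / (np.conj(F) * F)

# Toy check: the response of a known FIR system to a unit impulse
# recovers that system's frequency response.
h = np.array([1.0, 0.5, 0.25, 0.125])   # known impulse response
force = np.zeros(64)
force[0] = 1.0                           # idealized hammer tap
response = np.convolve(force, h)[:64]    # "measured" acceleration
H = h1_frf(force, response)
print(np.allclose(H, np.fft.rfft(h, 64)))
```

The estimated FRFs would then feed the modal fit that calibrates the reduced mass-damper model.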
In a next step, the choice of possible components from motor catalogues is limited by derived analytical formulas, which are based on well-known metrics for the effective power and torque of the components. The simulation in Modelica is run with different permanent magnet synchronous motors, gear boxes, and ball screw drives from different suppliers. To speed up the optimization, different black-box optimization methods (surrogate-based, gradient-based, and evolutionary) are tested on the case. The chosen objective is to minimize the integral of the deviations when a step input is applied to the position controls of the different axes; small values indicate highly dynamic axes. In each iteration (the evaluation of one set of components), the control variables are adjusted automatically so that the overshoot is less than 1%. It is found that the order of the components in the optimization problem has a strong impact on the speed of the black-box optimization. An approach for efficient black-box optimization of multi-axes designs is presented in the last part. The authors would like to thank the German Research Foundation DFG for financial support of the project “Optimierung des mechatronischen Entwurfs von mehrachsigen Antriebssystemen (HE 5386/14-1 | 6954/4-1)” (English: Optimization of the Mechatronic Design of Multi-Axes Drive Systems).
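The component-selection problem above is a discrete black-box optimization: each evaluation of a motor/gear box/ball screw combination requires one simulation run. A minimal sketch with an invented toy catalogue and a made-up surrogate cost in place of the Modelica step-response simulation (all names and numbers are hypothetical):

```python
import itertools

# Hypothetical component catalogues (stand-ins for real supplier data);
# the score is a made-up dynamics figure: higher means stiffer/faster.
motors     = {"M1": 1.0, "M2": 1.6, "M3": 2.1}
gearboxes  = {"G1": 0.8, "G2": 1.2}
ballscrews = {"B1": 0.9, "B2": 1.4}

def step_cost(combo):
    """Black-box objective: stands in for the integral of the
    step-response deviation returned by one simulation run
    (lower is better, i.e. the axis settles faster)."""
    m, g, b = combo
    return 10.0 / (motors[m] * gearboxes[g] * ballscrews[b])

def exhaustive_best():
    """Brute-force baseline; surrogate-based or evolutionary samplers
    replace this loop once the catalogues grow too large to enumerate."""
    space = itertools.product(motors, gearboxes, ballscrews)
    best = min(space, key=step_cost)
    return best, step_cost(best)

best, cost = exhaustive_best()
print(best, round(cost, 3))  # ('M3', 'G2', 'B2') 2.834
```

With real catalogues the product space explodes combinatorially, which is why the paper compares surrogate-based, gradient-based, and evolutionary samplers instead of enumeration.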

Keywords: ball screw drive design, discrete optimization, forward feed drives, gear box design, linear drives, machine tools, motor design, multi-axes design

Procedia PDF Downloads 287
69 Intrigues of Brand Activism versus Brand Antagonism in Rival Online Football Brand Communities: The Case of the Top Two Premier Football Clubs in Ghana

Authors: Joshua Doe, George Amoako

Abstract:

Purpose: In an increasingly digital world, the realm of sports fandom has extended its borders, creating a vibrant ecosystem of online communities centered around football clubs. This study ventures into the intricate interplay of motivations that drive football fans to respond to brand activism and its profound implications for brand antagonism and engagement among two of Ghana's most revered premier football clubs. Methods: A sample of 459 fervent fans from these two rival clubs was engaged through self-administered questionnaires distributed via social media and online platforms. Data were analysed using partial least squares structural equation modelling (PLS-SEM). Findings: The tapestry of motivations that weave through these online football communities is as diverse as the fans themselves. It becomes apparent that fans are propelled by a spectrum of incentives. They seek education, yearn for information, revel in entertainment, embrace socialization, and fortify their self-esteem through their interactions within these digital spaces. Yet, it is the nuanced distinction in these motivations that shapes the trajectory of brand antagonism and engagement. Surprisingly, the study reveals a remarkable pattern. Football fans, despite their fierce rivalries, do not engage in brand antagonism based on educational pursuits, information-seeking endeavors, or socialization. Instead, it is motivations rooted in entertainment and self-esteem that serve as the fertile grounds for brand antagonism. Paradoxically, it is these very motivations, coupled with the desire for socialization, that nurture brand engagement, manifesting as active support and advocacy for their chosen club brand. Originality: Our research charts new waters by extending the boundaries of existing theories in the field. The Technology Acceptance Model, Uses and Gratifications Theory, and Social Identity Theory all find new dimensions within the context of online brand community engagement. 
This not only deepens our understanding of the multifaceted world of online football fandom but also invites us to explore the implications these insights carry within the digital realm. Contribution to Practice: For marketers, our findings offer a treasure trove of actionable insights. They beckon the development of targeted content strategies that resonate with fan motivations. The implementation of brand advocacy programs, fostering opportunities for socialization, and the effective management of brand antagonism emerge as pivotal strategies. Furthermore, the utilization of data-driven insights is poised to refine consumer engagement strategies and strengthen brand affinity. Future Studies: For future studies, we advocate for longitudinal, cross-cultural, and qualitative studies that could shed further light on this topic. Comparative analyses across different types of online brand communities, an exploration of the role of brand community leaders, and inquiries into the factors that contribute to brand community dissolution all beckon the research community. Furthermore, understanding motivation-specific antagonistic behaviors and the intricate relationship between information-seeking and engagement present exciting avenues for further exploration. This study unfurls a vibrant tapestry of fan motivations, brand activism, and rivalry within online football communities. It extends a hand to scholars and marketers alike, inviting them to embark on a journey through this captivating digital realm, where passion, rivalry, and engagement harmonize to shape the world of sports fandom as we know it.

Keywords: online brand engagement, football fans, brand antagonism, motivations

Procedia PDF Downloads 65
68 Italian Speech Vowels Landmark Detection through the Legacy Tool 'xkl' with Integration of Combined CNNs and RNNs

Authors: Kaleem Kashif, Tayyaba Anam, Yizhi Wu

Abstract:

This paper introduces a methodology for advancing Italian speech vowel landmark detection within the distinctive feature-based speech recognition domain. By integrating combined convolutional neural networks (CNNs) and recurrent neural networks (RNNs) into the legacy tool 'xkl', the study presents a comprehensive enhancement of that software. This integration incorporates reassigned spectrogram methodologies, enabling meticulous acoustic analysis. Simultaneously, the proposed model, integrating combined CNNs and RNNs, demonstrates unprecedented precision and robustness in landmark detection. The addition of reassigned spectrogram fusion within the 'xkl' software particularly enhances the precision of vowel formant estimation. This augmentation catalyzes unparalleled accuracy in landmark detection, resulting in a substantial performance leap compared to conventional methods. The proposed model emerges as a state-of-the-art solution in the domain of distinctive feature-based speech recognition systems. In the realm of deep learning, a synergistic integration of combined CNNs and RNNs is introduced, endowed with specialized temporal embeddings and harnessing self-attention mechanisms and positional embeddings. This design allows the model to excel in capturing intricate dependencies within Italian speech vowels, rendering it highly adaptable and sophisticated in the distinctive feature domain. Furthermore, an advanced temporal modeling approach employs Bayesian temporal encoding, refining the measurement of inter-landmark intervals. Comparative analysis against state-of-the-art models reveals a substantial improvement in accuracy, highlighting the robustness and efficacy of the proposed methodology. 
Upon rigorous testing on the LaMIT database, consisting of speech recorded in a silent room by four native Italian speakers, the landmark detector demonstrates exceptional performance, achieving a 95% true detection rate and a 10% false detection rate. A majority of missed landmarks were observed in proximity to reduced vowels. These promising results underscore the robust identifiability of landmarks within the speech waveform, establishing the feasibility of employing a landmark detector as a front end in a speech recognition system. The synergistic integration of reassigned spectrogram fusion, CNNs, RNNs, and Bayesian temporal encoding not only marks a significant advancement in Italian speech vowel landmark detection but also positions the proposed model as a leader in the field. The model offers distinct advantages, including unparalleled accuracy, adaptability, and sophistication, marking a milestone in the intersection of deep learning and distinctive feature-based speech recognition. This work contributes to the broader scientific community by presenting a methodologically rigorous framework for enhancing landmark detection accuracy in Italian speech vowels. The integration of cutting-edge techniques establishes a foundation for future advancements in speech signal processing, emphasizing the potential of the proposed model in practical applications across various domains requiring robust speech recognition systems.
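True and false detection rates like the 95%/10% reported above are typically computed by matching detected landmark times to reference annotations within a tolerance window. A minimal sketch of such scoring (the function name, tolerance, and toy data are assumptions, not the authors' exact protocol):

```python
def score_landmarks(detected, reference, tol=0.02):
    """Match detected landmark times (seconds) to reference landmarks
    within +/- tol; each reference landmark may be matched at most once.
    Returns (true detection rate, false detection rate)."""
    matched = set()
    false_alarms = 0
    for t in sorted(detected):
        hit = next((i for i, r in enumerate(reference)
                    if i not in matched and abs(t - r) <= tol), None)
        if hit is None:
            false_alarms += 1
        else:
            matched.add(hit)
    tdr = len(matched) / len(reference)
    fdr = false_alarms / len(detected) if detected else 0.0
    return tdr, fdr

# Toy example: 4 reference landmarks, 4 detections, one spurious.
ref = [0.10, 0.35, 0.62, 0.90]
det = [0.11, 0.36, 0.50, 0.89]
print(score_landmarks(det, ref))  # (0.75, 0.25): 3 hits, 1 false alarm
```

Under this convention, missed landmarks near reduced vowels lower the true detection rate, while spurious hits raise the false detection rate.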

Keywords: landmark detection, acoustic analysis, convolutional neural network, recurrent neural network

Procedia PDF Downloads 63