349 Cloud Based Supply Chain Traceability
Authors: Kedar J. Mahadeshwar
Abstract:
Concept introduction: This paper describes an innovative cloud-based, analytics-enabled solution that could address a major industry challenge approaching all of us globally faster than one might think. The world of the supply chain for drugs and devices is changing rapidly today. In the US, the Drug Supply Chain Security Act (DSCSA) is a new law on tracing, verification and serialization, phasing in from Jan 1, 2015 for manufacturers, repackagers, wholesalers and pharmacies/clinics. Similarly, pressures are building in Europe, China and many other countries that would require absolute end-to-end traceability of every drug and device. Companies (both manufacturers and distributors) can use this opportunity not only to be compliant but to differentiate themselves from the competition. Moreover, a country such as the UAE can lead in developing a global solution that brings innovation to this industry. Problem definition and timing: The counterfeit drug market, recognized by the FDA, causes billions of dollars in losses every year. Even in the UAE, the prevalence of counterfeit drugs, which enter through ports such as Dubai, remains a big concern, as per the UAE pharma and healthcare report, Q1 2015. Distribution of drugs and devices involves multiple processes and systems that do not talk to each other. Consumer confidence is at risk due to this lack of traceability, and any leading provider is at risk of losing its reputation. Globally, there is increasing pressure from government and regulatory bodies to trace the serial numbers and lot numbers of every drug and medical device throughout a supply chain. Though many large corporations use some form of ERP (enterprise resource planning) software, ERP is far from being able to trace a lot and serial number beyond the enterprise and make this information easily available in real time.
Solution: The solution described here involves a service provider that allows all subscribers to take advantage of this service. It allows a service provider, regardless of its physical location, to host a cloud-based traceability and analytics solution over millions of distribution transactions that capture the lots of each drug and device. The solution platform will capture the movement of every medical device and drug end to end, from its manufacturer to a hospital or a doctor, through a series of distributor or retail networks. The platform also provides an advanced analytics solution for intelligent online reporting. Why Dubai? Opportunity exists with the huge investment made in Dubai Healthcare City, and with the technology and infrastructure available to attract more FDI to provide such a service. The UAE and similar countries will face this pressure from regulators globally in the near future. More interestingly, Dubai can attract the innovators/companies that would run and host such a cloud-based solution and become a global hub of traceability.
Keywords: cloud, pharmaceutical, supply chain, tracking
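The platform described above rests on capturing custody-transfer events keyed by serial and lot number and reconstructing an end-to-end path from them. A minimal sketch of that idea is shown below; the field names and helper are illustrative assumptions, not the paper's actual schema:

```python
# Hypothetical event record a cloud traceability platform might store for
# each custody transfer of a serialized unit. Field names (serial_number,
# gtin, sender, ...) are illustrative assumptions, not the paper's schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TraceEvent:
    serial_number: str   # unit-level serial (DSCSA serialization)
    lot_number: str      # manufacturing lot
    gtin: str            # product identifier
    sender: str          # current custodian
    receiver: str        # next custodian
    timestamp: datetime

def chain_of_custody(events, serial):
    """Reconstruct the end-to-end path of one serialized unit."""
    path = sorted((e for e in events if e.serial_number == serial),
                  key=lambda e: e.timestamp)
    return [path[0].sender] + [e.receiver for e in path] if path else []

events = [
    TraceEvent("SN1", "LOT7", "00312345678906", "Manufacturer", "Wholesaler",
               datetime(2015, 1, 5)),
    TraceEvent("SN1", "LOT7", "00312345678906", "Wholesaler", "Pharmacy",
               datetime(2015, 1, 9)),
]
print(chain_of_custody(events, "SN1"))  # ['Manufacturer', 'Wholesaler', 'Pharmacy']
```

In a production system this event model would more plausibly follow the GS1 EPCIS standard, with the analytics layer querying such chains at scale.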
Procedia PDF Downloads 526
348 Forming Form, Motivation and Their Biolinguistic Hypothesis: The Case of Consonant Iconicity in Tashelhiyt Amazigh and English
Authors: Noury Bakrim
Abstract:
When dealing with motivation/arbitrariness, forming form (Forma Formans) and morphodynamics are to be grasped as relevant implications of enunciation/enactment, schematization within the specificity of language as sound/meaning articulation. Thus, the fact that a language is a form does not contradict stasis/dynamic enunciation (reflexivity vs double articulation). Moreover, some languages exemplify the role of the forming form, uttering, and schematization (roots in Semitic languages, the Chinese case). Beyond the evolutionary biosemiotic process (form/substance bifurcation, the split between realization/representation), non-isomorphism/asymmetry between linguistic form/norm and linguistic realization (phonetics for instance) opens up a new horizon problematizing the role of Brain – sensorimotor contribution in the continuous forming form. Therefore, we hypothesize biotization as both process/trace co-constructing motivation/forming form. Henceforth, referring to our findings concerning distribution and motivation patterns within Berber written texts (pulse based obstruents and nasal-lateral levels in poetry) and oral storytelling (consonant intensity clustering in quantitative and semantic/prosodic motivation), we understand consonant clustering, motivation and schematization as a complex phenomenon partaking in patterns of oral/written iconic prosody and reflexive metalinguistic representation opening the stable form. We focus our inquiry on both Amazigh and English clusters (/spl/, /spr/) and iconic consonant iteration in [gnunnuy] (to roll/tumble), [smummuy] (to moan sadly or crankily). For instance, the syllabic structures of /splaeʃ/ and /splaet/ imply an anamorphic representation of the state of the world: splash, impact on aquatic surfaces/splat impact on the ground. 
The pair has stridency and distribution as distinctive features which specify its phonetic realization (and part of its meaning): /ʃ/ is [+strident] and /t/ is [+distributed] on the vocal tract. Schematization is then a process relating both physiology/code as an arthron, a vocal/bodily, vocal/practical shaping of the motor-articulatory system, leading to syntactic/semantic thematization (agent/patient roles in /spl/, /sm/ and other clusters, or the tense uvular /qq/ at the initial position in Berber). Furthermore, the productivity of serial syllable sequencing in Berber points out different expressivity forms. We postulate two components of motivated formalization: i) the process of memory paradigmatization relating to sequence modeling under sensorimotor/verbal specific categories (production/perception), ii) the process of phonotactic selection - prosodic unconscious/subconscious distribution by virtue of iconicity. Based on multiple tests, including a questionnaire, phonotactic/visual recognition and oral/written reproduction, we aim at patterning/conceptualizing consonant schematization and motivation among EFL and Amazigh (Berber) learners and speakers, integrating biolinguistic hypotheses.
Keywords: consonant motivation and prosody, language and order of life, anamorphic representation, represented representation, biotization, sensori-motor and brain representation, form, formalization and schematization
Procedia PDF Downloads 142
347 Community Perception towards the Major Drivers for Deforestation and Land Degradation of Choke Afro-alpine and Sub-afro alpine Ecosystem, Northwest Ethiopia
Authors: Zelalem Teshager
Abstract:
The Choke Mountains have several endangered and endemic wildlife species and provide important ecosystem services. Despite their environmental importance, the Choke Mountains are in a precarious condition. This raised the need for an evaluation of the community's perception of deforestation and its major drivers, and for suggested possible solutions, in the Choke Mountains of northwestern Ethiopia. For this purpose, household surveys, key informant interviews, and focus group discussions were used. A total sample of 102 informants was used for this survey. A purposive sampling technique was applied to select the participants for in-depth interviews and focus group discussions. Both qualitative and quantitative data analyses were used. Descriptive statistics such as means, percentages, and frequencies, presented in tables, figures, and graphs, were used to organize, analyze, and interpret the data. The study found that smallholder agricultural land expansion, fuelwood collection, population growth, encroachment, free grazing, high demand for construction wood, unplanned resettlement, unemployment, border conflict, lack of a strong forest-protection system, and drought were the serious causes of forest depletion reported by local communities. Loss of land productivity, soil erosion, soil fertility decline, increasing wind velocity, rising temperature, and frequency of drought were the most perceived impacts of deforestation. Most of the farmers have a holistic understanding of forest cover change. Strengthening forest protection, improving soil and water conservation, enrichment planting, awareness creation, payment for ecosystem services, and zero-grazing campaigns were mentioned as possible solutions to the current state of deforestation. Intervention measures such as animal fattening, beekeeping, and fruit production can help reduce the causes of deforestation and improve communities' livelihoods.
In addition, concerted conservation efforts will ensure that the forests' ecosystems contribute to increased ecosystem services. The major drivers of deforestation should be addressed with government intervention to change dependency on forest resources, the income sources of the people, and the institutional set-up of the forestry sector. Overall, a further reduction in anthropogenic pressure is urgent and crucial for the recovery of the afro-alpine vegetation and the interrelated endangered wildlife in the Choke Mountains.
Keywords: choke afro-alpine, deforestation, drivers, intervention measures, perceptions
Procedia PDF Downloads 53
346 The Positive Effects of Processing Instruction on the Acquisition of French as a Second Language: An Eye-Tracking Study
Authors: Cecile Laval, Harriet Lowe
Abstract:
Processing Instruction is a psycholinguistic pedagogical approach drawing insights from the Input Processing Model, which establishes the initial innate strategies used by second language learners to connect the form and meaning of linguistic features. With the ever-growing use of technology in Second Language Acquisition research, the present study uses eye-tracking to measure the effectiveness of Processing Instruction in the acquisition of French and its effects on learners' cognitive strategies. The experiment was designed using a TOBII Pro TX300 eye-tracker to measure participants' default strategies when processing French linguistic input, and any cognitive changes after receiving Processing Instruction treatment. Participants were drawn from lower intermediate adult learners of French at the University of Greenwich and randomly assigned to two groups. The study used a pre-test/post-test methodology. The pre-tests (one per linguistic item) were administered via the eye-tracker to both groups one week prior to instructional treatment. One group received the full Processing Instruction treatment (explicit information on the grammatical item and on the processing strategies, and structured input activities) on the primary target linguistic feature (French past tense imperfective aspect). The second group received the Processing Instruction treatment without the explicit information on the processing strategies. Three immediate post-tests on the three grammatical structures under investigation (French past tense imperfective aspect, French subjunctive used for the expression of doubt, and the French causative construction with faire) were administered with the eye-tracker. The eye-tracking data showed a positive change in learners' processing of the French target features after instruction, with improvement in the interpretation of the three linguistic features under investigation.
100% of participants in both groups made a statistically significant improvement (p=0.001) in the interpretation of the primary target feature (French past tense imperfective aspect) after treatment. 62.5% of participants made an improvement on the secondary target item (French subjunctive used for the expression of doubt), and 37.5% of participants made an improvement on the cumulative target feature (French causative construction with faire). Statistically, there was no significant difference between the pre-test and post-test scores on the cumulative target feature; however, the variance approximately tripled between the pre-test and the post-test (3.9 pre-test and 9.6 post-test). This suggests that the treatment does not affect participants homogeneously and implies a role for individual differences in the transfer-of-training effect of Processing Instruction. The use of eye-tracking provides an opportunity to study the unconscious processing decisions made during moment-by-moment comprehension. The visual data from the eye-tracking demonstrate changes in participants' processing strategies. Gaze plots from pre- and post-tests show participants' fixation points shifting from content words to the verb ending. This change in processing strategies can be clearly seen in the interpretation of sentences for both the primary and secondary target features. This paper will present the research methodology, design and results of the experimental study using eye-tracking to investigate the primary effects and transfer-of-training effects of Processing Instruction. It will then provide evidence of the cognitive benefits of Processing Instruction in Second Language Acquisition and offer suggestions for the teaching of grammar in a second language.
Keywords: eye-tracking, language teaching, processing instruction, second language acquisition
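The dispersion argument above (no mean difference, but variance roughly tripling) can be illustrated numerically: when a treatment helps some learners far more than others, post-test variance grows even if the mean gain is modest. The scores below are invented for illustration only, not the study's data:

```python
# Invented pre/post interpretation scores for 8 learners, chosen so that
# gains are heterogeneous: some learners improve a lot, others not at all.
import statistics

pre  = [4, 5, 3, 6, 4, 5, 3, 4]
post = [4, 9, 3, 10, 5, 9, 3, 5]

gains = [b - a for a, b in zip(pre, post)]
print(statistics.variance(pre))    # dispersion before treatment
print(statistics.variance(post))   # dispersion after treatment (much larger)
print(statistics.mean(gains))      # modest average gain
```

With heterogeneous gains, the post-test variance here is several times the pre-test variance, mirroring the pattern the study attributes to individual differences.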
Procedia PDF Downloads 278
345 Characterization of New Sources of Maize (Zea mays L.) Resistance to Sitophilus zeamais (Coleoptera: Curculionidae) Infestation in Stored Maize
Authors: L. C. Nwosu, C. O. Adedire, M. O. Ashamo, E. O. Ogunwolu
Abstract:
The maize weevil, Sitophilus zeamais Motschulsky, is a notorious pest of stored maize (Zea mays L.). The development of maize varieties resistant to the weevil is a major breeding objective. The study investigated the parameters and mechanisms that confer resistance on a maize variety to S. zeamais infestation, using twenty elite maize varieties. Detailed morphological, physical and chemical studies were conducted on the whole maize grain and the grain pericarp. Resistance was assessed at 33, 56, and 90 days post infestation using weevil mortality rate, weevil survival rate, percent grain damage, percent grain weight loss, weight of grain powder, oviposition rate and index of susceptibility as indices, rated on a scale developed in the present study and on Dobie's modified scale. Linear regression models that can predict maize grain damage in relation to the duration of storage were developed and applied. The resistant varieties identified, particularly 2000 SYNEE-WSTR and TZBRELD3C5 with a very high degree of resistance, should be used singly, or preferably within an integrated pest management system, for the control of S. zeamais infestation in stored maize. Though increases in the physical properties of grain hardness, weight, length, and width increased varietal resistance, the bases of resistance were found to be increased chemical attributes of phenolic acid, trypsin inhibitor and crude fibre, while the bases of susceptibility were increased protein, starch, magnesium, calcium, sodium, phosphorus, manganese, iron, cobalt and zinc, with the role of potassium requiring further investigation. The characters that conferred resistance on the test varieties were found to be distributed in the pericarp and the endosperm of the grains. Increases in grain phenolic acid, crude fibre, and trypsin inhibitor adversely and significantly affected the bionomics of the weevil on further assessment. The flat side of a maize grain was significantly preferred by the weevil as the point of penetration.
Why the south area of the flattened side of a maize grain was significantly preferred by the weevil remains unknown, even though grain face type seemed to be a contributing factor in the study. The preference shown for the south area of the grain's flat side has implications for seed viability. The study identified antibiosis, preference, antixenosis, and host evasion as the mechanisms of maize post-harvest resistance to Sitophilus zeamais infestation.
Keywords: maize weevil, resistant, parameters, mechanisms, preference
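The linear regression models mentioned in the abstract relate percent grain damage to storage duration. A minimal least-squares sketch of that kind of model is shown below; the damage figures are hypothetical placeholders (only the assessment days, 33/56/90, come from the study), so the fitted coefficients are illustrative, not the authors' results:

```python
# Ordinary least-squares fit of % grain damage against storage duration.
# The damage values are invented; the x-values are the study's assessment
# points (33, 56 and 90 days post infestation).
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx        # damage = slope * days + intercept

days   = [33, 56, 90]                    # assessment points (days)
damage = [5.0, 12.0, 24.0]               # hypothetical % grain damage

slope, intercept = fit_line(days, damage)
print(round(intercept + slope * 70, 1))  # interpolated % damage at 70 days
```

A positive slope formalizes the expected pattern that damage accumulates with storage time; with real data, the fitted line would let store managers estimate damage at unobserved durations.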
Procedia PDF Downloads 306
344 Stroke Prevention in Patients with Atrial Fibrillation and Co-Morbid Physical and Mental Health Problems
Authors: Dina Farran, Mark Ashworth, Fiona Gaughran
Abstract:
Atrial fibrillation (AF), the most prevalent cardiac arrhythmia, is associated with an increased risk of stroke and contributes to heart failure and death. In this project, we aim to improve patient safety by screening for stroke risk among people with AF and co-morbid mental illness. To do so, we started by conducting a systematic review and meta-analysis of the prevalence, management, and outcomes of AF in people with Serious Mental Illness (SMI) versus the general population. We then evaluated oral anticoagulation (OAC) prescription trends in people with AF and co-morbid SMI at King's College Hospital. We also evaluated the association between mental illness severity and OAC prescription in eligible patients at the South London and Maudsley (SLaM) NHS Foundation Trust. Next, we implemented an electronic clinical decision support system (eCDSS), consisting of a visual prompt on patients' electronic Personal Health Records, to screen for AF-related stroke risk in three Mental Health of Older Adults wards at SLaM. Finally, we assessed the feasibility and acceptability of the eCDSS by qualitatively investigating clinicians' perspectives on its potential usefulness (pre-intervention) and their experiences and views regarding its impact on clinicians and patients (post-intervention). The systematic review showed that people with SMI had low reported rates of AF. AF patients with SMI were less likely to receive OAC than the general population. When receiving warfarin, people with SMI, particularly bipolar disorder, experienced poorer anticoagulation control than the general population. Meta-analysis showed that SMI was not significantly associated with an increased risk of stroke or major bleeding when adjusting for underlying risk factors. The main finding of the first observational study was that, among AF patients at high stroke risk, those with co-morbid SMI were less likely than those without SMI to be prescribed any OAC, particularly warfarin.
After 2019, there was no significant difference between the two groups. In the second observational study, patients with AF and co-morbid SMI were less likely to be prescribed any OAC compared to those with dementia, substance use disorders, or common mental disorders, adjusting for age, sex, stroke, and bleeding risk scores. Among AF patients with co-morbid SMI, warfarin was less likely to be prescribed to those with alcohol or substance dependency, serious self-injury, hallucinations or delusions, and impairment of activities of daily living. In the intervention, clinicians were asked to confirm the presence of AF, clinically assess stroke and bleeding risks, record risk scores in clinical notes, and refer patients at high risk of stroke to OAC clinics. Clinicians reported many potential benefits of the eCDSS, including improved clinical effectiveness, better identification of patients at risk, safer and more comprehensive care, consistency in decision making, and time savings. Identified potential risks included rigidity in decision-making, over-reliance, reduced critical thinking, false positive recommendations, annoyance, and increased workload. This study presents a unique opportunity to identify, using electronic health records, AF patients with mental illness who are at high risk of severe outcomes. This has the potential to improve health outcomes and, therefore, patients' quality of life.
Keywords: atrial fibrillation, stroke, mental health conditions, electronic clinical decision support systems
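The screening step described above hinges on a stroke risk score. The abstract does not name the instrument used, so the sketch below assumes the CHA2DS2-VASc score, the standard stroke-risk tool in AF, purely to make the scoring logic concrete:

```python
# CHA2DS2-VASc stroke risk score for AF (assumed instrument; the abstract
# does not specify which score the eCDSS computes). One point each for CHF,
# hypertension, diabetes, vascular disease, female sex, age 65-74; two
# points for age >= 75 or prior stroke/TIA.
def cha2ds2_vasc(age, female, chf, hypertension, stroke_tia,
                 vascular_disease, diabetes):
    score = 2 if age >= 75 else (1 if age >= 65 else 0)
    score += 1 if female else 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 2 if stroke_tia else 0
    score += 1 if vascular_disease else 0
    score += 1 if diabetes else 0
    return score

# 78-year-old woman with hypertension and a prior TIA: clearly high risk,
# the kind of patient the visual prompt would flag for referral.
print(cha2ds2_vasc(78, True, False, True, True, False, False))  # 6
```

In an eCDSS, a score like this (together with a bleeding score such as HAS-BLED) would drive the visual prompt and the referral recommendation.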
Procedia PDF Downloads 49
343 Addressing Supply Chain Data Risk with Data Security Assurance
Authors: Anna Fowler
Abstract:
When considering assets that may need protection, the mind turns to homes, cars, and investment funds. In most cases, the protection of those assets can be covered through security systems and insurance. Data is not the first thing that comes to mind as needing protection, even though data is at the core of most supply chain operations. It includes trade secrets, management of personally identifiable information (PII), and consumer data that can be used to enhance the overall experience. Data is a critical element of supply chain success and should be one of the most critical areas to protect. In the supply chain industry, there are two major misconceptions about protecting data: (i) "We do not manage or store confidential/personally identifiable information (PII)"; (ii) reliance on third-party vendor security. These misconceptions can significantly derail organizational efforts to adequately protect data across environments. The first misconception, "We do not manage or store confidential/personally identifiable information (PII)", is dangerous, as it implies the organization does not have proper data literacy. Enterprise employees will zero in on the aspect of PII while neglecting trade secret theft and the complete breakdown of information sharing. The second misconception compounds the first by forging an ideology that reliance on third-party vendor security will absolve the company from security risk. On the contrary, third-party risk has grown over the last two years and is one of the major causes of data security breaches. It is important to understand that a holistic approach to protecting data should be considered, and that this should not be reduced to purchasing a Data Loss Prevention (DLP) tool. A tool is not a solution.
To protect supply chain data, start by providing data literacy training to all employees, and by negotiating the security component of contracts with vendors to require data literacy training for the individuals/teams that may access company data. It is also important to understand the origin of the data and its movement, including risk identification. Ensure processes effectively incorporate data security principles. Evaluate and select DLP solutions to address specific concerns/use cases in conjunction with data visibility. These approaches are part of a broader solutions framework called Data Security Assurance (DSA). The DSA framework looks at all of the processes across the supply chain, including their corresponding architecture and workflows, employee data literacy, governance and controls, integration between third- and fourth-party vendors, DLP as a solution concept, and policies related to data residency. Within cloud environments, this framework is crucial for the supply chain industry to avoid regulatory implications and third-/fourth-party risk.
Keywords: security by design, data security architecture, cybersecurity framework, data security assurance
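The "data visibility" step that the abstract pairs with DLP can be made concrete with a toy scan for PII-like patterns before data leaves the organization. The patterns below are deliberately simplistic assumptions for illustration; real DLP products use far richer classifiers, context and policies:

```python
# Toy illustration of DLP-style data visibility: flag text that appears to
# contain PII. The two regexes (a loose email pattern and a US SSN shape)
# are illustrative only and would over/under-match in practice.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_for_pii(text):
    """Return the PII-like matches found in text, keyed by pattern name."""
    return {name: pat.findall(text)
            for name, pat in PII_PATTERNS.items() if pat.findall(text)}

doc = "Ship to jane.doe@example.com, SSN 123-45-6789 on file."
print(scan_for_pii(doc))
```

Even a crude scan like this makes the first misconception visible in practice: running it over shipment records quickly demonstrates that a supply chain organization does, in fact, handle PII.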
Procedia PDF Downloads 88
342 Effect of Antimony on Microorganisms in Aerobic and Anaerobic Environments
Authors: Barrera C. Monserrat, Sierra-Alvarez Reyes, Pat-Espadas Aurora, Moreno Andrade Ivan
Abstract:
Antimony is a toxic and carcinogenic metalloid considered a pollutant of priority interest by the United States Environmental Protection Agency. It is present in the environment in two oxidation states: antimonite (Sb(III)) and antimonate (Sb(V)). Sb(III) is toxic to several aquatic organisms, but the potential inhibitory effect of Sb species on microorganisms has not been extensively evaluated. The fate and possible toxic impact of antimony on aerobic and anaerobic wastewater treatment systems are unknown. For this reason, the objective of this study was to evaluate the microbial toxicity of Sb(V) and Sb(III) in aerobic and anaerobic environments. Sb(V) and Sb(III) were supplied as potassium hexahydroxoantimonate(V) and potassium antimony tartrate, respectively (Sigma-Aldrich). The toxic effect of both Sb species in anaerobic environments was evaluated on the methanogenic activity and the hydrogen production of microorganisms from a wastewater treatment bioreactor. For the methanogenic activity, batch experiments were carried out in 160 mL serological bottles; each bottle contained basal mineral medium (100 mL), inoculum (1.5 g VSS/L), acetate (2.56 g/L) as substrate, and variable concentrations of Sb(V) or Sb(III). Duplicate bioassays were incubated at 30 ± 2°C on an orbital shaker (105 rpm) in the dark. Methane production was monitored by gas chromatography. The hydrogen production inhibition tests were carried out in glass bottles with a working volume of 0.36 L, using glucose (50 g/L) as substrate, pretreated inoculum (5 g VSS/L), mineral medium, and varying concentrations of the two antimony species. The bottles were kept stirred at 35°C in an AMPTS II device that recorded hydrogen production. The toxicity of Sb to aerobic microorganisms (from the activated sludge of a wastewater treatment plant) was tested with a standardized Microtox toxicity test and by respirometry.
Results showed that Sb(III) is more toxic than Sb(V) to methanogenic microorganisms. Sb(V) caused a 50% decrease in methanogenic activity at 250 mg/L. In contrast, exposure to Sb(III) resulted in 50% inhibition at a concentration of only 11 mg/L, and almost complete inhibition (95%) at 25 mg/L. For hydrogen-producing microorganisms, Sb(III) and Sb(V) caused 50% inhibition at 12.6 mg/L and 87.7 mg/L, respectively. The results for aerobic environments showed that 500 mg/L of Sb(V) does not inhibit Aliivibrio fischeri (Microtox) activity or the specific oxygen uptake rate of activated sludge. Sb(III), in contrast, halved the respiration of the microorganisms at concentrations below 40 mg/L. The results obtained indicate that the toxicity of antimony depends on the speciation of this metalloid and that Sb(III) has a significantly higher inhibitory potential than Sb(V). It was also shown that anaerobic microorganisms can reduce Sb(V) to Sb(III). Acknowledgments: This work was funded in part by grants from the UA-CONACYT Binational Consortium for the Regional Scientific Development and Innovation (CAZMEX), the National Institute of Health (NIH ES-04940), and PAPIIT-DGAPA-UNAM (IN105220).
Keywords: aerobic inhibition, antimony reduction, hydrogen inhibition, methanogenic toxicity
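The 50%-inhibition concentrations quoted above are typically read off a dose-response curve between the two test concentrations that bracket 50% inhibition. A minimal log-linear interpolation of such an IC50 is sketched below; the inhibition data are invented (chosen only so the result lands near the reported ~11 mg/L for Sb(III)):

```python
# Log-linear interpolation of the concentration giving 50% inhibition
# from a small dose-response series. Concentrations and % inhibition
# values are hypothetical, not the study's measurements.
import math

def ic50(concs, inhibitions):
    """Interpolate the 50%-inhibition concentration on a log-dose axis."""
    pairs = list(zip(concs, inhibitions))
    for (c1, i1), (c2, i2) in zip(pairs, pairs[1:]):
        if i1 <= 50 <= i2:                      # bracketing interval found
            f = (50 - i1) / (i2 - i1)
            return 10 ** (math.log10(c1)
                          + f * (math.log10(c2) - math.log10(c1)))
    return None                                 # 50% never bracketed

concs = [5, 10, 25, 50]      # mg/L Sb(III), hypothetical series
inhib = [20, 45, 80, 95]     # % inhibition of methanogenic activity
print(round(ic50(concs, inhib), 1))
```

Interpolating on a log-concentration axis reflects the usual sigmoidal shape of dose-response curves; a full analysis would instead fit a four-parameter logistic model to all points.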
Procedia PDF Downloads 164
341 An Appraisal of Mitigation and Adaptation Measures under Paris Agreement 2015: Developing Nations' Pie
Authors: Olubisi Friday Oluduro
Abstract:
The Paris Agreement 2015, the result of negotiations under the United Nations Framework Convention on Climate Change (UNFCCC) after the expiration of the Kyoto Protocol, sets a long-term goal of limiting the increase in the global average temperature to well below 2 degrees Celsius above pre-industrial levels, and of pursuing efforts to limit this increase to 1.5 degrees Celsius. An advance on the erstwhile Kyoto Protocol, which set commitments for only a limited number of Parties to reduce their greenhouse gas (GHG) emissions, it includes the goals of increasing the ability to adapt to the adverse impacts of climate change and of making finance flows consistent with a pathway towards low GHG emissions. To achieve these goals, the Agreement requires all Parties to undertake efforts towards reaching global peaking of GHG emissions as soon as possible and towards achieving a balance between anthropogenic emissions by sources and removals by sinks in the second half of the twenty-first century. In addition to climate change mitigation, the Agreement aims at enhancing adaptive capacity, strengthening resilience and reducing vulnerability to climate change in different parts of the world. It acknowledges the importance of addressing loss and damage associated with the adverse impacts of climate change. The Agreement also contains comprehensive provisions on the support to be provided to developing countries, including finance, technology transfer and capacity building. To ensure that such support and actions are transparent, the Agreement contains a number of reporting provisions, requiring Parties to choose the efforts and measures that best suit them (Nationally Determined Contributions) and providing a mechanism for assessing progress and increasing global ambition over time through a regular global stocktake.
Despite its seemingly global reach, the Agreement has been fraught with manifold limitations that threaten its very capacity to produce any meaningful result. Some of these obvious limitations were the very causes of the failure of its predecessor, the Kyoto Protocol, such as the non-participation of the United States and the non-payment of funds into the various coffers for appropriate strategic purposes, among others. These have left the developing countries even more threatened, being more vulnerable than the developed countries that are really responsible for the climate change scourge. The paper seeks to examine the mitigation and adaptation measures under the Paris Agreement 2015, appraise the present situation since the Agreement was concluded, ascertain whether the developing countries have been better or worse off since its conclusion, and examine why and how, while projecting a way forward in the present circumstances. It concludes with recommendations towards ameliorating the situation.
Keywords: mitigation, adaptation, climate change, Paris agreement 2015, framework
Procedia PDF Downloads 156
340 Signal Transduction in a Myenteric Ganglion
Authors: I. M. Salama, R. N. Miftahof
Abstract:
A functional element of the myenteric nervous plexus is the morphologically distinct ganglion. Composed of sensory, inter- and motor neurons arranged via synapses into neuronal circuits, ganglia decipher and integrate spike-coded information within the plexus into regulatory output signals. The stability of signal processing in response to a wide range of internal/external perturbations depends on the plasticity of individual neurons. Any aberration in this inherent property may lead to instability, with the development of dynamic chaos, and can be manifested as pathological conditions such as intestinal dysrhythmia and irritable bowel syndrome. The aim of this study is to investigate patterns of signal transduction within a two-neuron chain, a ganglion, under normal physiological and structurally altered states. The ganglion contains a primary sensory (AH-type) and a motor (S-type) neuron linked through a cholinergic dendro-somatic synapse. The neurons have distinct electrophysiological characteristics, including the levels of their resting and threshold membrane potentials and their spiking activity. These result from the dynamics of ionic channels, namely Na+, K+, Ca++-activated K+, Ca++ and Cl-. Mechanical stretches of various intensities and frequencies applied at the receptive field of the AH-neuron generate a cascade of electrochemical events along the chain. At low frequencies, ν < 0.3 Hz, the neurons demonstrate strong connectivity and coherent firing. The AH-neuron shows phasic bursting with spike frequency adaptation, while the S-neuron responds with tonic bursts. At high frequencies, ν > 0.5 Hz, the patterns of electrical activity change to rebound and mixed-mode bursting, respectively, indicating a ganglionic loss of plasticity and adaptability. A simultaneous increase in neuronal conductivity for Na+, K+ and Ca++ ions results in tonic mixed spiking of the sensory neuron and class 2 excitability of the motor neuron.
Although signal transduction along the chain remains stable, synchrony in the firing pattern is not maintained, and the number of discharges of the S-type neuron is significantly reduced. A concomitant increase in Ca++-activated K+ conductivity and a decrease in K+ conductivity re-establish weak connectivity between the two neurons and convert their firing pattern to a bistable mode. It is thus demonstrated that neuronal plasticity and adaptability have a stabilizing effect on the dynamics of signal processing in the ganglion. Functional modulation of neuronal ion channel permeability, achieved in vivo and in vitro pharmacologically, can improve connectivity between neurons. These findings are consistent with experimental electrophysiological recordings from myenteric ganglia in intestinal dysrhythmia and suggest possible pathophysiological mechanisms.
Keywords: neuronal chain, signal transduction, plasticity, stability
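The two-neuron architecture above can be caricatured with a much simpler model than the authors' conductance-based one: a leaky integrate-and-fire "AH" neuron with a spike-triggered adaptation current (a crude stand-in for the Ca++-activated K+ conductance) drives an "S" neuron through a decaying excitatory synapse. All parameters are illustrative assumptions, not fitted to the paper's model:

```python
# Minimal sketch of the AH -> S chain: Euler-integrated leaky
# integrate-and-fire units in arbitrary units. Each AH spike increments an
# adaptation variable w (slowing subsequent firing, i.e. spike-frequency
# adaptation) and a synaptic drive syn onto the S neuron.
def simulate(i_stim=2.0, t_max=200.0, dt=0.1):
    v_ah = v_s = 0.0              # membrane potentials
    w = 0.0                       # adaptation current of the AH neuron
    syn = 0.0                     # synaptic drive onto the S neuron
    spikes_ah = spikes_s = 0
    steps = int(t_max / dt)
    for _ in range(steps):
        v_ah += dt * (-v_ah - w + i_stim)   # leak + adaptation + stimulus
        w    += dt * (-w / 30.0)            # adaptation decays slowly
        syn  += dt * (-syn / 5.0)           # synaptic current decays
        v_s  += dt * (-v_s + 4.0 * syn)     # S neuron driven by the synapse
        if v_ah >= 1.0:                     # AH spike: reset, adapt, excite S
            v_ah = 0.0
            w += 0.3
            syn += 1.0
            spikes_ah += 1
        if v_s >= 1.0:                      # S spike: reset
            v_s = 0.0
            spikes_s += 1
    return spikes_ah, spikes_s

print(simulate())          # strong drive: both neurons fire repeatedly
print(simulate(i_stim=1.05))  # near-threshold drive: adaptation throttles firing
```

Even this toy chain reproduces the qualitative point of the abstract: the adaptation variable stabilizes output firing, and the S neuron's activity depends entirely on how reliably the AH neuron's spikes are transmitted across the synapse.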
Procedia PDF Downloads 390
339 Design, Simulation and Construction of 2.4GHz Microstrip Patch Antenna for Improved Wi-Fi Reception
Authors: Gabriel Ugalahi, Dominic S. Nyitamen
Abstract:
This project seeks to improve Wi-Fi reception by utilizing the properties of directional microstrip patch antennas. Where Wi-Fi signals are dense, several sources transmitting on the same frequency band, and indeed the same channel, interfere with one another. The time it takes for a request to be received, resolved and responded to between a user and the resource provider increases considerably. By deploying a directional patch antenna with a narrow bandwidth, the range of frequencies received is reduced, which should help limit reception from unwanted sources. A rectangular microstrip patch antenna (RMPA) is designed to operate in the Industrial, Scientific and Medical (ISM) band (2.4GHz) commonly used in Wi-Fi network deployment. The dimensions of the antenna are calculated and used to generate a model in Advanced Design System (ADS), a microwave simulator. Simulation results are then analyzed and necessary optimization is carried out to further enhance the radiation quality so as to achieve the desired results. Impedance matching at 50Ω is obtained using the inset feed method. Final antenna dimensions obtained after simulation and optimization are then used for practical construction on an FR-4 double-sided copper-clad printed circuit board (PCB) through a chemical etching process using ferric chloride (FeCl₃). Simulation results show an RMPA operating at a centre frequency of 2.4GHz with a bandwidth of 40MHz. A voltage standing wave ratio (VSWR) of 1.0725 is recorded at a return loss of -29.112dB at the input port, showing an appreciable match to a 50Ω source. In addition, a gain of 3.23dBi and directivity of 6.4dBi are observed during far-field analysis. On deployment, signal reception from wireless devices is improved due to antenna gain.
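The dimension calculation and the VSWR figure above can be reproduced with the standard transmission-line-model design equations. This is a sketch, not the authors' ADS workflow; the FR-4 substrate values used below (εr = 4.4, h = 1.6 mm) are typical assumptions and are not stated in the abstract.

```python
import math

C = 3e8  # speed of light, m/s

def patch_dimensions(f0, eps_r, h):
    """Transmission-line-model equations for a rectangular patch (metres)."""
    w = C / (2 * f0) * math.sqrt(2 / (eps_r + 1))            # patch width
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / w) ** -0.5
    # Hammerstad fringing-field length extension
    dl = 0.412 * h * ((eps_eff + 0.3) * (w / h + 0.264)) / \
         ((eps_eff - 0.258) * (w / h + 0.8))
    l = C / (2 * f0 * math.sqrt(eps_eff)) - 2 * dl           # patch length
    return w, l

def vswr_from_return_loss(rl_db):
    """VSWR from the return-loss magnitude in dB."""
    gamma = 10 ** (-rl_db / 20)   # reflection coefficient magnitude
    return (1 + gamma) / (1 - gamma)

w, l = patch_dimensions(2.4e9, 4.4, 1.6e-3)   # roughly 38 mm x 29 mm
```

The reported figures are mutually consistent: a return loss of -29.112 dB gives `vswr_from_return_loss(29.112)` ≈ 1.073, matching the quoted VSWR of 1.0725.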
A test source with a received signal strength indication (RSSI) of -80dBm without the antenna installed on the receiver improved to an RSSI of -61dBm. In addition, the directional radiation property of the RMPA prioritizes signals by pointing in the direction of a preferred signal source, thus reducing interference from undesired signal sources. This was observed during testing, as rotation of the antenna on its axis resulted in signal gain in front of the patch and fading of signals away from the front.
Keywords: advanced design system (ADS), inset feed, received signal strength indicator (RSSI), rectangular microstrip patch antenna (RMPA), voltage standing wave ratio (VSWR), wireless fidelity (Wi-Fi)
Procedia PDF Downloads 220
338 (Re)Processing of Nd-Fe-B Permanent Magnets Using Electrochemical and Physical Approaches
Authors: Kristina Zuzek, Xuan Xu, Awais Ikram, Richard Sheridan, Allan Walton, Saso Sturm
Abstract:
Recycling of end-of-life REE-based Nd-Fe-B magnets is an important strategy for reducing the environmental dangers associated with rare-earth mining and overcoming the well-documented supply risks related to the REEs. However, challenges in their reprocessing remain. We report on the possibility of direct electrochemical recycling and reprocessing of Nd-Fe(B)-based magnets. In this investigation, we were first able to electrochemically leach the end-of-life NdFeB magnet and to electrodeposit Nd-Fe using a 1-ethyl-3-methyl imidazolium dicyanamide ([EMIM][DCA]) ionic liquid-based electrolyte. We observed that Nd(III) could not be reduced independently; however, it can be co-deposited on a substrate with the addition of Fe(II). Using the advanced TEM technique of electron energy-loss spectroscopy (EELS), it was shown that Nd(III) is reduced to Nd(0) during the electrodeposition process. This gave new insight into determining the Nd oxidation state, as X-ray photoelectron spectroscopy (XPS) has certain limitations: the binding energies of metallic Nd (Nd0) and neodymium oxide (Nd₂O₃) are very close, i.e., 980.5-981.5 eV and 981.7-982.3 eV, respectively, making it almost impossible to differentiate between the two states. These new insights into the electrodeposition process represent an important step towards efficient recycling of rare earths in metallic form at mild temperatures, thus providing an alternative to high-temperature molten-salt electrolysis and a step closer to depositing Nd-Fe-based magnetic materials. Further, we propose a new concept for recycling sintered Nd-Fe-B magnets by directly recovering the 2:14:1 matrix phase. Via an electrochemical etching method, we are able to recover pure individual 2:14:1 grains that can be re-used for new types of magnet production.
In the frame of physical reprocessing, we have successfully synthesized new magnets from hydrogen (HDDR)-recycled stock with the contemporary technique of pulsed electric current sintering (PECS). The optimal PECS conditions yielded fully dense Nd-Fe-B magnets with a coercivity Hc = 1060 kA/m, which was boosted to 1160 kA/m after post-PECS thermal treatment. Br and Hc were improved further: increased applied pressures of 100-150 MPa resulted in Br = 1.01 T. We showed that by fine-tuning the PECS and post-annealing it is possible to revitalize Nd-Fe-B end-of-life magnets. By applying advanced TEM, i.e. atomic-scale Z-contrast STEM combined with EDXS and EELS, the resulting magnetic properties were critically assessed against various types of structural and compositional discontinuities down to the atomic scale, which we believe control the microstructure evolution during the PECS processing route.
Keywords: electrochemistry, Nd-Fe-B, pulsed electric current sintering, recycling, reprocessing
Procedia PDF Downloads 155
337 Institutional and Economic Determinants of Foreign Direct Investment: Comparative Analysis of Three Clusters of Countries
Authors: Ismatilla Mardanov
Abstract:
There are three types of countries, the first of which is willing to attract foreign direct investment (FDI) in enormous amounts and does whatever it takes to make this happen; therefore, FDI pours into such countries. In the second cluster of countries, even if the country is suffering tremendously from a shortage of investments, the governments are hesitant to attract investments because they are in the hands of local oligarchs/cartels; therefore, FDI inflows are moderate to low in such countries. The third type is countries whose companies prefer investing in the most efficient locations globally and are hesitant to invest in the homeland. Sorting countries into such clusters, the present study examines the essential institutional and economic factors that make these countries different. Past literature has discussed various determinants of FDI in all kinds of countries; however, it did not classify countries based on government motivation, institutional setup, and economic factors. A specific approach to each target country is vital for corporate foreign direct investment risk analysis and decisions. The research questions are: 1. What specific institutional and economic factors paint the pictures of the three clusters? 2. What specific institutional and economic factors are determinants of FDI? 3. Which of the determinants are endogenous and which are exogenous variables? 4. How can institutional, economic, and political variables impact corporate investment decisions? Hypothesis 1: In the first type, country institutions and economic factors will be favorable for FDI. Hypothesis 2: In the second type, even if country economic factors favor FDI, institutions will not. Hypothesis 3: In the third type, even if country institutions favor FDI, economic factors will not favor domestic investments; therefore, FDI outflows occur in large amounts. Methods: Data come from open sources of the World Bank, the Fraser Institute, the Heritage Foundation, and other reliable sources.
The dependent variable is FDI inflows. The independent variables are institutions (economic and political freedom indices) and economic factors (natural, material, and labor resources, government consumption, infrastructure, minimum wage, education, unemployment, tax rates, consumer price index, inflation, and others), whose endogeneity or exogeneity is tested in the instrumental variable estimation. Political rights and civil liberties are used as instrumental variables. Results indicate that in the first type, both country institutions and economic factors, specifically labor and logistics/infrastructure/energy intensity, are favorable for potential investors. In the second category of countries, the risk of loss of assets is very high due to governments hijacked by local oligarchs/cartels/special interest groups. In the third category of countries, the local economic factors are unfavorable for domestic investment even if the institutions are well acceptable. Cluster analysis and instrumental variable estimation were used to reveal cause-effect patterns in each of the clusters.
Keywords: foreign direct investment, economy, institutions, instrumental variable estimation
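The instrumental-variable step can be sketched as a manual two-stage least squares on synthetic data. This is illustrative only: the data below are simulated, not the paper's World Bank / Fraser Institute / Heritage Foundation indicators, and the coefficients are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
# Synthetic setup: political rights (z) instrument the endogenous
# institutions index; an unobserved confounder (u) biases plain OLS.
z = rng.normal(size=n)
u = rng.normal(size=n)
inst = 0.8 * z + 0.5 * u + rng.normal(size=n)    # institutions index
fdi = 2.0 * inst + 1.5 * u + rng.normal(size=n)  # FDI inflows

def two_sls(y, x, z):
    """Manual two-stage least squares with a single instrument."""
    Z = np.column_stack([np.ones_like(z), z])
    # Stage 1: project the endogenous regressor on the instrument
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    Xh = np.column_stack([np.ones_like(x_hat), x_hat])
    # Stage 2: regress the outcome on the fitted values
    return np.linalg.lstsq(Xh, y, rcond=None)[0]

X = np.column_stack([np.ones(n), inst])
beta_ols = np.linalg.lstsq(X, fdi, rcond=None)[0]  # biased by the confounder
beta_iv = two_sls(fdi, inst, z)                    # recovers roughly 2.0
```

OLS overstates the effect of institutions because the confounder raises both institutions and FDI; instrumenting with political rights removes that bias, which is the logic behind the paper's endogeneity tests.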
Procedia PDF Downloads 159
336 Green Building Risks: Limits on Environmental and Health Quality Metrics for Contractors
Authors: Erica Cochran Hameen, Bobuchi Ken-Opurum, Mounica Guturu
Abstract:
The United States (U.S.) populace spends the majority of its time indoors, in spaces where building codes and voluntary sustainability standards provide clear Indoor Environmental Quality (IEQ) metrics. The existing sustainable building standards and codes are aimed at improving IEQ and the health of occupants, and at reducing the negative impacts of buildings on the environment. While they address the post-occupancy stage of buildings, there are fewer standards for the pre-occupancy stage, thereby placing a large labor population in environments that are much less regulated. Construction personnel are often exposed to a variety of uncomfortable and unhealthy elements while on construction sites, primarily thermal, visual, acoustic, and air quality related. Construction site power generators, equipment, and machinery generate noise on average 9 decibels (dBA) above U.S. OSHA regulations, creating uncomfortable noise levels. Research has shown that frequent exposure to high noise levels leads to chronic physiological issues and increases noise-induced stress, yet beyond OSHA no other metric focuses directly on the impacts of noise on contractors' well-being. Research has also associated natural light with higher productivity and attention span, and lower incidence of fatigue in construction workers. However, daylight is not always available, as construction workers often perform tasks in cramped spaces, dark areas, or at nighttime. In these instances, the use of artificial light is necessary, yet lighting standards for use during lengthy tasks and arduous activities are not specified. Additionally, ambient air, contaminants, and material off-gassing expelled at construction sites are among the causes of serious health effects in construction workers. Coupled with extreme hot and cold temperatures in different climate zones, health and productivity can be seriously compromised.
This research evaluates the impact of existing green building metrics on construction and risk management by analyzing two codes and nine standards, including LEED, WELL, and BREEAM. These metrics were chosen based on their relevance to the U.S. construction industry. This research determined that less than 20% of the sustainability content within the standards and codes (texts) relates to the pre-occupancy building sector. The research also investigated the impact of construction personnel's health and well-being on construction management through two surveys of project managers' and on-site contractors' perceptions of their work environment and its effect on productivity. To fully understand the risks of limited Environmental and Health Quality (EHQ) metrics for contractors, this research evaluated the connection between EHQ factors, such as inefficient lighting, and construction workers, and investigated the correlation between various site coping strategies for comfort and productivity. Outcomes from this research are three-pronged. The first fosters a discussion about the existing conditions of EHQ elements, i.e. thermal, lighting, ergonomic, acoustic, and air quality, on the construction labor force. The second identifies gaps in sustainability standards and codes during the pre-occupancy stage of building construction, from ground-breaking to substantial completion. The third identifies opportunities for improvements and mitigation strategies to improve EHQ, such as increased monitoring of effects on productivity and health of contractors and increased inclusion of the pre-occupancy stage in green building standards.
Keywords: construction contractors, health and well-being, environmental quality, risk management
Procedia PDF Downloads 131
335 Sentinel-2 Based Burn Area Severity Assessment Tool in Google Earth Engine
Authors: D. Madhushanka, Y. Liu, H. C. Fernando
Abstract:
Fires are one of the foremost factors of land surface disturbance in diverse ecosystems, causing soil erosion, land-cover changes and atmospheric effects that affect people's lives and properties. Generally, the severity of a fire is quantified with the Normalized Burn Ratio (NBR) index. This is usually performed manually by comparing pre-fire and post-fire images; the bitemporal difference of the preprocessed satellite images then gives the dNBR. The burnt area is classified as either unburnt (dNBR < 0.1) or burnt (dNBR >= 0.1). Furthermore, Wildfire Severity Assessment (WSA) classifies burnt and unburnt areas using classification levels proposed by the USGS, comprising seven classes. This procedure generates a burn severity report for an area chosen manually by the user. This study was carried out with the objective of producing an automated tool for the above-mentioned process, namely the World Wildfire Severity Assessment Tool (WWSAT). It is implemented in Google Earth Engine (GEE), a free cloud-computing platform for satellite data processing, with several data catalogs at different resolutions (notably Landsat, Sentinel-2, and MODIS) and planetary-scale analysis capabilities. Sentinel-2 MSI was chosen for burnt area severity mapping as a medium spatial resolution sensor (10-20 m). The tool uses machine learning classification techniques to identify burnt areas using NBR and to classify their severity over the user-selected extent and period automatically. Cloud coverage is one of the biggest concerns when fire severity mapping is performed. In WWSAT, based on GEE, we present a fully automatic workflow to aggregate cloud-free Sentinel-2 images for both pre-fire and post-fire image compositing. The parallel processing capabilities and preloaded geospatial datasets of GEE facilitated the production of this tool. The tool includes a Graphical User Interface (GUI) to make it user-friendly.
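The NBR / dNBR chain the tool automates can be sketched in plain Python. The actual tool operates on `ee.Image` objects in GEE; the band roles assumed here follow Sentinel-2 conventions (B8 = NIR, B12 = SWIR), and the class boundaries are the standard USGS dNBR thresholds, which include the regrowth classes below the unburnt range mentioned in the abstract.

```python
def nbr(nir, swir):
    """Normalized Burn Ratio for one pixel."""
    return (nir - swir) / (nir + swir)

def dnbr(pre_nir, pre_swir, post_nir, post_swir):
    """Bitemporal difference: pre-fire NBR minus post-fire NBR."""
    return nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)

def severity(d):
    """Seven USGS burn-severity classes from a dNBR value."""
    if d < -0.25:
        return "enhanced regrowth, high"
    if d < -0.1:
        return "enhanced regrowth, low"
    if d < 0.1:
        return "unburnt"
    if d < 0.27:
        return "low severity"
    if d < 0.44:
        return "moderate-low severity"
    if d < 0.66:
        return "moderate-high severity"
    return "high severity"
```

For example, a pixel with pre-fire reflectances (NIR 0.6, SWIR 0.2) and post-fire reflectances (NIR 0.3, SWIR 0.4) gives dNBR ≈ 0.64, a moderate-high severity burn.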
The advantage of this tool is the ability to obtain burn area severity over large extents and extended temporal periods. Two case studies were carried out to demonstrate its performance. The Blue Mountains national park forest, affected by the Australian fire season between 2019 and 2020, is used to describe the workflow of the WWSAT. More than 7,809 km2 of burnt area was detected at this site using Sentinel-2 data, with an error below 6.5% compared with the area detected in the field. Furthermore, 86.77% of the detected area was recognized as fully burnt out, comprising high severity (17.29%), moderate-high severity (19.63%), moderate-low severity (22.35%), and low severity (27.51%). The Arapaho and Roosevelt National Forests, Colorado, USA, affected by the Cameron Peak fire in 2020, were chosen for the second case study. It was found that around 983 km2 had burned out, of which high severity accounted for 2.73%, moderate-high severity 1.57%, moderate-low severity 1.18%, and low severity 5.45%. These spots can also be detected through visual inspection, made possible by the cloud-free images generated by WWSAT. This tool is cost-effective in calculating the burnt area, since satellite images are free and the cost of field surveys is avoided.
Keywords: burnt area, burnt severity, fires, google earth engine (GEE), sentinel-2
Procedia PDF Downloads 233
334 Investigation Studies of WNbMoVTa and WNbMoVTaCr₀.₅Al Refractory High Entropy Alloys as Plasma-Facing Materials
Authors: Burçak Boztemur, Yue Xu, Laima Luo, M. Lütfi Öveçoğlu, Duygu Ağaoğulları
Abstract:
Tungsten (W) is used chiefly as a plasma-facing material. However, it has some problems, such as brittleness after plasma exposure. Refractory high-entropy alloys (RHEAs) offer a new opportunity to address this deficiency. So, the shielding behavior of the WNbMoVTa and WNbMoVTaCr₀.₅Al compositions against He⁺ irradiation was examined in this study. The mechanical and irradiation properties of the WNbMoVTa base composition were investigated with the addition of the Al and Cr elements. Mechanical alloying (MA) for 6 hours was applied to obtain RHEA powders. According to X-ray diffraction (XRD), the body-centered cubic (BCC) phase and an NbTa phase, with a small amount of WC impurity from the vials and balls, were determined after 6 h of MA. The RHEA powders were then consolidated by spark plasma sintering (SPS) (1500 ºC, 30 MPa, and 10 min). After SPS, (Nb,Ta)C and W₂C₀.₈₅ phases were obtained through the decomposition of WC and of the stearic acid added during MA, based on XRD results. The BCC phase was obtained for both samples. While an Al₂O₃ phase of small intensity was seen for the WNbMoVTaCr₀.₅Al sample, a Ta₂VO₆ phase was determined for the base sample. These phases were observed as three different regions under scanning electron microscopy (SEM). All elements were distributed homogeneously in the white region, as measured by an electron probe micro-analyzer (EPMA) coupled with a wavelength dispersive spectroscope (WDS). The grey region of the WNbMoVTa sample was rich in Ta, V, and O, whereas the amounts of Al and O were higher in the grey region of the WNbMoVTaCr₀.₅Al sample. High amounts of Nb, Ta, and C were determined for both samples. Archimedes' densities, measured in alcohol media, were close to the theoretical densities of the RHEAs. These values were important for the microhardness and irradiation resistance of the compositions.
While the Vickers microhardness of the WNbMoVTa sample was measured as ~11 GPa, this value increased to nearly 13 GPa for the WNbMoVTaCr₀.₅Al sample. These values were consistent with the wear behavior: the wear volume loss decreased from 1.25×10⁻⁴ to 0.16×10⁻⁴ mm³ with the addition of Al and Cr to WNbMoVTa. He⁺ irradiation was conducted on the samples to observe surface damage. After irradiation, the XRD patterns shifted to the left because of defects and dislocations: He⁺ ions implanted beneath the surface caused lattice expansion. The peak shift of the WNbMoVTaCr₀.₅Al sample was smaller than that of the WNbMoVTa base sample, indicating less damage. A small amount of fuzz was observed on the base sample; this structure was removed and transformed into a wavy structure with the addition of the Cr and Al elements. Deformation hardening also occurred after irradiation, and a lower degree of hardening was obtained for the WNbMoVTaCr₀.₅Al sample, based on the change in microhardness values. Overall, surface deformation was reduced in the WNbMoVTaCr₀.₅Al sample.
Keywords: refractory high entropy alloy, microhardness, wear resistance, He⁺ irradiation
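The leftward XRD peak shift can be translated into a lattice-expansion estimate through Bragg's law. The Cu Kα wavelength and the example peak positions below are assumptions for illustration; the abstract does not report the actual diffractometer settings or shift magnitudes.

```python
import math

WAVELENGTH = 1.5406  # Cu K-alpha, angstroms (assumed, not stated in the abstract)

def d_spacing(two_theta_deg):
    """Bragg's law with n = 1: d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2)
    return WAVELENGTH / (2 * math.sin(theta))

def lattice_expansion(two_theta_before, two_theta_after):
    """Fractional change in interplanar spacing implied by a peak shift.
    A shift to lower angles (leftward) gives a positive expansion."""
    d0 = d_spacing(two_theta_before)
    d1 = d_spacing(two_theta_after)
    return (d1 - d0) / d0

# Hypothetical BCC peak shifting left by 0.1 degrees 2-theta after He+ irradiation
strain = lattice_expansion(40.3, 40.2)   # ~0.2% expansion
```

A smaller leftward shift, as observed for the WNbMoVTaCr₀.₅Al sample, corresponds to a smaller implied lattice strain and hence less He⁺-induced damage.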
Procedia PDF Downloads 64
333 Deforestation, Vulnerability and Adaptation Strategies of Rural Farmers: The Case of Central Rift Valley Region of Ethiopia
Authors: Dembel Bonta Gebeyehu
Abstract:
In the study area, the impacts of deforestation on environmental degradation and on the livelihoods of farmers manifest in different ways. Farmers are especially vulnerable as they depend on rain-fed agriculture and nearby natural forests. On the other hand, after planting seedlings, the disposal and management of the plastic covers is poorly practiced and administered in the country in general and in the study area in particular. If this situation continues, the plastic waste will further accentuate land degradation. Moreover, comprehensive empirical studies on these issues are absent; the results of this study could therefore inform intervention schemes and contribute to the existing knowledge. The study employed a qualitative approach based on intensive fieldwork data collected via various tools, namely open-ended interviews, focus group discussions, key-informant interviews and non-participant observation. The collected data were duly transcribed and later categorized under different labels based on pre-determined themes for further analysis. The major causes of deforestation were the expansion of agricultural land, poor administration, population growth, and the absence of conservation methods. The farmers are vulnerable to soil erosion and soil infertility culminating in low agricultural production; loss of grazing land and decline of livestock production; climate change; and deterioration of social capital. Their adaptation and coping strategies include natural conservation measures, diversification of income sources, the safety-net program, and migration. Due to participatory natural resource conservation measures, soil erosion has decreased, and protected indigenous woodlands have started to regenerate. These measures brought about attitudinal change among farmers. The existing forestation program has many flaws; in particular, after planting seedlings, there is no mechanism for plastic waste disposal and management.
Organizational challenges among the mandated offices were also found. In the study area, deforestation is aggravated by a number of factors, which have made the farmers vulnerable. The current forestation programs are not well planned, implemented, or coordinated. Sustainable and efficient methods for collecting and reusing seedling plastic covers should be devised. This is possible through creating awareness and organizing micro and small enterprises to reuse, and generate income from, the collected plastic.
Keywords: land-cover and land-dynamics, vulnerability, adaptation strategy, mitigation strategies, sustainable plastic waste management
Procedia PDF Downloads 387
332 Celebrity Culture and Social Role of Celebrities in Türkiye during the 1990s: The Case of Türkiye Newspaper, Radio, Television (TGRT) Channel
Authors: Yelda Yenel, Orkut Acele
Abstract:
In a media-saturated world, celebrities have become ubiquitous figures, encountered both in public spaces and within the privacy of our homes, seamlessly integrating into daily life. From Alexander the Great to contemporary media personalities, the image of the celebrity has persisted throughout history, manifesting in various forms and contexts. Over time, as the relationship between society and the market evolved, so too did the roles and behaviors of celebrities. These transformations offer insights into the cultural climate, revealing shifts in habits and worldviews. In Türkiye, the emergence of private television channels brought an influx of celebrities into everyday life, making them a pervasive part of daily routines. To understand modern celebrity culture, it is essential to examine the ideological functions of media within political, economic, and social contexts. Within this framework, celebrities serve as both reflections and creators of cultural values and, at times, act as intermediaries, offering insights into the society of their era. Starting its broadcasting life in 1992 with religious films and religious conversation, the Türkiye Newspaper, Radio, Television channel (TGRT) later changed its appearance, slogan, and the celebrities it featured in response to the political atmosphere. Celebrities played a critical role in the transformation from the existing slogan 'Peace has come to the screen' to 'Watch and see what will happen'. Celebrities hold significant roles in society, and their images are produced and circulated by various actors, including media organizations and public relations teams. Understanding these dynamics is crucial for analyzing their influence and impact. This study aims to explore Turkish society in the 1990s, focusing on TGRT and its visual and discursive characteristics regarding celebrity figures such as Seda Sayan.
The first section examines the historical development of celebrity culture and its transformations, guided by the conceptual framework of celebrity studies. The complex and interconnected image of the celebrity, as introduced by post-structuralist approaches, plays a fundamental role in making sense of existing relationships. This section traces the existence and functions of celebrities from antiquity to the present day. The second section explores the economic, social, and cultural contexts of 1990s Türkiye, focusing on the media landscape and the visibility that became prominent in the neoliberal era following the 1980s. This section also discusses the political factors underlying TGRT's transformation, such as the 1997 military memorandum. The third section analyzes TGRT as a case study, focusing on its significance as an Islamic television channel and the shifts in its public image, categorized into two distinct periods. The channel's programming, which aligned with Islamic teachings, and the celebrities who featured prominently during these periods became the public face of both TGRT and the broader society. In particular, the transition to a more 'secular' format during TGRT's second phase is analyzed, focusing on changes in celebrity attire and program formats. This study reveals that celebrities are used as indicators of ideology, benefiting from this instrumentalization by enhancing their own fame while reflecting the prevailing cultural hegemony in society.
Keywords: celebrity culture, media, neoliberalism, TGRT
Procedia PDF Downloads 28
331 Emotional State and Cognitive Workload during a Flight Simulation: Heart Rate Study
Authors: Damien Mouratille, Antonio R. Hidalgo-Muñoz, Nadine Matton, Yves Rouillard, Mickael Causse, Radouane El Yagoubi
Abstract:
Background: Monitoring the physiological activity related to mental workload (MW) in pilots will be useful for improving aviation safety by anticipating human performance degradation. The electrocardiogram (ECG) can reveal MW fluctuations due to cognitive workload and/or emotional state, since this measure exhibits autonomic nervous system modulations. Arguably, heart rate (HR) is one of its most intuitive and reliable parameters. It is particularly interesting to analyze the interaction between cognitive requirements and emotion in ecological settings such as a flight simulator. This study aims to explore, by means of HR, the relation between cognitive demands and emotional activation. Presumably, the effects of cognitive and emotional overload are not necessarily cumulative. Methodology: Eight healthy volunteers in possession of a Private Pilot License were recruited (male; 20.8±3.2 years). The ECG signal was recorded throughout the experiment by placing two electrodes on the clavicle and left pectoral of the participants. The HR was computed within 4-minute segments. The NASA-TLX and Big Five inventories were used to assess subjective workload and to consider the influence of individual personality differences. The experiment consisted of completing two dual-tasks of approximately 30 minutes' duration in an AL50 flight simulator. Each dual-task required the simultaneous accomplishment of a pre-established flight plan and an additional task based on target stimulus discrimination inserted between Air Traffic Control instructions. This secondary task allowed us to vary the cognitive workload from low (LC) to high (HC) levels by combining auditory and visual numerical stimuli to which responses had to meet specific criteria. Regarding the emotional condition, the two dual-tasks were designed to ensure analogous difficulty in terms of the cognitive demands solicited. The former was performed by the pilot alone, i.e. the Low Arousal (LA) condition.
In contrast, the latter generated high arousal (HA), since the pilot was supervised by two evaluators, filmed, and involved in a mock competition with the rest of the participants. Results: Performance on the secondary task showed significantly faster reaction times (RT) for the HA compared to the LA condition (p=.003). Moreover, faster RTs were found for the LC compared to the HC condition (p < .001). No interaction was found. Concerning the HR measure, despite the lack of main effects, an interaction between emotion and cognition was evidenced (p=.028). Post hoc analysis showed a lower HR for the HA compared to the LA condition only under LC (p=.049). Conclusion: The control of an aircraft is a very complex task involving strong cognitive demands and depends on the emotional state of pilots. According to the behavioral data, the experimental setup successfully generated different emotional and cognitive levels. As suggested by the interaction found in the HR measure, these two factors do not seem to have a cumulative impact on the sympathetic nervous system. Apparently, low cognitive workload makes pilots more sensitive to emotional variations. These results hint at the independence of data processing and emotional regulation. Further physiological data are necessary to confirm and disentangle this relation. This procedure may be useful for objectively monitoring pilots' mental workload.
Keywords: cognitive demands, emotion, flight simulator, heart rate, mental workload
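The per-segment HR computation described in the methodology can be sketched from R-peak timestamps. This is a generic illustration, not the authors' processing pipeline; it assumes R-peaks have already been detected in the ECG.

```python
def segment_hr(r_peaks, window=240.0):
    """Mean heart rate (bpm) per consecutive window (default 4 minutes),
    from ECG R-peak times in seconds: HR = 60 / mean RR interval."""
    intervals = {}
    for prev, cur in zip(r_peaks, r_peaks[1:]):
        # assign each RR interval to the window containing its ending peak
        intervals.setdefault(int(cur // window), []).append(cur - prev)
    return {w: 60.0 / (sum(rr) / len(rr)) for w, rr in sorted(intervals.items())}

# R-peaks every 0.8 s correspond to 75 bpm in every complete window
hr = segment_hr([0.8 * i for i in range(1, 601)])
```

Real recordings would additionally require artifact rejection of ectopic beats before averaging, which this sketch omits.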
Procedia PDF Downloads 273
330 Diagenesis of the Permian Ecca Sandstones and Mudstones in the Eastern Cape Province, South Africa: Implications for the Shale Gas Potential of the Karoo Basin
Authors: Temitope L. Baiyegunhi, Christopher Baiyegunhi, Kuiwu Liu, Oswald Gwavava
Abstract:
Diagenesis is the most important factor affecting reservoir properties. Although published data give a vast amount of information on the geology, sedimentology and lithostratigraphy of the Ecca Group in the Karoo Basin of South Africa, little is known of the diagenesis of the potentially viable shales and sandstones of the Ecca Group. This study aims to provide a general account of the diagenesis of the sandstones and mudstones of the Ecca Group. Twenty-five diagenetic textures and structures are identified and grouped into three regimes or stages: eogenesis, mesogenesis and telogenesis. Clay minerals are the most common cementing materials in the Ecca sandstones and mudstones. Smectite, kaolinite and illite are the major clay minerals, acting as pore-lining rims and pore-filling cement. Most of the clay minerals and detrital grains were seriously attacked and replaced by calcite. Calcite precipitated locally in pore spaces and partly or completely replaced feldspar and quartz grains, mostly at their margins. Precipitation of cements, formation of pyrite and authigenic minerals, and slight lithification occurred during eogenesis. This regime was followed by mesogenesis, which brought about an increase in the tightness of grain packing, loss of pore space and thinning of beds due to the weight of overlying sediments and selective dissolution of framework grains. Compaction, mineral overgrowths, mineral replacement, clay-mineral authigenesis, deformation and pressure solution structures occurred during mesogenesis. During telogenesis, rocks were uplifted, weathered and unroofed by erosion, resulting in additional grain fracturing, decementation and oxidation of iron-rich volcanic fragments and ferromagnesian minerals. The rocks of the Ecca Group were subjected to moderate to intense mechanical and chemical compaction during their progressive burial.
Intergranular pores, matrix micropores, and secondary intragranular, dissolution and fractured pores were observed. The presence of fractured and dissolution pores tends to enhance reservoir quality. However, the isolated nature of the pores makes them unfavourable producers of hydrocarbons, which at best would require stimulation. Understanding the spatial and temporal distribution of diagenetic processes in these rocks will allow the development of predictive models of their quality, which may contribute to the reduction of risks involved in their exploration.
Keywords: diagenesis, reservoir quality, Ecca Group, Karoo Supergroup
Procedia PDF Downloads 147
329 Environmental Threats and Great Barrier Reef: A Vulnerability Assessment of World’s Best Tropical Marine Ecosystems
Authors: Ravi Kant Anand, Nikkey Keshri
Abstract:
The Great Barrier Reef of Australia is known for its beautiful and ecologically important landscapes and seascapes. The site was inscribed on the World Heritage List in 1981 and is internationally popular for tourism, recreational activities and fishing. However, major environmental hazards such as climate change, pollution, overfishing and shipping are degrading this marine ecosystem. Climate change affects the Great Barrier Reef directly through sea-level rise, ocean acidification, increasing temperatures, uneven precipitation, changes in El Niño patterns, and more frequent cyclones and storms. Pollution is the second biggest factor destroying the coral reef ecosystem, including the overuse of pesticides and chemicals, eutrophication, pollution from mining, sediment runoff, loss of coastal wetlands, and oil spills. Coral bleaching is the most severe consequence of these environmental threats. Acidification of ocean water reduces the formation of corals' calcium carbonate skeletons. The floral ecosystem of the ocean (including seagrasses and mangroves) is a key food source for fishes and other fauna, but powerful waves, extreme temperatures, destructive storms and river runoff threaten it. If one natural system is under threat, the whole marine food web is affected, from algae to whales. Poisoning of marine water by different pollutants affects coral production, fish breeding and overall marine health, and increases the death of fishes and corals. As a World Heritage site, the tourism sector is directly affected, increasing unemployment; the fishing sector is affected as well. Fluctuations in ocean temperature affect coral production, as corals require sheltered locations, adequate sunlight and temperatures of up to 21 degrees centigrade.
Storms, El Niño events, and rising temperatures and sea levels are driving a continuous reduction in coral production. If the environmental problems of the Great Barrier Reef are not addressed, this renowned ecological treasure, with its coral reefs, pelagic environments, algal meadows, coasts and estuaries, mangrove forests, seagrasses, fish species and coral gardens, and one of the world's best tourist spots, will be lost in the coming years. This research will focus on the different environmental threats, their socio-economic impacts and different conservation measures.
Keywords: climate change, overfishing, acidification, eutrophication
Procedia PDF Downloads 373
328 Investigating Sediment-Bound Chemical Transport in an Eastern Mediterranean Perennial Stream to Identify Priority Pollution Sources on a Catchment Scale
Authors: Felicia Orah Rein Moshe
Abstract:
Soil erosion has become a priority global concern, impairing water quality and degrading ecosystem services. In Mediterranean climates, following a long dry period, the onset of rain occurs when agricultural soils are often bare and most vulnerable to erosion. Early storms transport sediments and sediment-bound pollutants into streams, along with dissolved chemicals. This results in loss of valuable topsoil, water quality degradation, and potentially expensive dredged-material disposal costs. Information on the provenance of fine sediment and priority sources of adsorbed pollutants represents a critical need for developing effective control strategies aimed at source reduction. Modifying sediment traps designed for marine systems, this study tested a cost-effective method to collect suspended sediments on a catchment scale to characterize stream water quality during first-flush storm events in a flashy Eastern Mediterranean coastal perennial stream. This study investigated the Kishon Basin, deploying sediment traps in 23 locations, including 4 in the mainstream and one downstream in each of 19 tributaries, enabling the characterization of sediment as a vehicle for transporting chemicals. Further, it enabled direct comparison of sediment-bound pollutants transported during the first-flush winter storms of 2020 from each of 19 tributaries, allowing subsequent ecotoxicity ranking. Sediment samples were successfully captured in 22 locations. Pesticides, pharmaceuticals, nutrients, and metal concentrations were quantified, identifying a total of 50 pesticides, 15 pharmaceuticals, and 22 metals, with 16 pesticides and 3 pharmaceuticals found in all 23 locations, demonstrating the importance of this transport pathway. Heavy metals were detected in only one tributary, identifying an important watershed pollution source with immediate potential influence on long-term dredging costs. 
Simultaneous sediment sampling at first flush storms enabled clear identification of priority tributaries and their chemical contributions, advancing a new national watershed monitoring approach, facilitating strategic plan development based on source reduction, and advancing the goal of improving the farm-stream interface, conserving soil resources, and protecting water quality.
Keywords: adsorbed pollution, dredged material, heavy metals, suspended sediment, water quality monitoring
Procedia PDF Downloads 106
327 Study the Effect of Liquefaction on Buried Pipelines during Earthquakes
Authors: Mohsen Hababalahi, Morteza Bastami
Abstract:
Buried pipeline damage correlations are a critical part of loss estimation procedures applied to lifelines for future earthquakes. The vulnerability of buried pipelines to earthquakes and liquefaction has been observed in several previous earthquakes, and there are many comprehensive reports on such events. One of the main causes of damage to buried pipelines during earthquakes is liquefaction. The necessary conditions for this phenomenon are loose sandy soil, saturation of the soil layer, and sufficient earthquake intensity. Because pipelines are structurally very different from other structures (being long and of light mass), comparison of results from previous earthquakes shows that the liquefaction hazard to buried pipelines is low unless governing factors such as earthquake intensity and loose, non-dense soil conditions are severe. Recent liquefaction research on buried pipelines includes experimental and theoretical studies as well as damage investigations from actual earthquakes. The damage investigations reveal that the damage ratio of pipelines (number/km) is much larger in liquefied ground than in shaken ground without liquefaction, according to damage statistics from past severe earthquakes, and that damage to joints and to pipelines connected to manholes is notable. The purpose of this research is the numerical study of buried pipelines under the effect of liquefaction, through a case study of the 2013 Dashti (Iran) earthquake. The water supply and electrical distribution systems of this township were interrupted during the earthquake, and water transmission pipelines were severely damaged due to liquefaction. The model consists of a polyethylene pipeline 100 meters in length and 0.8 meters in diameter, covered by light sandy soil at a burial depth of 2.5 meters from the surface.
Since the finite element method has been used relatively successfully to solve geotechnical problems, it was adopted for the numerical analysis. Evaluating this case requires geotechnical information, classification of earthquake levels, determination of the parameters governing the probability of liquefaction, and three-dimensional finite element modeling of the interaction between soil and pipelines. The results of this study indicate that the effect of liquefaction on buried pipelines is a function of pipe diameter, soil type, and peak ground acceleration. There is a clear increase in the percentage of damage with increasing liquefaction severity. The results also indicate that although in this form of analysis the damage is always associated with a particular pipe material, the nominally defined “failures” are dominated by failures of particular components (joints, connections, fire hydrant details, crossovers, laterals) rather than material failures. Finally, retrofit measures are suggested to decrease the liquefaction risk to buried pipelines.
Keywords: liquefaction, buried pipelines, lifelines, earthquake, finite element method
Procedia PDF Downloads 510
326 Vertebral Artery Dissection Complicating Pregnancy and Puerperium: Case Report and Review of the Literature
Authors: N. Reza Pour, S. Chuah, T. Vo
Abstract:
Background: Vertebral artery dissection (VAD) is a rare complication of pregnancy. It can occur spontaneously or following a traumatic event. The pathogenesis is unclear. Predisposing factors include chronic hypertension, Marfan’s syndrome, fibromuscular dysplasia, vasculitis and cystic medial necrosis. Physiological changes of pregnancy have also been proposed as potential mechanisms of injury to the vessel wall. The clinical presentation varies, and it can present as a headache, neck pain, diplopia, a transient ischaemic attack, or an ischaemic stroke. Isolated cases of VAD in pregnancy and the puerperium have been reported in the literature. One case was found to have a posterior circulation stroke as a result of bilateral VAD, and labour was induced at 37 weeks gestation for preeclampsia. Another patient at 38 weeks had severe neck pain that persisted after induction for elevated blood pressure, and arteriography showed right VAD postpartum. A single case of lethal VAD in pregnancy with subsequent massive subarachnoid haemorrhage has been reported, which was confirmed at autopsy. Case Presentation: We report two cases of vertebral artery dissection in pregnancy. The first patient was a 32-year-old primigravida who presented in the 38th week of pregnancy in early labour with a blood pressure (BP) of 130/70 on arrival. After 2 hours, the patient developed a severe headache with blurry vision, and her BP was 238/120. Despite treatment with an intravenous antihypertensive, she had an eclamptic fit. Magnesium sulfate was started and an emergency Caesarean section was performed under general anaesthesia. On the second day after the operation, she developed left-sided neck pain. Magnetic resonance imaging (MRI) angiography confirmed a short-segment left vertebral artery dissection at the level of C3. The patient was treated with aspirin and remained stable without any neurological deficit.
The second patient was a 33-year-old primigravida who was admitted to the hospital at 36 weeks gestation with a BP of 155/105, a constant headache and visual disturbances. She was medicated with an oral antihypertensive agent. On day 4, she complained of right-sided neck pain. An MRI angiogram revealed a short-segment dissection of the right vertebral artery at the C2-3 level. The pregnancy was terminated the same day by emergency Caesarean section, and anticoagulation was started subsequently. Post-operative recovery was complicated by a rectus sheath haematoma requiring evacuation. She was discharged home on aspirin without any neurological sequelae. Conclusion: Because of the collateral circulation, unilateral vertebral artery dissections may go unrecognized and may be more common than suspected. The outcome for most patients is benign, reflecting the adequacy of the collateral circulation in young patients. Spontaneous VAD is usually treated with anticoagulation or antiplatelet therapy for a minimum of 3-6 months to prevent future ischaemic events, allowing the dissection to heal on its own. We report two cases of VAD in the context of hypertensive disorders of pregnancy with acceptable outcomes. A high level of vigilance is required, particularly with preeclamptic patients presenting with head or neck pain, to allow an early diagnosis. As we hypothesize, early and aggressive management of vertebral artery dissection may prevent further complications.
Keywords: eclampsia, preeclampsia, pregnancy, vertebral artery dissection
Procedia PDF Downloads 275
325 The Impact of Tourism on the Intangible Cultural Heritage of Pilgrim Routes: The Case of El Camino de Santiago
Authors: Miguel Angel Calvo Salve
Abstract:
This qualitative and quantitative study will identify the impact of tourism pressure on the intangible cultural heritage of the pilgrim route of El Camino de Santiago (Saint James Way) and propose an approach to a sustainable tourism model for these cultural routes. Since 1993, the Spanish section of the Pilgrim Route of El Camino de Santiago has been on the World Heritage List. In 1994, the International Committee on Cultural Routes (CIIC-ICOMOS) initiated its work with the goal of studying, preserving, and promoting cultural routes and their significance as a whole. The 2008 ICOMOS Charter on Cultural Routes likewise pointed out the importance of both tangible and intangible heritage and the need for a holistic vision in preserving these important cultural assets. Tangible elements provide physical confirmation of the existence of these cultural routes, while the intangible elements give sense and meaning to the route as a whole. The intangible assets of a cultural route are key to understanding the route's significance and its associated heritage values. Like many pilgrim routes, the Route to Santiago, the result of a long evolutionary process, exhibits and is supported by intangible assets, including hospitality, cultural and religious expressions, music, literature, and artisanal trade, among others. A large increase in pilgrims walking the route with very different aims, together with tourism pressure, has shown how fragile and vulnerable the dynamic links between the intangible cultural heritage and the local inhabitants along El Camino are. The economic benefits of cultural routes are commonly fundamental for the micro-economies of the communities living along them, substituting for traditional productive activities, which in turn modifies and has an impact on the surrounding environment and the route itself.
Consumption of heritage is one of the major issues of sustainable preservation, promoted with the intention of revitalizing sites and places. The adaptation of local communities to new conditions aimed at preserving and protecting existing heritage has had a significant impact on this immaterial inheritance. Based on questionnaires administered to pilgrims, tourists and local communities along El Camino during the peak season, and using official statistics from the Galician Pilgrim’s Office, this study will identify the risks and threats to El Camino de Santiago as a cultural route. The threats visible today due to the impact of mass tourism include transformation of tangible heritage, consumerism of the intangible, changes in local activities, loss of authenticity of symbols and spiritual significance, and the transformation of pilgrimage into a tourism ‘product’, among others. The study will also propose measures and solutions to mitigate these impacts and better preserve this type of cultural heritage. It will thereby help the route’s service providers and policymakers to better preserve the cultural route as a whole and ultimately improve the experience of pilgrims.
Keywords: cultural routes, El Camino de Santiago, impact of tourism, intangible heritage
Procedia PDF Downloads 81
324 Automated Facial Symmetry Assessment for Orthognathic Surgery: Utilizing 3D Contour Mapping and Hyperdimensional Computing-Based Machine Learning
Authors: Wen-Chung Chiang, Lun-Jou Lo, Hsiu-Hsia Lin
Abstract:
This study aimed to improve the evaluation of facial symmetry, which is crucial for planning and assessing outcomes in orthognathic surgery (OGS). Facial symmetry plays a key role in both aesthetic and functional aspects of OGS, making its accurate evaluation essential for optimal surgical results. To address the limitations of traditional methods, a different approach was developed, combining three-dimensional (3D) facial contour mapping with hyperdimensional (HD) computing to enhance precision and efficiency in symmetry assessments. The study was conducted at Chang Gung Memorial Hospital, where data were collected from 2018 to 2023 using 3D cone beam computed tomography (CBCT), a highly detailed imaging technique. A large and comprehensive dataset was compiled, consisting of 150 normal individuals and 2,800 patients, totaling 5,750 preoperative and postoperative facial images. These data were critical for training a machine learning model designed to analyze and quantify facial symmetry. The machine learning model was trained to process 3D contour data from the CBCT images, with HD computing employed to power the facial symmetry quantification system. This combination of technologies allowed for an objective and detailed analysis of facial features, surpassing the accuracy and reliability of traditional symmetry assessments, which often rely on subjective visual evaluations by clinicians. In addition to developing the system, the researchers conducted a retrospective review of 3D CBCT data from 300 patients who had undergone OGS. The patients’ facial images were analyzed both before and after surgery to assess the clinical utility of the proposed system. The results showed that the facial symmetry algorithm achieved an overall accuracy of 82.5%, indicating its robustness in real-world clinical applications. Postoperative analysis revealed a significant improvement in facial symmetry, with an average score increase of 51%. 
The mean symmetry score rose from 2.53 preoperatively to 3.89 postoperatively, demonstrating the system's effectiveness in quantifying improvements after OGS. These results underscore the system's potential for providing valuable feedback to surgeons and aiding in the refinement of surgical techniques. The study also led to the development of a web-based system that automates facial symmetry assessment. This system integrates HD computing and 3D contour mapping into a user-friendly platform that allows for rapid and accurate evaluations. Clinicians can easily access this system to perform detailed symmetry assessments, making it a practical tool for clinical settings. Additionally, the system facilitates better communication between clinicians and patients by providing objective, easy-to-understand symmetry scores, which can help patients visualize the expected outcomes of their surgery. In conclusion, this study introduced a valuable and highly effective approach to facial symmetry evaluation in OGS, combining 3D contour mapping, HD computing, and machine learning. The resulting system achieved high accuracy and offers a streamlined, automated solution for clinical use. The development of the web-based platform further enhances its practicality, making it a valuable tool for improving surgical outcomes and patient satisfaction in orthognathic surgery.
Keywords: facial symmetry, orthognathic surgery, facial contour mapping, hyperdimensional computing
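The abstract does not disclose the scoring algorithm itself. As a purely hypothetical illustration of the general idea behind landmark-based symmetry quantification (not the authors' published method), one could mirror 3D landmarks across the midsagittal plane and measure the residual distance to their contralateral counterparts:

```python
import numpy as np

def asymmetry(landmarks, pairs, midline_x=0.0):
    """Mean distance (in the input's units) between each landmark and the
    mirror image of its contralateral counterpart across the plane
    x = midline_x. A value of 0.0 means perfect left-right symmetry.
    Hypothetical metric for illustration only."""
    pts = np.asarray(landmarks, dtype=float)
    mirrored = pts.copy()
    mirrored[:, 0] = 2.0 * midline_x - mirrored[:, 0]  # reflect across midline
    dists = [np.linalg.norm(pts[left] - mirrored[right]) for left, right in pairs]
    return float(np.mean(dists))

# Perfectly mirrored cheek landmarks give zero residual asymmetry
pts = [(-30.0, 10.0, 5.0), (30.0, 10.0, 5.0)]
print(asymmetry(pts, pairs=[(0, 1)]))  # → 0.0
```

A real pipeline would first extract dense contour points from the CBCT surface and estimate the midsagittal plane rather than assume it; the score-mapping step reported in the study is likewise not reproduced here.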
Procedia PDF Downloads 20
323 Academic Achievement in Argentinean College Students: Major Findings in Psychological Assessment
Authors: F. Uriel, M. M. Fernandez Liporace
Abstract:
In the last decade, academic achievement in higher education has become a topic on the agenda in Argentina, given the high rates of adjustment problems, academic failure and dropout, and the low graduation rates in the context of massive classes and traditional teaching methods. Psychological variables such as perceived social support, academic motivation, and learning styles and strategies have much to offer, since their measurement by tests allows a proper diagnosis of their influence on academic achievement. Framed in a major research project, several studies analysed multiple samples totaling 5,135 students attending Argentinean public universities. The first goal was the identification of statistically significant differences in the psychological variables (perceived social support, learning styles, learning strategies, and academic motivation) by age, gender, and degree of academic advance (freshmen versus sophomores). An inferential group-differences study for each psychological dependent variable was thus conducted by means of Student's t-tests, given the features of the data distribution. The second goal, examining associations between the four psychological variables on the one hand and academic achievement on the other, was addressed through correlational studies, calculating Pearson's coefficients and employing grades as the quantitative indicator of academic achievement. The positive and significant results obtained led to the formulation of different predictive models of academic achievement, which had to be tested in terms of fit and predictive power. These models took the four psychological variables above as predictors, using regression equations, examining predictors individually, in pairs, and together, analysing indirect effects as well, and adding the degree of academic advance and gender, which had shown their importance in the first goal's findings.
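The correlational step described above (Pearson's coefficients between psychological scores and grades) can be sketched in a few lines. The variable names and toy values below are illustrative, not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Toy example: motivation scores vs. course grades (illustrative values only)
motivation = [2.0, 3.5, 4.0, 5.0]
grades = [4.0, 7.0, 8.0, 10.0]
print(round(pearson_r(motivation, grades), 3))
```

In practice one would use `scipy.stats.pearsonr`, which also returns a p-value for significance testing, as the study requires.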
The most relevant results were as follows. First, gender showed no influence on any dependent variable. Second, only good achievers perceived high social support from teachers, and male students were prone to perceive less social support. Third, freshmen exhibited a pragmatic learning style, preferring unstructured environments, the use of examples, and simultaneous-visual processing in learning, whereas sophomores exhibited an assimilative learning style, choosing sequential and analytic processing modes. Fourth, despite these preferences, freshmen must deal with abstract contents and sophomores with practical learning situations, due to the study programs in force. Fifth, no differences in academic motivation were found between freshmen and sophomores; however, the latter employ a higher number of more efficient learning strategies. Sixth, low-achieving freshmen lack intrinsic motivation. Seventh, model testing showed that social support, learning styles and academic motivation influence learning strategies, which in turn affect academic achievement in freshmen, particularly males; only learning styles influence achievement in sophomores of both genders, with direct effects. These findings lead to the conclusion that educational psychologists, education specialists, teachers, and universities must plan urgent and major changes. These must be applied in renewed and improved study programs, syllabi and classes, as well as tutoring and training systems. Such developments should target the support and empowerment of students in their academic pathways, and therefore the upgrade of learning quality, especially in the case of freshmen, male freshmen, and low achievers.
Keywords: academic achievement, academic motivation, coping, learning strategies, learning styles, perceived social support
Procedia PDF Downloads 122
322 Product Life Cycle Assessment of Generatively Designed Furniture for Interiors Using Robot Based Additive Manufacturing
Authors: Andrew Fox, Qingping Yang, Yuanhong Zhao, Tao Zhang
Abstract:
Furniture is a very significant subdivision of architecture and its inherent interior design activities. The furniture industry has developed from an artisan-driven craft industry, whose forerunners saw themselves manifested in their crafts and treasured a sense of pride in the creativity of their designs, into what is today largely an anonymous, collective, mass-produced output. Although a very conservative industry, there is great potential for the implementation of collaborative digital technologies, allowing a reconfigured artisan experience to be reawakened in a new and exciting form. The furniture manufacturing industry in general has been slow to adopt new design methodologies using artificial-intelligence-based and rule-based generative design. This tardiness has meant the loss of potential to enhance its capabilities in producing sustainable, flexible, and mass-customizable 'right first time' designs. This paper aims to demonstrate a concept methodology for the creation of alternative and inspiring aesthetic structures for robot-based additive manufacturing (RBAM). These technologies can enable the economic creation of previously unachievable structures, which traditionally would not have been commercially viable to manufacture. The integration of these technologies with the computing power of generative design provides practitioners with tools to create concepts well beyond the reach of even the most accomplished traditional design teams. The paper addresses the problem by introducing generative design methodologies employing the Autodesk Fusion 360 platform. Examination of alternative methods for its use has the potential to significantly reduce the estimated 80% contribution to environmental impact made at the initial design phase.
Though predominantly a design methodology, generative design combined with RBAM has the potential to leverage many lean manufacturing and quality assurance benefits, enhancing the efficiency and agility of modern furniture manufacturing. Through a case study examination of a furniture artifact, the results will be compared to a traditionally designed and manufactured product using the Ecochain Mobius product life cycle assessment (LCA) platform. This will highlight the benefits of both generative design and robot-based additive manufacturing from an environmental impact and manufacturing efficiency standpoint. These step changes in design methodology and environmental assessment have the potential to revolutionise the design-to-manufacturing workflow, giving momentum to the concept of a pre-industrial model of manufacturing with the global demand for a circular economy and bespoke sustainable design at its heart.
Keywords: robot, manufacturing, generative design, sustainability, circular economy, product life cycle assessment, furniture
Procedia PDF Downloads 139
321 Official Game Account Analysis: Factors Influence Users' Judgments in Limited-Word Posts
Authors: Shanhua Hu
Abstract:
Social media, as a critical form of promotion for films, video games, and digital products, has received substantial research attention, but several critical barriers remain: (1) few studies explore the internal and external connections of a product as part of the multimodal context that gives rise to readability and commercial return; (2) multimodal analysis of game publishers' official product accounts and its impact on user behaviors, including purchase intention, social media engagement, and playing time, has not been studied; and (3) no standardized, ecologically valid dataset spanning game types exists for studying the complexity of official accounts' postings over a time period. The proposed research helps to tackle these limitations in order to develop a model of readability study that is more ecologically valid, robust, and thorough. To accomplish this objective, this paper provides a diverse dataset comprising different visual elements and messages collected from the official Twitter accounts of the top 20 best-selling games of 2021. Video game companies target potential users through social media; a popular approach is to set up an official account to maintain exposure. Typically, major game publishers create an official account on Twitter months before a game's release date to post updates on the game's development, announce collaborations, and reveal spoilers. Analyses of tweets from those official Twitter accounts would assist publishers and marketers in identifying how to efficiently and precisely deploy advertising to increase game sales. The purpose of this research is to determine how official game accounts use Twitter to attract new customers, specifically which types of messages are most effective at increasing sales.
The dataset includes the number of days until the actual release date of each Twitter post, the readability of the post (Flesch Reading Ease Score, FRES), the number of emojis used, the number of hashtags, the number of followers of the mentioned users, the categorization of the posts (i.e., spoilers, collaborations, promotions), and the number of video views. The timeline of Twitter postings from official accounts will be compared to the history of pre-orders and sales figures to determine the potential impact of social media posts. This study aims to determine how the above-mentioned characteristics of official accounts' Twitter postings influence the sales of the game and to examine the possible causes of this influence. The outcome will provide researchers with a list of potential aspects that could influence people's judgments of limited-word posts. With increased average online time, users adapt more quickly than before in online information exchange and reading, for example in word choice, sentence length, and the use of emojis or hashtags. The study of official game accounts' promotion will not only enable publishers to create more effective promotion techniques in the future but also provide ideas for future research on the influence of social media posts with a limited number of words on consumers' purchasing decisions. Future research can focus on more specific linguistic aspects, such as precise word choice in advertising.
Keywords: engagement, official account, promotion, Twitter, video game
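The FRES feature mentioned above follows a fixed formula: 206.835 minus 1.015 times the average sentence length (words per sentence) minus 84.6 times the average syllables per word. The sketch below is a minimal illustration using a naive vowel-group syllable heuristic, so its counts only approximate those of dedicated readability libraries such as textstat:

```python
import re

def count_syllables(word):
    """Naive heuristic: count vowel groups, discounting a silent final 'e'."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    n = len(groups)
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text):
    """FRES = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

print(round(flesch_reading_ease("The game is out now. Preorder today."), 2))
```

Higher scores indicate easier text; short promotional tweets typically score high because their sentences are short and their words have few syllables.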
Procedia PDF Downloads 74
320 Reimagining Landscapes: Psychological Responses and Behavioral Shifts in the Aftermath of the Lytton Creek Fire
Authors: Tugba Altin
Abstract:
In an era where the impacts of climate change resonate more pronouncedly than ever, communities globally grapple with events bearing both tangible and intangible ramifications. Situating this within the evolving landscapes of Psychological and Behavioral Sciences, this research probes the profound psychological and behavioral responses evoked by such events. The Lytton Creek Fire of 2021 epitomizes these challenges. While tangible destruction is immediate and evident, the intangible repercussions—emotional distress, disintegration of cultural landscapes, and disruptions in place attachment (PA)—require meticulous exploration. PA, emblematic of the emotional and cognitive affiliations individuals nurture with their environments, emerges as a cornerstone for comprehending how environmental cataclysms influence cultural identity and bonds to land. This study, harmonizing the core tenets of an interpretive phenomenological approach with a hermeneutic framework, underscores the pivotal nature of this attachment. It delves deep into the realm of individuals' experiences post the Lytton Creek Fire, unraveling the intricate dynamics of PA amidst such calamity. The study's methodology deviates from conventional paradigms. Instead of traditional interview techniques, it employs walking audio sessions and photo elicitation methods, granting participants the agency to immerse, re-experience, and vocalize their sentiments in real-time. Such techniques shed light on spatial narratives post-trauma and capture the otherwise elusive emotional nuances, offering a visually rich representation of place-based experiences. Central to this research is the voice of the affected populace, whose lived experiences and testimonies form the nucleus of the inquiry. As they renegotiate their bonds with transformed environments, their narratives reveal the indispensable role of cultural landscapes in forging place-based identities. 
Such revelations accentuate the necessity of integrating both tangible and intangible trauma facets into community recovery strategies, ensuring they resonate more profoundly with affected individuals. Bridging the domains of environmental psychology and behavioral sciences, this research accentuates the intertwined nature of tangible restoration with the imperative of emotional and cultural recuperation post-environmental disasters. It advocates for adaptation initiatives that are rooted in the lived realities of the affected, emphasizing a holistic approach that recognizes the profundity of human connections to landscapes. This research advocates the interdisciplinary exchange of ideas and strategies in addressing post-disaster community recovery strategies. It not only enriches the climate change discourse by emphasizing the human facets of disasters but also reiterates the significance of an interdisciplinary approach, encompassing psychological and behavioral nuances, for fostering a comprehensive understanding of climate-induced traumas. Such a perspective is indispensable for shaping more informed, empathetic, and effective adaptation strategies.
Keywords: place attachment, community recovery, disaster response, restorative landscapes, sensory response, visual methodologies
Procedia PDF Downloads 57