Search results for: Digital Image Correlation

501 The Value of Computerized Corpora in EFL Textbook Design: The Case of Modal Verbs

Authors: Lexi Li

Abstract:

This study aims to contribute to the field of how computer technology can be exploited to enhance EFL textbook design. Specifically, the study demonstrates how computerized native and learner corpora can be used to enhance modal verb treatment in EFL textbooks. The linguistic focus is will, would, can, could, may, might, shall, should, must. The native corpus is the spoken component of BNC2014 (hereafter BNCS2014). The spoken part is chosen because the pedagogical purpose of the textbooks is communication-oriented. Using the standard query option of CQPweb, 5% of each of the nine modals was sampled from BNCS2014. The learner corpus is the POS-tagged Ten-thousand English Compositions of Chinese Learners (TECCL). All the essays under the “secondary school” section were selected. A series of five secondary coursebooks comprise the textbook corpus. All the data in both the learner and the textbook corpora are retrieved through the concordance functions of WordSmith Tools (version 5.0). Data analysis was divided into two parts. The first part compared the patterns of modal verbs in the textbook corpus and BNCS2014 with respect to distributional features, semantic functions, and co-occurring constructions to examine whether the textbooks reflect the authentic use of English. Secondly, the learner corpus was compared with the textbook corpus in terms of use (distributional features, semantic functions, and co-occurring constructions) in order to examine the degree of influence of the textbook on learners’ use of modal verbs. Moreover, the learner corpus was analyzed for the misuse (syntactic errors, e.g., she can sings*.) of the nine modal verbs to uncover potential difficulties that confront learners. The results indicate discrepancies between the textbook presentation of modal verbs and authentic modal use in natural discourse in terms of distributions of frequencies, semantic functions, and co-occurring structures. Furthermore, there are consistent patterns of use between the learner corpus and the textbook corpus with respect to the three above-mentioned aspects, except for could, will and must, partially confirming the correlation between frequency effects and L2 grammar acquisition. Further analysis reveals that the exceptions are caused by both positive and negative L1 transfer, indicating that frequency effects can be intercepted by L1 interference. In addition, error analysis revealed that could, would, should and must are the most difficult for Chinese learners due to both inter-linguistic and intra-linguistic interference. The discrepancies between the textbook corpus and the native corpus point to a need to adjust the presentation of modal verbs in the textbooks in terms of frequencies, different meanings, and verb-phrase structures. Along with the adjustment of modal verb treatment based on authentic use, it is important for textbook writers to take into consideration L1 interference as well as learners’ difficulties in their use of modal verbs. The present study is a methodological showcase of combining native and learner corpora to enhance the language authenticity of EFL textbooks and their appropriateness for learners.
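
As a rough illustration of the kind of cross-corpus comparison described above, the sketch below contrasts per-million-word frequencies of the nine modals in a textbook corpus against a reference corpus and computes a log-likelihood keyness statistic; the corpus names and counts are hypothetical, and the authors' actual pipeline used CQPweb and WordSmith Tools, so this is only an analogous outline.

    # Hypothetical sketch: compare normalized modal-verb frequencies between two corpora
    # using per-million-word rates and a log-likelihood keyness statistic.
    # Corpus names and counts are illustrative, not the study's actual data.
    import math

    MODALS = ["will", "would", "can", "could", "may", "might", "shall", "should", "must"]

    def per_million(count, corpus_size):
        return count / corpus_size * 1_000_000

    def log_likelihood(a, b, size_a, size_b):
        """2 * sum(O * ln(O/E)) for a two-corpus frequency contrast."""
        e_a = size_a * (a + b) / (size_a + size_b)
        e_b = size_b * (a + b) / (size_a + size_b)
        ll = 0.0
        for obs, exp in ((a, e_a), (b, e_b)):
            if obs > 0:
                ll += obs * math.log(obs / exp)
        return 2 * ll

    def compare(textbook_counts, reference_counts, textbook_size, reference_size):
        for modal in MODALS:
            a = textbook_counts.get(modal, 0)
            b = reference_counts.get(modal, 0)
            print(f"{modal:>6}: textbook {per_million(a, textbook_size):8.1f}/M, "
                  f"reference {per_million(b, reference_size):8.1f}/M, "
                  f"LL = {log_likelihood(a, b, textbook_size, reference_size):6.2f}")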

Keywords: EFL textbooks, learner corpus, modal verbs, native corpus

Procedia PDF Downloads 104
500 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments

Authors: Skyler Kim

Abstract:

An early diagnosis of leukemia has always been a challenge to doctors and hematologists. On a worldwide basis, it was reported that there were approximately 350,000 new cases in 2012, and diagnosing leukemia was time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnosis tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods was the AI approach. This approach has become a major trend in recent years, and several research groups have been working on developing these diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger sets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity throughout different ages. We decided to select acute lymphocytic leukemia to develop our diagnostic system since acute lymphocytic leukemia is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia. The results from this development work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, which consists of 15135 total images, 8491 of these are images of abnormal cells, and 5398 images are normal. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger sets. The proposed diagnostic system has the function of detecting and classifying leukemia. Different from other AI approaches, we explore hybrid architectures to improve the current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50. Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, fusing the features from specific abstraction layers can be deemed as auxiliary features and lead to further improvement of the classification accuracy. In this approach, features extracted from the lower levels are combined into higher dimension feature maps to help improve the discriminative capability of intermediate features and also overcome the problem of network gradient vanishing or exploding. By comparing VGG19 and ResNet50 and the proposed hybrid model, we concluded that the hybrid model had a significant advantage in accuracy. The detailed results of each model’s performance and their pros and cons will be presented in the conference.
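
The hybrid feature-fusion idea described above can be sketched with a transfer-learning setup along the following lines; this is a minimal illustration assuming tf.keras with ImageNet-pretrained VGG19 and ResNet50 backbones, and the input size, pooling, fusion point and classifier head are assumptions rather than the authors' exact architecture.

    # Minimal sketch of feature-level fusion of two pretrained backbones for a
    # binary (abnormal vs. normal cell) classifier. Layer choices are illustrative.
    from tensorflow.keras import layers, Model
    from tensorflow.keras.applications import VGG19, ResNet50

    def build_hybrid(input_shape=(224, 224, 3)):
        inputs = layers.Input(shape=input_shape)
        vgg = VGG19(include_top=False, weights="imagenet", input_shape=input_shape)
        res = ResNet50(include_top=False, weights="imagenet", input_shape=input_shape)
        vgg.trainable = False   # transfer learning: freeze both backbones
        res.trainable = False
        # Per-backbone ImageNet preprocessing is omitted here for brevity.
        f_vgg = layers.GlobalAveragePooling2D()(vgg(inputs))
        f_res = layers.GlobalAveragePooling2D()(res(inputs))
        fused = layers.Concatenate()([f_vgg, f_res])        # feature-level fusion
        x = layers.Dense(256, activation="relu")(fused)
        x = layers.Dropout(0.5)(x)
        outputs = layers.Dense(1, activation="sigmoid")(x)  # abnormal vs. normal
        model = Model(inputs, outputs)
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        return model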

Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning

Procedia PDF Downloads 175
499 The Legal Nature of Grading Decisions and the Implications for Handling of Academic Complaints in or out of Court: A Comparative Legal Analysis of Academic Litigation in Europe

Authors: Kurt Willems

Abstract:

This research examines complaints against grading in higher education institutions in four different European regions: England and Wales, Flanders, the Netherlands, and France. The aim of the research is to examine the correlation between the applicable type of complaint handling on the one hand, and selected qualities of the higher education landscape and of public law on the other hand. All selected regions report a rising number of complaints against grading decisions, not only as to internal complaint handling within the institution but also judicially if the dispute persists. Some regions deem their administrative court system appropriate to deal with grading disputes (France) or have even erected a specialty administrative court to facilitate access (Flanders, the Netherlands). However, at the same time, different types of (governmental) dispute resolution bodies have been established outside of the judicial court system (England and Wales, and to a lesser extent France and the Netherlands). Those dispute procedures do not seem coincidental. Public law issues such as the underlying legal nature of the education institution and, eventually, the grading decision itself, have an impact on the way the academic complaint procedures are developed. Indeed, in most of the selected regions, contractual disputes enjoy different legal protection than administrative decisions, making the legal qualification of the relationship between student and higher education institution highly relevant. At the same time, the scope of competence of government over different types of higher education institutions, whether direct or indirect (e.g., through financing and quality control), is relevant as well to comprehend why certain dispute handling procedures have been established for students. To answer the above questions, the doctrinal and comparative legal method is used. The normative framework is distilled from the relevant national legislative rules and their preparatory texts, the legal literature, the (published) case law of academic complaints and the available governmental reports. The research is mainly theoretical in nature, examining different topics of public law (mainly administrative law) and procedural law in the context of grading decisions. The internal appeal procedure within the education institution is largely left out of the scope of the research, as well as different types of non-governmental-imposed cooperation between education institutions, given the public law angle of the research questions. The research results in the categorization of different academic complaint systems, and an analysis of the possibility to introduce each of those systems in different countries, depending on their public law system and higher education system. By doing so, the research also adds to the debate on the public-private divide in higher education systems, and its effect on academic complaints handling.

Keywords: higher education, legal qualification of education institution, legal qualification of grading decisions, legal protection of students, academic litigation

Procedia PDF Downloads 214
498 Prevalence of Fast-Food Consumption and Overweight or Obesity among Employees (Aged 25-45 Years) in the Private Sector: A Cross-Sectional Study in Colombo, Sri Lanka

Authors: Arosha Rashmi De Silva, Ananda Chandrasekara

Abstract:

This study seeks to comprehensively examine the influence of fast-food consumption and physical activity levels on the body weight of young employees within the private sector of Sri Lanka. The escalating popularity of fast food has raised concerns about its nutritional content and associated health ramifications. To investigate this phenomenon, a cohort of 100 individuals aged between 25 and 45, employed in Sri Lanka's private sector, participated in this research. These participants provided socio-demographic data through a standardized questionnaire, enabling the characterization of their backgrounds. Additionally, participants disclosed their frequency of fast-food consumption and engagement in physical activities, utilizing validated assessment tools. The collected data was meticulously compiled into an Excel spreadsheet and subjected to rigorous statistical analysis. Descriptive statistics, such as percentages and proportions, were employed to delineate the body weight status of the participants. Employing chi-square tests, our study identified significant associations between fast-food consumption, levels of physical activity, and body weight categories. Furthermore, through binary logistic regression analysis, potential risk factors contributing to overweight and obesity within the young employee cohort were elucidated. Our findings revealed a disconcerting trend, with 6% of participants classified as underweight, 32% within the normal weight range, and a substantial 62% categorized as overweight or obese. These outcomes underscore the alarming prevalence of overweight and obesity among young private-sector employees, particularly within the bustling urban landscape of Colombo, Sri Lanka. The data strongly imply a robust correlation between fast-food consumption, sedentary behaviors, and higher body weight categories, reflective of the evolving lifestyle patterns associated with the nation's economic growth. This study emphasizes the urgent need for effective interventions to counter the detrimental effects of fast-food consumption. The implementation of awareness campaigns elucidating the adverse health consequences of fast food, coupled with comprehensive nutritional education, can empower individuals to make informed dietary choices. Workplace interventions, including the provision of healthier meal alternatives and the facilitation of physical activity opportunities, are essential in fostering a healthier workforce and mitigating the escalating burden of overweight and obesity in Sri Lanka.
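
A hedged outline of the reported analysis steps (a chi-square test of association followed by binary logistic regression) might look as follows; the DataFrame and column names are hypothetical placeholders, not the study's actual variables.

    # Illustrative sketch of the reported analysis: chi-square association test,
    # then binary logistic regression. Column names such as 'bmi_category',
    # 'fastfood_freq', 'activity_level', 'age' and 'sex' are hypothetical.
    import pandas as pd
    from scipy.stats import chi2_contingency
    import statsmodels.formula.api as smf

    def analyse(df: pd.DataFrame):
        # Chi-square test: fast-food consumption frequency vs. body-weight category
        table = pd.crosstab(df["fastfood_freq"], df["bmi_category"])
        chi2, p, dof, _ = chi2_contingency(table)
        print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")

        # Binary logistic regression: overweight/obese (1) vs. not (0)
        df = df.assign(overweight=df["bmi_category"].isin(["overweight", "obese"]).astype(int))
        model = smf.logit("overweight ~ C(fastfood_freq) + C(activity_level) + age + C(sex)",
                          data=df).fit()
        print(model.summary())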

Keywords: fast food consumption, obese, overweight, physical activity level

Procedia PDF Downloads 24
497 Innovative Food Related Modification of the Day-Night Task Demonstrates Impaired Inhibitory Control among Patients with Binge-Purge Eating Disorder

Authors: Sigal Gat-Lazer, Ronny Geva, Dan Ramon, Eitan Gur, Daniel Stein

Abstract:

Introduction: Eating disorders (ED) are common psychopathologies which involve distorted body image and eating disturbances. Binge-purge eating disorders (B/P ED) are characterized by repetitive events of binge eating followed by purges. Patients with B/P ED may be seen as impulsive, especially in relation to food stimuli and affective conditions. The current study included an innovative modification of the day-night task designed to assess inhibitory control among patients with B/P ED. Methods: This prospective study included 50 patients with B/P ED during the acute phase of illness (T1) upon their admission to a specialized ED department in a tertiary center. Thirty-four patients repeated the study towards discharge to ambulatory care (T2). Treatment effect was evaluated by BMI and by emotional questionnaires regarding depression and anxiety (the Beck Depression Inventory and the State-Trait Anxiety Inventory). The control group included 36 healthy controls with matched demographic parameters who performed both T1 and T2 assessments. The current modification is based on the emotional day-night task (EDNT), which involves five emotional stimuli added to the sun and moon pictures presented to participants. In the current study, we designed the food-emotional day-night task (F-EDNT) with food stimuli of an egg and a banana, which resemble the sun and moon, respectively, in five emotional states (angry, sad, happy, scrambled and neutral). During this computerized task, participants were instructed to press the “day” button in response to moon and banana stimuli and the “night” button when the sun and egg were presented. Accuracy (A) and reaction time (RT) were evaluated and compared between EDNT and F-EDNT as a reflection of participants’ inhibitory control. Results: Patients with B/P ED had significantly improved BMI, depression and anxiety scores on T2 compared to T1 (all p<0.001). Task performance was similar among patients and controls in the EDNT, without significant A or RT differences in both T1 and T2. On F-EDNT during T1, B/P ED patients had significantly reduced accuracy in 4/5 emotional stimuli compared to controls: angry (73±25% vs. 84±15%, respectively), sad (69±25% vs. 80±18%, respectively), happy (73±24% vs. 82±18%, respectively) and scrambled (74±24% vs. 84±13%, respectively, all p<0.05). Additionally, patients’ RT to food stimuli was significantly faster compared to neutral ones, in both cry and neutral emotional conditions (356±146 vs. 400±141 and 378±124 vs. 412±116 msec, respectively, p<0.05). These significant differences between groups as a function of stimulus type were diminished on T2. Conclusion: Processing food-related content, particularly in an emotional context, seems to be impaired in patients with B/P ED during the acute phase of their illness and elicits greater impulsivity. Innovative modifications using such procedures seem to be sensitive to patients’ illness phase and thus may be implemented during screening and follow-up throughout the clinical management of these patients.
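
The group comparisons reported above (accuracy per emotional condition between patients and controls, and reaction time to food versus neutral stimuli within patients) could be run along the following lines; the exact tests used by the authors are not specified, so this sketch assumes Welch and paired t-tests and hypothetical column names.

    # Hedged sketch of between-group accuracy comparisons per condition and a
    # within-group food vs. neutral reaction-time comparison. Column names such as
    # 'group', 'condition', 'accuracy', 'participant_id', 'rt_food_ms' and
    # 'rt_neutral_ms' are assumptions.
    import pandas as pd
    from scipy import stats

    def compare_groups(df: pd.DataFrame):
        for cond, sub in df.groupby("condition"):
            pat = sub.loc[sub["group"] == "patient", "accuracy"]
            ctl = sub.loc[sub["group"] == "control", "accuracy"]
            t, p = stats.ttest_ind(pat, ctl, equal_var=False)   # Welch's t-test
            print(f"{cond}: accuracy t={t:.2f}, p={p:.3f}")

        patients = df[df["group"] == "patient"].drop_duplicates("participant_id")
        t, p = stats.ttest_rel(patients["rt_food_ms"], patients["rt_neutral_ms"])
        print(f"food vs. neutral RT (paired): t={t:.2f}, p={p:.3f}")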

Keywords: binge purge eating disorders, day night task modification, eating disorders, food related stimulations

Procedia PDF Downloads 368
496 A Principal’s Role in Creating and Sustaining an Inclusive Environment

Authors: Yazmin Pineda Zapata

Abstract:

Leading a complete school and culture transformation can be a daunting task for any administrator. This is especially true when change agents are advocating for inclusive reform in their schools. As leaders embark on this journey, they must ascertain that an inclusive environment is not a place, a classroom, or a resource setting; it is a place of acceptance nurtured by supportive and meaningful learning opportunities where all students can thrive. A qualitative approach, phenomenology, was used to investigate principals’ actions and behaviors that supported inclusive schooling for students with disabilities. Specifically, this study sought to answer the following research question: How do leaders develop and maintain inclusive education? Fourteen K-12 principals, purposefully selected from various sources (e.g., the School Wide Integrated Framework for Transformation (SWIFT), the Maryland Coalition for Inclusive Education (MCIE), the Arc of Texas Inclusion Works organization, the Association for Persons with Severe Handicaps (TASH), the CAL State Summer Institute in San Marcos, and the PEAK Parent Center) and/or other recognitions, were interviewed individually using a semi-structured protocol. Upon completion of data collection, all interviews were transcribed and marked using a priori coding to analyze the responses and establish a correlation among Villa and Thousand’s five organizational supports to achieve inclusive educational reform: Vision, Skills, Incentives, Resources, and Action Plan. The findings of this study reveal the insights of principals who met specific criteria and whose schools had been highlighted as exemplary inclusive schools. Results show that by implementing the five organizational supports, principals were able to develop and sustain successful inclusive environments where both teachers and students were motivated, made capable, and supported through the redefinition and restructuring of systems within the school. Various key details of the five variables for change depict essential components within these systems, which include quality professional development, coaching and modeling of co-teaching strategies, collaborative co-planning, teacher leadership, and continuous stakeholder (e.g., teachers, students, support staff, and parents) involvement. The administrators in this study demonstrated the valuable benefits of inclusive education for students with disabilities and their typically developing peers. Together with their teaching staff and school community, school leaders became capable stakeholders who promoted the vision of inclusion, planned a structured approach, and took action to make it a reality.

Keywords: Inclusive education, leaders, principals, shared-decision making, shared leadership, special education, sustainable change

Procedia PDF Downloads 55
495 Association of TNF-α and Its Receptor TNFRSF1B Polymorphisms with Pulmonary Tuberculosis in Tomsk, Russian Federation

Authors: K. A. Gladkova, N. P. Babushkina, E. Y. Bragina

Abstract:

Purpose: Tuberculosis (TB), caused by Mycobacterium tuberculosis, is one of the major public health problems worldwide. It is clear that the immune response to M. tuberculosis infection involves a balance between inflammatory and anti-inflammatory responses, in which Tumour Necrosis Factor-α (TNF-α) plays a key role as a pro-inflammatory cytokine. TNF-α is involved in various cellular immune responses via binding to its two types of membrane-bound receptors, TNFRSF1A and TNFRSF1B. Importantly, some variants of the TNFRSF1B gene have been considered as possible markers of host susceptibility to TB. However, data on the possible impact of TNF-α and its receptor gene polymorphisms on TB cases in Tomsk are missing. Thus, the purpose of our study was to investigate polymorphisms of the TNF-α (rs1800629) and its receptor TNFRSF1B (rs652625 and rs525891) genes in the population of Tomsk and to evaluate their possible association with the development of pulmonary TB. Materials and Methods: The population distribution of the gene polymorphisms was investigated in a case-control study based on a group of people from Tomsk. Human blood was collected during routine patient examination at the Tomsk Regional TB Dispensary. Altogether, 234 TB-positive patients (80 women, 154 men, average age 28 years) and 205 healthy controls (153 women, 52 men, average age 47 years) were investigated. DNA was extracted from blood plasma by the phenol-chloroform method. Genotyping was carried out by a single-nucleotide-specific real-time PCR assay. Results: First, an interpopulational comparison was carried out between healthy individuals from Tomsk and available data from the 1000 Genomes project. For polymorphism rs1800629, the Tomsk population was significantly different from the Japanese population (P = 0.0007) but similar to the following European subpopulations: Italians (P = 0.052), Finns (P = 0.124) and British (P = 0.910). For polymorphism rs525891, the group from Tomsk was significantly different from the population of South Africa (P = 0.019). However, rs652625 demonstrated significant differences from Asian populations: Chinese (P = 0.03) and Japanese (P = 0.004). Next, we compared healthy individuals with patients with TB. No association was detected between the rs1800629 and rs652625 polymorphisms and TB cases. Importantly, the AT genotype of polymorphism rs525891 was significantly associated with resistance to TB (odds ratio (OR) = 0.61; 95% confidence interval (CI): 0.41-0.9; P < 0.05). Conclusion: To the best of our knowledge, the polymorphism of TNFRSF1B (rs525891) was associated with TB, while the AT genotype is protective [OR = 0.61] in the Tomsk population. In contrast, no significant correlation was detected between the TNF-α (rs1800629) and TNFRSF1B (rs652625) polymorphisms and pulmonary TB cases among the population of Tomsk. In conclusion, our data expand the molecular particularities associated with TB. The study was supported by a grant from the Russian Foundation for Basic Research (#15-04-05852).
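
The case-control measure reported above (an odds ratio with a 95% confidence interval for a given genotype) can be computed from a 2x2 table as sketched below; the counts are placeholders, since the abstract reports only the resulting OR.

    # Hypothetical sketch: odds ratio with a Woolf 95% confidence interval for
    # carrying a given genotype (e.g., AT of rs525891) among TB cases vs. healthy
    # controls. The counts below are placeholders, not the study's data.
    import math

    def odds_ratio_ci(case_exposed, case_unexposed, ctrl_exposed, ctrl_unexposed, z=1.96):
        or_ = (case_exposed * ctrl_unexposed) / (case_unexposed * ctrl_exposed)
        se_log_or = math.sqrt(1 / case_exposed + 1 / case_unexposed
                              + 1 / ctrl_exposed + 1 / ctrl_unexposed)
        lo = math.exp(math.log(or_) - z * se_log_or)
        hi = math.exp(math.log(or_) + z * se_log_or)
        return or_, (lo, hi)

    # Placeholder 2x2 table: genotype carriers / non-carriers in cases and controls
    or_, (lo, hi) = odds_ratio_ci(case_exposed=80, case_unexposed=154,
                                  ctrl_exposed=95, ctrl_unexposed=110)
    print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")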

Keywords: polymorphism, tuberculosis, TNF-α, TNFRSF1B gene

Procedia PDF Downloads 162
494 Spectroscopic Autoradiography of Alpha Particles on Geologic Samples at the Thin Section Scale Using a Parallel Ionization Multiplier Gaseous Detector

Authors: Hugo Lefeuvre, Jerôme Donnard, Michael Descostes, Sophie Billon, Samuel Duval, Tugdual Oger, Herve Toubon, Paul Sardini

Abstract:

Spectroscopic autoradiography is a method of interest for geological sample analysis. Indeed, researchers may face different issues such as radioelement identification and quantification in the field of environmental studies. Imaging gaseous ionization detectors find their place in geosciences for conducting specific measurements of radioactivity to improve the monitoring of natural processes using naturally-occurring radioactive tracers, but also for the nuclear industry linked to the mining sector. In geological samples, the location and identification of the radioactive-bearing minerals at the thin-section scale remains a major challenge as the detection limit of the usual elementary microprobe techniques is far higher than the concentration of most of the natural radioactive decay products. The spatial distribution of each decay product in the case of uranium in a geomaterial is interesting for relating radionuclides concentration to the mineralogy. The present study aims to provide a spectroscopic autoradiography analysis method for measuring the initial energy of alpha particles with a parallel ionization multiplier gaseous detector. The analysis method has been developed thanks to Geant4 modelling of the detector. The tracks of alpha particles recorded in the gas detector allow the simultaneous measurement of the initial point of emission and the reconstruction of the initial particle energy by a selection based on the linear energy distribution. This spectroscopic autoradiography method was successfully used to reproduce the alpha spectra from a 238U decay chain on a geological sample at the thin-section scale. The characteristics of this measurement are an energy spectrum resolution of 17.2% (FWHM) at 4647 keV and a spatial resolution of at least 50 µm. Even if the efficiency of energy spectrum reconstruction is low (4.4%) compared to the efficiency of a simple autoradiograph (50%), this novel measurement approach offers the opportunity to select areas on an autoradiograph to perform an energy spectrum analysis within that area. This opens up possibilities for the detailed analysis of heterogeneous geological samples containing natural alpha emitters such as uranium-238 and radium-226. This measurement will allow the study of the spatial distribution of uranium and its descendants in geo-materials by coupling with scanning electron microscope characterizations. The direct application of this dual modality (energy-position) of analysis will be the subject of future developments. The measurement of the radioactive equilibrium state of heterogeneous geological structures and the quantitative mapping of 226Ra radioactivity are now being actively studied.
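
As an illustration of how the quoted energy resolution (FWHM as a percentage of the peak energy, e.g. about 17% at 4647 keV) can be extracted from a reconstructed spectrum, the sketch below fits a Gaussian to an alpha peak; the spectrum arrays and fit window are assumptions.

    # Illustrative sketch: estimate the relative energy resolution of a reconstructed
    # alpha peak by fitting a Gaussian around it. Spectrum arrays and the fit window
    # are assumptions, not the authors' data.
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(e, amplitude, mu, sigma):
        return amplitude * np.exp(-0.5 * ((e - mu) / sigma) ** 2)

    def fwhm_resolution(energies_kev, counts, window=(4000.0, 5200.0)):
        mask = (energies_kev >= window[0]) & (energies_kev <= window[1])
        e, c = energies_kev[mask], counts[mask]
        p0 = (c.max(), e[np.argmax(c)], 100.0)            # initial guesses
        (amplitude, mu, sigma), _ = curve_fit(gaussian, e, c, p0=p0)
        fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)
        return mu, fwhm, 100.0 * fwhm / mu                # peak (keV), FWHM (keV), resolution (%)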

Keywords: alpha spectroscopy, digital autoradiography, mining activities, natural decay products

Procedia PDF Downloads 133
493 The Rite of Jihadification in ISIS Modified Video Games: Mass Deception and Dialectic of Religious Regression in Technological Progression

Authors: Venus Torabi

Abstract:

ISIS, the terrorist organization, modified two videogames, ARMA III and Grand Theft Auto 5 (2013), as means of online recruitment and ideological propaganda. The urge to study the mechanism at work, whether it has been successful or not, drives (Digital) Humanities experts to explore how codes of terror, Islamic ideology and recruitment strategies are incorporated into the ludic mechanics of videogames. Another aspect of the significance lies in the fact that this is a latent problem that has not been fully addressed in an interdisciplinary framework prior to this study, to the best of the researcher’s knowledge. Therefore, due to the complexity of the subject, the present paper engages with game studies and philosophical and religious poles to form the methodology of the research. As a contextualized epistemology of such exploitation of videogames, the core argument builds on the notion of “Culture Industry” proposed by Theodor W. Adorno and Max Horkheimer in Dialectic of Enlightenment (2002). This article posits that the ideological underpinnings of ISIS’s cause, corroborated by the action-bound mechanics of the videogames, are in line with adherence to Islamic Eschatology as a furnishing ground and an excuse for exercising terrorism. It is an account of ISIS’s modification of the videogames, a tool of technological progression, to practice online radicalization. Dialectically, this practice is wrapped in rhetoric for recognizing a religious myth (the advent of a savior) as a hallmark of regression. The study puts forth that ISIS’s wreaking havoc on the world, both in reality and within action videogames, is negotiating the process of self-assertion in the players of such videogames (by assuming oneself to be a member of the terrorists) that leads to self-annihilation. It tries to unfold how ludic Mod videogames are misused as tools of mass deception towards ethnic cleansing in reality and in line with the distorted Eschatological myth. To conclude, this study posits videogames to be a new avenue of mass deception in the framework of the Culture Industry. Yet, this emerges as a two-edged sword of mass deception in ISIS’s modification of videogames. It shows that ISIS is not only trying to hijack minds through online/ludic recruitment; it also potentially deceives Muslim communities or those prone to radicalization into believing that its terrorist practices are preparing the world for the advent of a religious savior based on Islamic Eschatology. This is to claim that the harsh actions of the videogames are potentially seeding minds with terrorist propaganda and numbing them to violence. The real world becomes an extension of that harsh virtual environment in a ludic/actual continuum, an extension that contributes to the mass deception mechanism of the terrorists in a clandestine manner.

Keywords: culture industry, dialectic, ISIS, islamic eschatology, mass deception, video games

Procedia PDF Downloads 126
492 CSR Communication Strategies: Stakeholder and Institutional Theories Perspective

Authors: Stephanie Gracelyn Rahaman, Chew Yin Teng, Manjit Singh Sandhu

Abstract:

Corporate scandals have made stakeholders apprehensive of large companies and expect greater transparency in CSR matters. However, companies find it challenging to strategically communicate CSR to intended stakeholders and in the process may fall short of maximizing their CSR efforts. Given that stakeholders have the ability to either reward good companies or take legal action against or boycott corporate brands that do not act socially responsibly, companies must create a shared understanding of their CSR activities. As a result, communication has become a strategy for many companies to demonstrate CSR engagement and to minimize stakeholder skepticism. The main objective of this research is to examine the types of CSR communication strategies and predictors that guide CSR communication strategies. Employing Morsing & Schultz’s guide on CSR communication strategies, the study integrates stakeholder and institutional theory to develop a conceptual framework. The conceptual framework hypothesized that stakeholder (instrumental and normative) and institutional (regulatory environment, nature of business, mimetic intention, CSR focus and corporate objectives) dimensions would drive CSR communication strategies. Preliminary findings from semi-structured interviews in Malaysia are consistent with the conceptual model in that stakeholder and institutional expectations guide CSR communication strategies. Findings show that most companies use two-way communication strategies. Companies that identified employees, the public or customers as key stakeholders have started to embrace social media to be in sync with new trends of communication, especially with Gen Y, which is their priority. Some companies creatively use multiple communication channels because they recognize that different stakeholders favor different communication channels. Therefore, it appears that companies use two-way communication strategies to complement the perceived limitations of one-way communication strategies, as some companies prefer a more interactive platform to strategically engage stakeholders in CSR communication. In addition to stakeholders, institutional expectations also play a vital role in influencing CSR communication. Due to industry peer pressure and corporate objectives (attracting international investors and customers), companies may be more driven to excel in social performance. For these reasons, companies tend to go beyond the basic mandatory requirements, excel in CSR activities and be known as companies that champion CSR. In conclusion, companies use more two-way than one-way communication, and companies use a combination of one-way and two-way communication to target different stakeholders, driven by stakeholder and institutional dimensions. Finally, in order to find out if the conceptual framework actually fits the Malaysian context, companies’ responses regarding expected organizational outcomes from communicating CSR were gathered from the interview transcripts. Thereafter, findings are presented to show some of the key organizational outcomes (visibility and brand recognition, portraying a responsible image, attracting prospective employees, positive word-of-mouth, etc.) that companies in Malaysia expect from CSR communication. Based on these findings, the conceptual framework has been refined to include the newly identified organizational outcomes.

Keywords: CSR communication, CSR communication strategies, stakeholder theory, institutional theory, conceptual framework, Malaysia

Procedia PDF Downloads 270
491 Verification of Low-Dose Diagnostic X-Ray as a Tool for Relating Vital Internal Organ Structures to External Body Armour Coverage

Authors: Natalie A. Sterk, Bernard van Vuuren, Petrie Marais, Bongani Mthombeni

Abstract:

Injuries to the internal structures of the thorax and abdomen remain a leading cause of death among soldiers. Body armour is a standard issue piece of military equipment designed to protect the vital organs against ballistic and stab threats. When configured for maximum protection, the excessive weight and size of the armour may limit soldier mobility and increase physical fatigue and discomfort. Providing soldiers with more armour than necessary may, therefore, hinder their ability to react rapidly in life-threatening situations. The capability to determine the optimal trade-off between the amount of essential anatomical coverage and hindrance on soldier performance may significantly enhance the design of armour systems. The current study aimed to develop and pilot a methodology for relating internal anatomical structures with actual armour plate coverage in real-time using low-dose diagnostic X-ray scanning. Several pilot scanning sessions were held at Lodox Systems (Pty) Ltd head-office in South Africa. Testing involved using the Lodox eXero-dr to scan dummy trunk rigs at various degrees and heights of measurement; as well as human participants, wearing correctly fitted body armour while positioned in supine, prone shooting, seated and kneeling shooting postures. The verification of sizing and metrics obtained from the Lodox eXero-dr were then confirmed through a verification board with known dimensions. Results indicated that the low-dose diagnostic X-ray has the capability to clearly identify the vital internal structures of the aortic arch, heart, and lungs in relation to the position of the external armour plates. Further testing is still required in order to fully and accurately identify the inferior liver boundary, inferior vena cava, and spleen. The scans produced in the supine, prone, and seated postures provided superior image quality over the kneeling posture. The X-ray-source and-detector distance from the object must be standardised to control for possible magnification changes and for comparison purposes. To account for this, specific scanning heights and angles were identified to allow for parallel scanning of relevant areas. The low-dose diagnostic X-ray provides a non-invasive, safe, and rapid technique for relating vital internal structures with external structures. This capability can be used for the re-evaluation of anatomical coverage required for essential protection while optimising armour design and fit for soldier performance.

Keywords: body armour, low-dose diagnostic X-ray, scanning, vital organ coverage

Procedia PDF Downloads 106
490 Consumer Cognitive Models of Vaccine Attitudes: Behavioral Informed Strategies Promoting Vaccination Policy in Greece

Authors: Halkiopoulos Constantinos, Koutsopoulou Ioanna, Gkintoni Evgenia, Antonopoulou Hera

Abstract:

Immunization appears to be an essential part of health care service in times of pandemics such as COVID-19 and aims not only to protect the health of the population but also the health and sustainability of the economies of the countries affected. It is reported that more than 3.44 billion doses have been administered so far, which corresponds to 45 doses per 100 people. Vaccination programs in various countries have been promoted and accepted by people differently, and therefore they proceeded in different ways and at different speeds, with most countries directing them towards people with vulnerable chronic or recent health statuses. Large-scale restriction measures or lockdowns, personal protection measures such as masks and gloves, and a decrease in leisure and sports activities were also implemented around the world as part of the health protection strategies against the COVID-19 pandemic. This research aims to present an analysis of variations in people’s attitudes towards vaccination based on demographic, social and epidemiological characteristics and health status on the one hand, and perception of health, health satisfaction, pain, and quality of life on the other. A total of 1,500 Greek e-consumers, recruited mainly through social media, voluntarily took part in an online survey. The questionnaires included demographic, social and medical characteristics of the participants, and questions about people’s willingness to be vaccinated and their opinion on whether there should be a vaccine against COVID-19. Other stressor factors were also reported in the questionnaires, such as participants’ loss of someone close due to COVID-19 or staying in home quarantine due to being infected with COVID-19. The WHOQOL-BREF and the Global Psychotrauma Screen (GPS) were used in this study with kind permission from the WHO and the International Society for Traumatic Stress Studies. Attitudes towards vaccination varied significantly in relation to age, level of education, health status and consumer behavior. Health professionals’ attitudes also varied in relation to age, level of education, profession, health status and consumer needs. Vaccines have been the most common technological aid of human civilization so far in the fight against viruses. The results of this study can be used by health managers, digital marketers of pharmaceutical companies and other staff involved in vaccination programs, and for designing health policy immunization strategies during pandemics, in order to achieve positive attitudes towards vaccination and larger populations being vaccinated in shorter periods of time after the outbreak of a pandemic. Health staff need to be trained, aided and supervised to carry out vaccination programs and to be protected through vaccination themselves. Feedback on each country’s vaccination program, setbacks, deficiencies and delays should be addressed and worked out.

Keywords: consumer behavior, cognitive models, vaccination policy, pandemic, Covid-19, Greece

Procedia PDF Downloads 175
489 The Direct Deconvolutional Model in the Large-Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

The utilization of Large Eddy Simulation (LES) has been extensive in turbulence research. LES concentrates on resolving the significant grid-scale motions while representing smaller scales through subfilter-scale (SFS) models. The deconvolution model, among the available SFS models, has proven successful in LES of engineering and geophysical flows. Nevertheless, the thorough investigation of how sub-filter scale dynamics and filter anisotropy affect SFS modeling accuracy remains lacking. The outcomes of LES are significantly influenced by filter selection and grid anisotropy, factors that have not been adequately addressed in earlier studies. This study examines two crucial aspects of LES: Firstly, the accuracy of direct deconvolution models (DDM) is evaluated concerning sub-filter scale (SFS) dynamics across varying filter-to-grid ratios (FGR) in isotropic turbulence. Various invertible filters are employed, including Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The importance of FGR becomes evident as it plays a critical role in controlling errors for precise SFS stress prediction. When FGR is set to 1, the DDM models struggle to faithfully reconstruct SFS stress due to inadequate resolution of SFS dynamics. Notably, prediction accuracy improves when FGR is set to 2, leading to accurate reconstruction of SFS stress, except for cases involving Helmholtz I and II filters. Remarkably high precision, nearly 100%, is achieved at an FGR of 4 for all DDM models. Furthermore, the study extends to filter anisotropy and its impact on SFS dynamics and LES accuracy. By utilizing the dynamic Smagorinsky model (DSM), dynamic mixed model (DMM), and direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 are examined in LES filters. The results emphasize the DDM’s proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. Notably high correlation coefficients exceeding 90% are observed in the a priori study for the DDM’s reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori analysis, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, including velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is evident that as filter anisotropy intensifies, the results of DSM and DMM deteriorate, while the DDM consistently delivers satisfactory outcomes across all filter-anisotropy scenarios. These findings underscore the potential of the DDM framework as a valuable tool for advancing the development of sophisticated SFS models for LES in turbulence research.
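
A one-dimensional sketch of the filtering-and-deconvolution idea behind deconvolution-type SFS models is given below: a resolved field is filtered with an invertible Gaussian kernel and approximately inverted with van Cittert iterations. This is a conceptual illustration only, not the authors' DDM implementation, and the grid, filter width and iteration count are assumptions.

    # Conceptual 1-D illustration: Gaussian LES filter applied spectrally to a
    # periodic field, followed by approximate inversion via van Cittert iterations.
    import numpy as np

    def gaussian_filter_1d(u, dx, delta):
        """Spectral Gaussian filter of width delta on a periodic 1-D field."""
        k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)
        g_hat = np.exp(-(k * delta) ** 2 / 24.0)          # classical LES Gaussian kernel
        return np.real(np.fft.ifft(np.fft.fft(u) * g_hat))

    def van_cittert_deconvolve(u_filtered, dx, delta, iterations=5):
        """Approximate inverse filtering: u* <- u* + (u_filtered - G(u*))."""
        u_star = u_filtered.copy()
        for _ in range(iterations):
            u_star = u_star + (u_filtered - gaussian_filter_1d(u_star, dx, delta))
        return u_star

    # Example: filter and approximately recover a synthetic multi-scale field
    n, L = 256, 2.0 * np.pi
    x = np.linspace(0.0, L, n, endpoint=False)
    dx = L / n
    u = np.sin(x) + 0.3 * np.sin(8.0 * x) + 0.1 * np.sin(20.0 * x)
    u_bar = gaussian_filter_1d(u, dx, delta=4.0 * dx)
    u_rec = van_cittert_deconvolve(u_bar, dx, delta=4.0 * dx)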

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 55
488 Exploring the Relationship Between Helicobacter Pylori Infection and the Incidence of Bronchogenic Carcinoma

Authors: Jose R. Garcia, Lexi Frankel, Amalia Ardeljan, Sergio Medina, Ali Yasback, Omar Rashid

Abstract:

Background: Helicobacter pylori (H. pylori) is a gram-negative, spiral-shaped bacterium that affects nearly half of the population worldwide; humans serve as the principal reservoir. Infection rates usually follow an inverse relationship with hygiene practices and are higher in developing countries than in developed countries. Incidence varies significantly by geographic area, race, ethnicity, age, and socioeconomic status. H. pylori is primarily associated with conditions of the gastrointestinal tract such as atrophic gastritis and duodenal peptic ulcers. Infection is also associated with an increased risk of carcinogenesis as there is evidence to show that H. pylori infection may lead to gastric adenocarcinoma and mucosa-associated lymphoid tissue (MALT) lymphoma. It is suggested that H. pylori infection may be considered as a systemic condition, leading to various novel associations with several different neoplasms such as colorectal cancer, pancreatic cancer, and lung cancer, although further research is needed. Emerging evidence suggests that H. pylori infection may offer protective effects against Mycobacterium tuberculosis as a result of non-specific induction of interferon-γ (IFN-γ). Similar methods of enhanced immunity may affect the development of bronchogenic carcinoma due to the antiproliferative, pro-apoptotic and cytostatic functions of IFN-γ. The purpose of this study was to evaluate the correlation between Helicobacter pylori infection and the incidence of bronchogenic carcinoma. Methods: The data were provided by a Health Insurance Portability and Accountability Act (HIPAA) compliant national database to evaluate patients infected versus patients not infected with H. pylori using ICD-10 and ICD-9 codes. Access to the database was granted by Holy Cross Health, Fort Lauderdale, for the purpose of academic research. Standard statistical methods were used. Results: Between January 2010 and December 2019, the query was analyzed and resulted in 163,224 patients in each of the infected and control groups, respectively. The two groups were matched by age range and CCI score. The incidence of bronchogenic carcinoma was 1.853% with 3,024 patients in the H. pylori group compared to 4.785% with 7,810 patients in the control group. The difference was statistically significant (p < 2.22×10⁻¹⁶), with an odds ratio of 0.367 (95% confidence interval: 0.353-0.383). The two groups were matched by treatment and incidence of cancer, which resulted in a total of 101,739 patients analyzed after this match. The incidence of bronchogenic carcinoma was 1.929% with 1,962 patients in the H. pylori and treatment group compared to 4.618% with 4,698 patients in the control group with treatment. The difference was statistically significant (p < 2.22×10⁻¹⁶), with an odds ratio of 0.403 (95% confidence interval: 0.383-0.425).
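
The 2x2 cohort comparison described above can be reproduced in outline as follows; the counts are those quoted in the abstract, while the reported odds ratios come from the matched database analysis and may therefore differ slightly from a naive recomputation.

    # Sketch of the 2x2 cohort comparison (cancer incidence among H. pylori-infected
    # vs. uninfected patients). Counts are taken from the abstract; the database
    # matching steps are not reproduced here.
    import numpy as np
    from statsmodels.stats.contingency_tables import Table2x2

    cancer_infected, n_infected = 3024, 163224
    cancer_control, n_control = 7810, 163224

    table = np.array([
        [cancer_infected, n_infected - cancer_infected],   # H. pylori group
        [cancer_control, n_control - cancer_control],      # control group
    ])
    t22 = Table2x2(table)
    print(f"incidence (infected): {100 * cancer_infected / n_infected:.3f}%")
    print(f"incidence (control):  {100 * cancer_control / n_control:.3f}%")
    print(f"OR = {t22.oddsratio:.3f}, 95% CI = {t22.oddsratio_confint()}")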

Keywords: bronchogenic carcinoma, helicobacter pylori, lung cancer, pathogen-associated molecular patterns

Procedia PDF Downloads 172
487 Collaborative Procurement in the Pursuit of Net-Zero: A Converging Journey

Authors: Bagireanu Astrid, Bros-Williamson Julio, Duncheva Mila, Currie John

Abstract:

The Architecture, Engineering, and Construction (AEC) sector plays a critical role in the global transition toward sustainable and net-zero built environments. However, the industry faces unique challenges in planning for net-zero while struggling with low productivity, cost overruns and overall resistance to change. Traditional practices fall short due to their inability to meet the requirements for systemic change, especially as governments increasingly demand transformative approaches. Working in silos and rigid hierarchies and a short-term, client-centric approach prioritising immediate gains over long-term benefit stands in stark contrast to the fundamental requirements for the realisation of net-zero objectives. These practices have limited capacity to effectively integrate AEC stakeholders and promote the essential knowledge sharing required to address the multifaceted challenges of achieving net-zero. In the context of built environment, procurement may be described as the method by which a project proceeds from inception to completion. Collaborative procurement methods under the Integrated Practices (IP) umbrella have the potential to align more closely with net-zero objectives. This paper explores the synergies between collaborative procurement principles and the pursuit of net zero in the AEC sector, drawing upon the shared values of cross-disciplinary collaboration, Early Supply Chain involvement (ESI), use of standards and frameworks, digital information management, strategic performance measurement, integrated decision-making principles and contractual alliancing. To investigate the role of collaborative procurement in advancing net-zero objectives, a structured research methodology was employed. First, the study focuses on a systematic review on the application of collaborative procurement principles in the AEC sphere. Next, a comprehensive analysis is conducted to identify common clusters of these principles across multiple procurement methods. An evaluative comparison between traditional procurement methods and collaborative procurement for achieving net-zero objectives is presented. Then, the study identifies the intersection between collaborative procurement principles and the net-zero requirements. Lastly, an exploration of key insights for AEC stakeholders focusing on the implications and practical applications of these findings is made. Directions for future development of this research are recommended. Adopting collaborative procurement principles can serve as a strategic framework for guiding the AEC sector towards realising net-zero. Synergising these approaches overcomes fragmentation, fosters knowledge sharing, and establishes a net-zero-centered ecosystem. In the context of the ongoing efforts to amplify project efficiency within the built environment, a critical realisation of their central role becomes imperative for AEC stakeholders. When effectively leveraged, collaborative procurement emerges as a powerful tool to surmount existing challenges in attaining net-zero objectives.

Keywords: collaborative procurement, net-zero, knowledge sharing, architecture, built environment

Procedia PDF Downloads 59
486 Articulating the Colonial Relation: A Conversation between Afropessimism and Anti-Colonialism

Authors: Thomas Compton

Abstract:

As Decolonialism becomes an important topic in Political Theory, the rupture between the colonized and the colonist has lost attention. Focusing on the anti-colonial activist Mahdi Amel, we shall consider his attention to the permanence of the colonial relation and how it preempts Frank Wilderson’s formulation of (white) culturally necessary Anti-Black violence. Both projects draw attention away from empirical accounts of oppression, instead focusing on the structural relation which precipitates them. Where Amel says that we should stop thinking of the ‘underdeveloped’ as beyond the colonial relation, Wilderson says we should stop thinking of Black rights as having surpassed the role of the slave. However, Amel moves beyond his idol Althusser’s Structuralism toward a formulation of the colonial relation as a source of domination. Our analysis will take a Lacanian turn in considering how this non-relation was formulated as a relation and how this space of negativity became an ideological opportunity for Colonial domination. Wilderson’s work shall problematise this as we conclude with his criticisms of Structural accounts for the failure to consider how Black social death exists as more than necessity but as a site of white desire. Amel, a Lebanese activist and scholar (re)discovered by Hicham Safieddine, argues colonialism is more than the theft of land, but instead a privatization of collective property and a form of investment which (re)produces the status of the capitalist in spaces ‘outside’ the market. Although Amel was a true Marxist-Leninist, who exposited the economic determinacy of the Colonial Mode of Production, we are reading this account through Alenka Zupančič’s reformulation of the ‘invisible hand job of the market’. Amel points to the signifier ‘underdeveloped’ as buttressed on a pre-colonial epistemic break, as the Western investor (debt collector) sees the (post?)colony’s narcissistic image. However, the colony can never become a site of class conflict, as the workers are not unified but exist between two countries. In industry, they are paid in Colonial subjectivisation, the promise of market (self)pleasure; at home, they are refugees. They are not, as Wilderson states, in the permanent social death of the slave, but they are less than the white worker. This is formulated as citizen (white), non-citizen (colonized), anti-citizen (Black/slave). Here we may also think of how indentured Indians were used as instruments of colonial violence. Wilderson’s aphorism “there is no analogy to anti-Black violence” lays bare his fundamental opposition between colonial and specifically anti-Black violence. It is not only that the debt collector, landowner, or other owners of production pleasure themselves as if their hand is invisible. The absolute negativity between colony and colonized provides a new frontier for desire, the development of a colonial mode of production. An invention inside the colonial structure that is generative of class substitution. We shall explore how Amel ignores the role of the slave and how Wilderson forecloses the history of African anti-colonialism.

Keywords: afropessimism, fanon, marxism, postcolonialism

Procedia PDF Downloads 138
484 Sea Surface Temperature Trend over the Arabian Sea and Its Influence on the South West Monsoon Rainfall Variability over Sri Lanka

Authors: Sherly Shelton, Zhaohui Lin

Abstract:

In recent decades, the inter-annual variability of summer precipitation over the India and Sri Lanka has intensified significantly with an increased frequency of both abnormally dry and wet summers. Therefore prediction of the inter-annual variability of summer precipitation is crucial and urgent for water management and local agriculture scheduling. However, none of the hypotheses put forward so far could understand the relationship to monsoon variability and related factors that affect to the South West Monsoon (SWM) variability in Sri Lanka. This study focused to identify the spatial and temporal variability of SWM rainfall events from June to September (JJAS) over Sri Lanka and associated trend. The monthly rainfall records covering 1980-2013 over the Sri Lanka are used for 19 stations to investigate long-term trends in SWM rainfall over Sri Lanka. The linear trends of atmospheric variables are calculated to understand the drivers behind the changers described based on the observed precipitation, sea surface temperature and atmospheric reanalysis products data for 34 years (1980–2013). Empirical orthogonal function (EOF) analysis was applied to understand the spatial and temporal behaviour of seasonal SWM rainfall variability and also investigate whether the trend pattern is the dominant mode that explains SWM rainfall variability. The spatial and stations based precipitation over the country showed statistically insignificant decreasing trends except few stations. The first two EOFs of seasonal (JJAS) mean of rainfall explained 52% and 23 % of the total variance and first PC showed positive loadings of the SWM rainfall for the whole landmass while strongest positive lording can be seen in western/ southwestern part of the Sri Lanka. There is a negative correlation (r ≤ -0.3) between SMRI and SST in the Arabian Sea and Central Indian Ocean which indicate that lower temperature in the Arabian Sea and Central Indian Ocean are associated with greater rainfall over the country. This study also shows that consistently warming throughout the Indian Ocean. The result shows that the perceptible water over the county is decreasing with the time which the influence to the reduction of precipitation over the area by weakening drawn draft. In addition, evaporation is getting weaker over the Arabian Sea, Bay of Bengal and Sri Lankan landmass which leads to reduction of moisture availability required for the SWM rainfall over Sri Lanka. At the same time, weakening of the SST gradients between Arabian Sea and Bay of Bengal can deteriorate the monsoon circulation, untimely which diminish SWM over Sri Lanka. The decreasing trends of moisture, moisture transport, zonal wind, moisture divergence with weakening evaporation over Arabian Sea, during the past decade having an aggravating influence on decreasing trends of monsoon rainfall over the Sri Lanka.

Keywords: Arabian Sea, moisture flux convergence, South West Monsoon, Sri Lanka, sea surface temperature

Procedia PDF Downloads 119
483 Insights into Child Malnutrition Dynamics through the Lens of Women’s Empowerment in India

Authors: Bharti Singh, Shri K. Singh

Abstract:

Child malnutrition is a multifaceted issue that transcends geographical boundaries. Malnutrition not only stunts physical growth but also leads to a spectrum of morbidities and child mortality. It is one of the leading causes of death (~50%) among children under age five. Despite economic progress and advancements in healthcare, child malnutrition remains a formidable challenge for India. The objective is to investigate the impact of women's empowerment on child nutrition outcomes in India from 2006 to 2021. First, a composite index of women's empowerment was constructed using Confirmatory Factor Analysis (CFA), a rigorous technique that validates the measurement model by assessing how well observed variables represent latent constructs. This approach ensures the reliability and validity of the empowerment index. Secondly, kernel density plots were utilised to visualise the distribution of key nutritional indicators, such as stunting, wasting, and overweight. These plots offer insights into the shape and spread of data distributions, aiding in understanding the prevalence and severity of malnutrition. Thirdly, linear polynomial graphs were employed to analyse how nutritional parameters evolved with the child's age. This technique enables the visualisation of trends and patterns over time, allowing for a deeper understanding of nutritional dynamics during different stages of childhood. Lastly, multilevel analysis was conducted to identify vulnerable levels, including State-level, PSU-level, and household-level factors impacting undernutrition. This approach accounts for hierarchical data structures and allows for the examination of factors at multiple levels, providing a comprehensive understanding of the determinants of child malnutrition. Overall, the utilisation of these statistical methodologies enhances the transparency and replicability of the study by providing clear and robust analytical frameworks for data analysis and interpretation. Our study reveals that NFHS-4 and NFHS-5 exhibit an equal density of severely stunted cases. NFHS-5 indicates a limited decline in wasting among children aged five, while the density of severely wasted children remains consistent across NFHS-3, 4, and 5. In 2019-21, women with higher empowerment had a lower risk of their children being undernourished (regression coefficient = -0.10***; confidence interval [-0.18, -0.04]). Gender dynamics also play a significant role, with male children exhibiting a higher susceptibility to undernourishment. Multilevel analysis suggests household-level vulnerability (intra-class correlation = 0.21), highlighting the need to address child undernutrition at the household level.
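
The household-level clustering reported above (intra-class correlation of 0.21) can be illustrated with a random-intercept model as sketched below; the variable names are placeholders, and a linear mixed model is used here for simplicity, whereas the study's multilevel model of a binary outcome may differ.

    # Illustrative sketch: random-intercept model for a child nutrition outcome and
    # the household-level intra-class correlation (ICC). Column names such as 'haz',
    # 'empowerment_index', 'child_sex' and 'household_id' are placeholders.
    import statsmodels.formula.api as smf

    def household_icc(df):
        model = smf.mixedlm("haz ~ empowerment_index + C(child_sex)",
                            data=df, groups=df["household_id"])
        result = model.fit()
        var_household = result.cov_re.iloc[0, 0]     # random-intercept (between-household) variance
        var_residual = result.scale                  # within-household variance
        icc = var_household / (var_household + var_residual)
        return result, icc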

Keywords: child nutrition, India, NFHS, women’s empowerment

Procedia PDF Downloads 22
483 Study on Metabolic and Mineral Balance, Oxidative Stress and Cardiovascular Risk Factors in Type 2 Diabetic Patients on Different Therapy

Authors: E. Nemes-Nagy, E. Fogarasi, M. Croitoru, A. Nyárádi, K. Komlódi, S. Pál, A. Kovács, O. Kopácsy, R. Tripon, Z. Fazakas, C. Uzun, Z. Simon-Szabó, V. Balogh-Sămărghițan, E. Ernő Nagy, M. Szabó, M. Tilinca

Abstract:

Intense oxidative stress, increased glycated hemoglobin and mineral imbalance represent risk factors for complications in diabetic patients. Cardiovascular complications, including nephropathy, are the most common in these patients. This study was conducted in 2015 at the Procardia Laboratory in Tîrgu Mureș, Romania, on 40 type 2 diabetic adults. Routine biochemical tests were performed on the Konelab 20XTi analyzer (serum glucose, total cholesterol, LDL and HDL cholesterol, triglyceride, creatinine, urea). We also measured serum uric acid, magnesium and calcium concentrations by photometric procedures, potassium, sodium and chloride by ion-selective electrode, and chromium by atomic absorption spectrometry in a group of patients. Glycated hemoglobin (HbA1c) was determined by reflectometry. Urine analysis was performed using the HandUReader equipment. The level of oxidative stress was assessed by serum malondialdehyde (MDA) determination using the thiobarbituric acid reactive substances method. The MDRD (Modification of Diet in Renal Disease) formula was applied for the calculation of the creatinine-derived glomerular filtration rate. GraphPad InStat software was used for statistical analysis of the data. The diabetic subjects included in the study presented high MDA concentrations, showing intense oxidative stress. Calcium was deficient in 5% of the patients, and chromium deficiency was present in 28%. The atherogenic cholesterol fraction was elevated in 13% of the patients. A positive correlation was found between creatinine and MDRD-creatinine values (p<0.0001), and 68% of the patients presented increased creatinine values. The majority of the diabetic patients had good control of their diabetes, with optimal HbA1c values; 35% of them presented fasting serum glucose over 120 mg/dl and 18% had glucosuria. Intense oxidative stress and mineral deficiencies can increase the risk of cardiovascular complications in diabetic patients in spite of their good metabolic balance. More than two thirds of the patients presented biochemical signs of nephropathy; cystatin C measurement and microalbuminuria could reveal the kidney disorder better, but glomerular filtration rate calculation formulas are also useful for the evaluation of renal function.
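The MDRD-derived glomerular filtration rate mentioned above is a standard calculation; a minimal sketch of the common four-variable (IDMS-traceable) form is given below. The abstract does not state which variant of the equation was used, so this is illustrative only.

```python
def egfr_mdrd(serum_creatinine_mg_dl: float, age_years: float,
              female: bool, black: bool = False) -> float:
    """Estimated GFR (mL/min/1.73 m^2) using the 4-variable MDRD study equation
    (IDMS-traceable form with the 175 coefficient)."""
    egfr = 175.0 * serum_creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Example: a 62-year-old woman with serum creatinine of 1.3 mg/dL (hypothetical values).
print(f"{egfr_mdrd(1.3, 62, female=True):.0f} mL/min/1.73 m^2")
```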

Keywords: cardiovascular risk, homocysteine, malondialdehyde, metformin, minerals, type 2 diabetes, vitamin B12

Procedia PDF Downloads 304
482 Evaluation of Differential Interaction between Flavanols and Saliva Proteins by Diffusion and Precipitation Assays on Cellulose Membranes

Authors: E. Obreque-Slier, V. Contreras-Cortez, R. López-Solís

Abstract:

Astringency is a drying, roughing, and sometimes puckering sensation that is experienced on the various oral surfaces during or immediately after tasting foods. This sensation has been closely related to the interaction and precipitation between salivary proteins and polyphenols, specifically flavanols or proanthocyanidins. In addition, the type and concentration of proanthocyanidins significantly influence the intensity of astringency and, consequently, the protein/proanthocyanidin interaction. However, most studies are based on the interaction between saliva and highly complex polyphenols, without considering the effect of the monomeric proanthocyanidins present in different foods. The aim of this study was to evaluate the effect of different monomeric proanthocyanidins on the diffusion and precipitation of salivary proteins. Thus, solutions of catechin, epicatechin, epigallocatechin and gallocatechin (0, 2.0, 4.0, 6.0, 8.0 and 10 mg/mL) were mixed with human saliva (1:1 v/v). After incubation for 5 min at room temperature, 15 µL aliquots of each mix were dotted on a cellulose membrane and allowed to dry spontaneously at room temperature. The membrane was fixed, rinsed and stained for proteins with Coomassie blue. After exhaustive washing in 7% acetic acid, the membrane was rinsed once in distilled water and dried under a heat lamp. Both the diffusion area and the stain intensity of the protein spots served as semi-qualitative estimates of the protein-tannin interaction (diffusion test). The rest of each whole saliva-phenol solution mixture from the diffusion assay was centrifuged, and 15-µL aliquots from each of the supernatants were dotted on a cellulose membrane. The membrane was processed for protein staining as indicated above. The blue-stained area of protein distribution corresponding to each of the extract dilution-saliva mixtures was quantified with ImageJ 1.45 software. Each of the assays was performed at least three times. Initially, salivary proteins display a biphasic distribution on cellulose membranes; that is, when aliquots of saliva are placed on absorbing cellulose membranes and free diffusion of saliva is allowed to occur, a non-diffusible protein fraction becomes surrounded by highly diffusible salivary proteins. In effect, once diffusion has ended, a protein-binding dye shows an intense blue-stained, roughly circular area close to the spotting site (non-diffusible fraction, NDF) surrounded by a weaker blue-stained outer band (diffusible fraction, DF). The diffusion test showed that epicatechin caused the complete disappearance of the DF from saliva at 2 mg/mL. Epigallocatechin and gallocatechin caused a similar effect at 4 mg/mL, while catechin produced the same effect at 8 mg/mL. In the precipitation test, epicatechin and gallocatechin generated evident precipitates at the bottom of the Eppendorf tubes. In summary, the flavanol type differentially affects the diffusion and precipitation of salivary proteins, which would affect the sensation of astringency perceived by consumers.
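The quantification of the blue-stained protein area, performed in the study with ImageJ 1.45, can be approximated with a simple threshold-and-count procedure. The sketch below, using scikit-image on a hypothetical scan of the membrane, illustrates the idea only; the file name and pixel calibration are assumptions, not the authors' workflow.

```python
import numpy as np
from skimage import io, color, filters

# Hypothetical scan of the cellulose membrane after Coomassie staining.
img = io.imread("membrane_scan.png")
gray = color.rgb2gray(img) if img.ndim == 3 else img.astype(float)

# Coomassie-stained protein appears darker than the background:
# threshold with Otsu's method and count the dark pixels.
thresh = filters.threshold_otsu(gray)
stained = gray < thresh

pixel_area_mm2 = 0.01                          # assumed calibration: area of one pixel
stained_area = stained.sum() * pixel_area_mm2
mean_intensity = 1.0 - gray[stained].mean()    # crude stain-intensity proxy

print(f"Stained area: {stained_area:.1f} mm^2, mean intensity: {mean_intensity:.2f}")
```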

Keywords: astringency, polyphenols, tannins, tannin-protein interaction

Procedia PDF Downloads 183
481 The Quantum Theory of Music and Languages

Authors: Mballa Abanda Serge, Henda Gnakate Biba, Romaric Guemno Kuate, Akono Rufine Nicole, Petfiang Sidonie, Bella Sidonie

Abstract:

The main hypotheses proposed around the definition of the syllable and of music, and around the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies the interest: the debate raises questions that are at the heart of theories of language. It is an inventive, original and innovative research thesis. It is a contribution to the theoretical, musicological, ethnomusicological and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, translation automation and artificial intelligence. When this theory is applied to any text of a folk song in a tone language, one can piece together not only the exact melody, rhythm and harmonies of that song, as if it were known in advance, but also the exact speech of that language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal and random music. As experimentation confirming the theorization, a semi-digital, semi-analog application was designed which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software that allows me to collect the data extracted from my mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you instruct the machine to produce a melody in blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.

Keywords: music, entanglement, language, science

Procedia PDF Downloads 62
480 Exploring Neural Responses to Urban Spaces in Older People Using Mobile EEG

Authors: Chris Neale, Jenny Roe, Peter Aspinall, Sara Tilley, Steve Cinderby, Panos Mavros, Richard Coyne, Neil Thin, Catharine Ward Thompson

Abstract:

This research directly assesses older people’s neural activation in response to walking through a changing urban environment, as measured by electroencephalography (EEG). As the global urban population is predicted to grow, there is a need to understand the role that the urban environment may play in the health of its older inhabitants. There is a large body of evidence suggesting green space has a beneficial restorative effect, but this effect remains largely understudied both in older people and through neuroimaging assessment. For this study, participants aged 65 years and over were required to walk between a busy urban built environment and a green urban environment, in a counterbalanced design, wearing an Emotiv EEG headset to record real-time neural responses to place. Here we report on the outputs for these responses derived both from the proprietary Affectiv Suite software, which creates emotional parameters with a real-time value assigned to them, and from the raw EEG output, focusing on alpha and beta changes, associated with changes in relaxation and attention respectively. Each walk lasted around fifteen minutes and was undertaken at the natural walking pace of the participant. The two walking environments were compared using a form of high dimensional correlated component regression (CCR) on difference data between the urban busy and urban green spaces. For the Emotiv parameters, results showed that levels of ‘engagement’ increased in the urban green space (with a subsequent decrease in the urban busy built space) whereas levels of ‘excitement’ increased in the urban busy environment (with a subsequent decrease in the urban green space). In the raw data, low beta (13 – 19 Hz) increased in the urban busy space with a subsequent decrease shown in the green space, similar to the pattern shown with the ‘excitement’ result. Alpha activity (9 – 13 Hz) shows a correlation with low beta, but not with the dependent change measure in the regression model. This suggests that alpha is acting as a suppressor variable. These results suggest that there are neural signatures associated with the experience of urban spaces which may reflect the age of the cohort or the spatiality of the settings themselves. These are shown both in the outputs of the proprietary software and in the raw EEG output. Busy built urban spaces appear to induce neural activity associated with vigilance and low-level stress, while this effect is ameliorated in the urban green space, potentially suggesting a beneficial effect on attentional capacity in urban green space in this participant group. The interaction between low beta and alpha requires further investigation, in particular the role of alpha in this relationship.
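The alpha (9-13 Hz) and low-beta (13-19 Hz) measures reported here are band powers of the EEG signal. The sketch below shows how such band power can be computed from a single channel with Welch's method; the sampling rate and the synthetic channel are assumptions, since the study's exact processing pipeline is not specified in the abstract.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal: np.ndarray, fs: float, fmin: float, fmax: float) -> float:
    """Integrate the Welch power spectral density between fmin and fmax (Hz)."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(4 * fs))
    mask = (freqs >= fmin) & (freqs <= fmax)
    return np.trapz(psd[mask], freqs[mask])

fs = 128.0                                                     # assumed sampling rate
eeg = np.random.default_rng(1).standard_normal(int(fs * 60))   # hypothetical 1-min channel

alpha = band_power(eeg, fs, 9.0, 13.0)       # alpha band as defined in the study
low_beta = band_power(eeg, fs, 13.0, 19.0)   # low-beta band as defined in the study
print(f"alpha power: {alpha:.3f}, low-beta power: {low_beta:.3f}")
```

Difference scores between the two walking environments could then be formed per participant and fed to the regression step described in the abstract.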

Keywords: ageing, EEG, green space, urban space

Procedia PDF Downloads 204
479 Time-Domain Nuclear Magnetic Resonance as a Potential Analytical Tool to Assess Thermisation in Ewe's Milk

Authors: Alessandra Pardu, Elena Curti, Marco Caredda, Alessio Dedola, Margherita Addis, Massimo Pes, Antonio Pirisi, Tonina Roggio, Sergio Uzzau, Roberto Anedda

Abstract:

Some of the artisanal cheese products of European countries certified as PDO (Protected Designation of Origin) are made from raw milk. To recognise potential frauds (e.g. pasteurisation or thermisation of milk destined for raw-milk cheese production), the alkaline phosphatase (ALP) assay is currently applied only for pasteurisation, although it is known to have notable limitations for the validation of the ALP enzymatic state in non-bovine milk. Frauds are known to have a considerable impact on customers and certifying institutions, sometimes resulting in damage to the product image and potential economic losses for cheesemaking producers. Robust, validated and unequivocal analytical methods are therefore needed to allow food control and security bodies to recognise a potential fraud. In an attempt to develop a new reliable method to overcome this issue, Time-Domain Nuclear Magnetic Resonance (TD-NMR) spectroscopy has been applied in the work described here. Fresh daily milk was analysed raw (680.00 µL in each 10-mm NMR glass tube), at least in triplicate. Thermally treated samples were also produced by placing each NMR tube of fresh raw milk in water pre-heated at temperatures from 68°C up to 72°C for up to 3 min, with continuous agitation, and quench-cooling it to 25°C in a water-and-ice bath. Raw and thermally treated samples were analysed in terms of 1H T2 transverse relaxation times with a CPMG sequence (recycle delay: 6 s, interpulse spacing: 0.05 ms, 8000 data points), and quasi-continuous distributions of T2 relaxation times were obtained by CONTIN analysis. In line with previous data collected by high-field NMR techniques, a decrease in the spin-spin relaxation constant T2 of the predominant 1H population was detected in heat-treated milk as compared to raw milk. The decrease of the T2 parameter is consistent with changes in chemical exchange and diffusive phenomena, likely associated with changes in milk protein (i.e. whey protein and casein) arrangement promoted by heat treatment. Furthermore, the experimental data suggest that the molecular alterations are strictly dependent on the specific heat treatment conditions (temperature/time). Such molecular variations in milk, which are likely transferred to cheese during cheesemaking, highlight the possibility of extending the TD-NMR technique directly to cheese in order to develop a method for assessing fraud related to the use of a milk thermal treatment in PDO raw-milk cheese. The results suggest that TD-NMR assays might pave a new way to the detailed characterisation of heat treatments of milk.
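The T2 values extracted from the CPMG decay can be illustrated with a simple exponential fit. The sketch below fits a mono-exponential model to a hypothetical echo train with scipy; the study itself used a quasi-continuous CONTIN inversion, which this toy example does not reproduce, and the echo spacing and T2 value used to generate the data are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def cpmg_decay(t, amplitude, t2):
    """Mono-exponential CPMG decay model."""
    return amplitude * np.exp(-t / t2)

# Hypothetical echo train: 8000 points, assumed echo spacing of 0.1 ms.
echo_spacing = 0.1e-3                          # seconds
t = np.arange(8000) * echo_spacing             # echo times
true_t2 = 0.12                                 # assumed T2 of 120 ms for the toy data
rng = np.random.default_rng(2)
signal = 100.0 * np.exp(-t / true_t2) + rng.normal(0, 0.5, t.size)

popt, _ = curve_fit(cpmg_decay, t, signal, p0=(signal[0], 0.1))
print(f"Fitted T2: {popt[1] * 1000:.1f} ms")
```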

Keywords: cheese fraud, milk, pasteurisation, TD-NMR

Procedia PDF Downloads 224
478 Impact of Helicobacter pylori Infection on Colorectal Adenoma-Colorectal Carcinoma Sequence

Authors: Jannis Kountouras, Nikolaos Kapetanakis, Stergios A. Polyzos, Apostolis Papaeftymiou, Panagiotis Katsinelos, Ioannis Venizelos, Christina Nikolaidou, Christos Zavos, Iordanis Romiopoulos, Elena Tsiaousi, Evangelos Kazakos, Michael Doulberis

Abstract:

Background & Aims: Helicobacter pylori infection (Hp-I) has been recognized as a substantial risk agent involved in gastrointestinal (GI) tract oncogenesis by stimulating cancer stem cells (CSCs), oncogenes and immune surveillance processes, and by triggering GI microbiota dysbiosis. We aimed to investigate the possible involvement of active Hp-I in the sequence: chronic inflammation – adenoma – colorectal cancer (CRC) development. Methods: Four pillars were investigated: (i) endoscopic and conventional histological examinations of patients with CRC and colorectal adenomas (CRA) versus controls to detect the presence of active Hp-I; (ii) immunohistochemical determination of the presence of Hp; expression of CD44, an indicator of CSCs and/or bone marrow-derived stem cells (BMDSCs); and expression of the oncogene Ki67 and the anti-apoptotic Bcl-2 protein; (iii) expression of CD45, an indicator of local immune surveillance (mainly assessing T and B lymphocytes locally); and (iv) correlation of the studied parameters with the presence or absence of Hp-I. Results: Among 50 patients with CRC, 25 with CRA, and 10 controls, a significantly higher presence of Hp-I was found in the CRA (68%) and CRC (84%) groups compared with controls (30%). The presence of Hp-I with accompanying immunohistochemical expression of CD44 in biopsy specimens was revealed in a high proportion of patients with CRA associated with moderate/severe dysplasia (88%) and of CRC patients with a moderate/severe degree of malignancy (91%). Comparable results were also obtained for Ki67, Bcl-2 and CD45 immunohistochemical expression. Concluding Remarks: Hp-I seems to be involved in the sequence CRA – dysplasia – CRC, similarly to upper GI tract oncogenesis, through several pathways. Beyond Hp-I-associated insulin resistance, the major underlying mechanism of the metabolic syndrome (MetS) that increases the risk of colorectal neoplasms, as implied by other Hp-I-related MetS pathologies such as non-alcoholic fatty liver disease and upper GI cancer, the disturbance of the normal GI microbiota (i.e., dysbiosis) and the formation of an irritative biofilm could contribute to perpetual inflammatory damage of the upper GI tract and colonic mucosa, stimulating CSCs or recruiting BMDSCs and affecting oncogenes and immune surveillance processes. Further large-scale studies with a pathophysiological perspective are necessary to demonstrate this relationship in depth.

Keywords: Helicobacter pylori, colorectal cancer, colorectal adenomas, gastrointestinal oncogenesis

Procedia PDF Downloads 128
477 Predictive Semi-Empirical NOx Model for Diesel Engine

Authors: Saurabh Sharma, Yong Sun, Bruce Vernham

Abstract:

Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for each condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to address this issue. NOx formation is highly dependent on the burned-gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against the measured NOx. This limits the prediction of purely empirical models to the region where they have been calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlation. The model is developed based on steady-state data collected over the entire operating region of the engine and on a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injected (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e. from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered when developing the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases is tested for different engine configurations over a large span of speed and load points. Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing and Variable Valve Timing (VVT), are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions with different ambient conditions. The advantages of high accuracy and robustness at different operating conditions, low computational time and the lower number of data points required for calibration establish a platform on which the model-based approach can be used for engine calibration and development. Moreover, the focus of this work is towards establishing a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), NO2/NOx ratio, etc.
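The statistical step described above, combining physical in-cylinder parameters with empirical correlation through individual and ensemble machine-learning methods, can be sketched as follows. The feature names, the file and the gradient-boosting choice are illustrative assumptions rather than the authors' exact pipeline.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical steady-state dataset: in-cylinder combustion parameters from the
# combustion model plus operating conditions, with measured NOx as the target.
df = pd.read_csv("steady_state_points.csv")     # hypothetical file
features = ["burned_zone_temp", "unburned_zone_temp", "o2_burned_zone",
            "trapped_fuel_mass", "egr_rate", "injection_timing"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["nox_ppm"], test_size=0.2, random_state=42)

# One member of the ensemble family mentioned in the abstract.
model = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05, max_depth=4)
model.fit(X_train, y_train)

print(f"R^2 on held-out points: {r2_score(y_test, model.predict(X_test)):.3f}")
```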

Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical

Procedia PDF Downloads 100
476 Stable Diffusion, Context-to-Motion Model to Augmenting Dexterity of Prosthetic Limbs

Authors: André Augusto Ceballos Melo

Abstract:

This work addresses design that facilitates the recognition of congruent prosthetic movements: context-to-motion translations guided by images, verbal prompts, and the user's nonverbal communication, such as facial expressions, gestures, paralinguistics, scene context and object recognition. These cues contribute to this process, and the approach can also be applied to other tasks, such as walking, positioning prosthetic limbs as assistive technology driven by gestures, sound codes, signs, facial and body expressions, and scene context. The context-to-motion model is a machine learning approach designed to improve the control and dexterity of prosthetic limbs. It works by using sensory input from the prosthetic limb to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. This can help to improve the performance of the prosthetic limb and make it easier for the user to perform a wide range of tasks. There are several key benefits to using the context-to-motion model for prosthetic limb control. First, it can help to improve the naturalness and smoothness of prosthetic limb movements, which can make them more comfortable and easier to use. Second, it can help to improve the accuracy and precision of prosthetic limb movements, which can be particularly useful for tasks that require fine motor control. Finally, the context-to-motion model can be trained using a variety of different sensory inputs, which makes it adaptable to a wide range of prosthetic limb designs and environments. Stable diffusion is a machine learning method that can be used to improve the control and stability of movements in robotic and prosthetic systems. It works by using sensory feedback to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. One key aspect of stable diffusion is that it is designed to be robust to noise and uncertainty in the sensory feedback. This means that it can continue to produce stable, smooth movements even when the sensory data is noisy or unreliable. To implement stable diffusion in a robotic or prosthetic system, it is typically necessary first to collect a dataset of examples of the desired movements. This dataset can then be used to train a machine learning model to predict the appropriate control inputs for a given set of sensory observations. Once the model has been trained, it can be used to control the robotic or prosthetic system in real time: the model receives sensory input from the system and uses it to generate control signals that drive the motors or actuators responsible for moving the system. Overall, the use of the context-to-motion model has the potential to significantly improve the dexterity and performance of prosthetic limbs, making them more useful and effective for a wide range of users. By linking hand gestures and body language to communication and social interaction, it offers users a possibility to maximize their quality of life, social interaction and gesture communication.
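The training loop implied by the last steps, learning a mapping from sensory observations to control signals from a dataset of example movements, can be sketched with a small regression network. The tensor shapes, layer sizes and synthetic data below are assumptions; this is a toy illustration of the supervised mapping, not the model described in the abstract.

```python
import torch
import torch.nn as nn

# Hypothetical dataset of (sensor reading -> control signal) pairs collected from
# example movements: 12 sensor channels mapped to 6 actuator commands.
sensors = torch.randn(1024, 12)
controls = torch.randn(1024, 6)

# Small regression network mapping sensory observations to control inputs.
model = nn.Sequential(nn.Linear(12, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 6))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(sensors), controls)
    loss.backward()
    optimizer.step()

# At run time, the trained model turns live sensor input into actuator commands.
with torch.no_grad():
    command = model(torch.randn(1, 12))
print(command.shape)   # torch.Size([1, 6])
```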

Keywords: stable diffusion, neural interface, smart prosthetic, augmenting

Procedia PDF Downloads 87
475 Oxidative Stability of Corn Oil Supplemented with Natural Antioxidants from Cypriot Salvia fruticosa Extracts

Authors: Zoi Konsoula

Abstract:

Vegetable oils, which are rich in polyunsaturated fatty acids, are susceptible to oxidative deterioration. The lipid oxidation of oils results in the production of rancid odors and unpleasant flavors as well as the reduction of their nutritional quality and safety. Traditionally, synthetic antioxidants are employed to retard or prevent the oxidative deterioration of oils. However, these compounds are suspected to pose health hazards. Consequently, there has recently been a growing interest in the use of natural antioxidants of plant origin for improving the oxidative stability of vegetable oils. The genus Salvia (sage) is well known for its antioxidant activity. In the Cypriot flora, Salvia fruticosa is the most widely distributed indigenous Salvia species. In the present study, extracts were prepared from S. fruticosa aerial parts using various solvents, and their antioxidant activity was evaluated by the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging and Ferric Reducing Antioxidant Power (FRAP) methods. Moreover, the antioxidant efficacy of all extracts was assessed using corn oil as the oxidation substrate, which was subjected to accelerated aging (60 °C, 30 days). The progress of lipid oxidation was monitored by the determination of the peroxide, p-anisidine, conjugated diene and conjugated triene values according to the official AOCS methods. Synthetic antioxidants (butylated hydroxytoluene, BHT, and butylated hydroxyanisole, BHA) were employed at their legal limit (200 ppm) as reference. Finally, the total phenolic content (TPC) and total flavonoid content (TFC) of the prepared extracts were measured by the Folin-Ciocalteu and aluminum-flavonoid complex methods, respectively. The results of the present study revealed that although all sage extracts prepared from S. fruticosa exhibited antioxidant activity, the highest antioxidant capacity was recorded in the methanolic extract, followed by the extract prepared with non-toxic, food-grade ethanol. Furthermore, a positive correlation between the antioxidant potency and the TPC of the extracts was observed in all cases. Interestingly, sage extracts prevented lipid oxidation in corn oil at all concentrations tested; however, the magnitude of stabilization was dose-dependent. More specifically, results from the different oxidation parameters were in agreement with each other and indicated that the protection offered by the various extracts depended on their TPC. Among the extracts, the methanolic extract was the most potent in inhibiting oxidative deterioration. Finally, both the methanolic and the ethanolic sage extracts at a concentration of 1000 ppm exerted a stabilizing effect comparable to that of the reference synthetic antioxidants. Based on the results of the present study, sage extracts could be used for minimizing or preventing lipid oxidation in oils and thus prolonging their shelf-life. In particular, given that the use of a dietary alcohol such as ethanol is preferable to methanol in food applications, the ethanolic extract prepared from S. fruticosa could be used as an alternative natural antioxidant.
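The DPPH radical-scavenging activity referred to above is conventionally expressed as a percentage inhibition of absorbance. A minimal sketch of that standard calculation is shown below, with hypothetical absorbance readings; the abstract does not report the raw values.

```python
def dpph_inhibition(abs_control: float, abs_sample: float) -> float:
    """Radical scavenging activity (%) = (A_control - A_sample) / A_control * 100."""
    return (abs_control - abs_sample) / abs_control * 100.0

# Hypothetical absorbance readings for the DPPH assay (typically read around 517 nm).
a_control = 0.820          # DPPH solution without extract
a_sample = 0.305           # DPPH solution with sage extract
print(f"DPPH inhibition: {dpph_inhibition(a_control, a_sample):.1f}%")
```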

Keywords: antioxidant activity, corn oil, oxidative deterioration, sage

Procedia PDF Downloads 177
474 Survey of Prevalence of Noise Induced Hearing Loss in Hawkers and Shopkeepers in Noisy Areas of Mumbai City

Authors: Hitesh Kshayap, Shantanu Arya, Ajay Basod, Sachin Sakhuja

Abstract:

This study was undertaken to measure overall noise levels in different locations/zones and to estimate the prevalence of noise-induced hearing loss in hawkers and shopkeepers in Mumbai, India. The Hearing Test developed by the American Academy of Otolaryngology, translated from English to Hindi and validated, was employed as a screening tool for hearing sensitivity. The tool has 14 items, each scored on a scale of 0, 1, 2 and 3. A score of 6 or above indicated some or definite difficulty in hearing in daily activities, while a lower score indicated lesser difficulty or normal hearing. Subjects who scored 6 or above, or who had tinnitus, underwent hearing evaluation by pure tone audiometry. Environmental noise levels were measured from morning to evening at the roadside at different locations/hawking zones in Mumbai using an SLM9 Agronic 8928B & K type digital sound level meter, in dB(A). The maximum noise level of 100.0 dB(A) was recorded during evening hours from Chattrapati Shivaji Terminal to Colaba, with an overall noise level of 79.0 dB(A); the minimum noise level in this area was 72.6 dB(A) at any given point of time. Further, 54.6 dB(A) was recorded as the minimum noise level during 8-9 am at Sion Circle. The commencement of flyovers with two-tier traffic, skywalks, the increasing volume of vehicular traffic on the roads, high-rise buildings and other commercial and urbanization activities in Mumbai have most probably increased the overall environmental noise levels. Trees, which acted as noise absorbers, have been cut owing to rapid construction. The study involved 100 participants in the age range of 18 to 40 years, with a mean age of 29 years (S.D. = 6.49). The 46 participants who had tinnitus or obtained a score of 6 or above underwent pure tone audiometry, and the prevalence of hearing loss in hawkers and shopkeepers was found to be 19% (10% hawkers and 9% shopkeepers). The results indicate that the 29 (42.6%) out of 64 hawkers and 17 (47.2%) out of 36 shopkeepers who underwent PTA showed no significant difference in the percentage of noise-induced hearing loss. The results also reveal that, among the participants who exhibited tinnitus, 19 (41.30%) out of 46 had mild to moderate sensorineural hearing loss between 3000 Hz and 6000 Hz. The pure tone audiogram pattern revealed hearing loss at 4000 Hz and 6000 Hz, while hearing at adjacent frequencies was nearly normal. Seven hawkers and eight shopkeepers had a mild notch, while three hawkers and one shopkeeper had a moderate notch. It is thus inferred that tinnitus is a strong indicator of the presence of hearing loss and that a 4/6 kHz notch is a strong marker of road/traffic/environmental noise as an occupational hazard for hawkers and shopkeepers. Mass awareness of these occupational hazards, regular hearing check-ups and early intervention, along with sustainable development combined with social and urban forestry, can help in this regard.
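An "overall" level from repeated dB(A) readings taken through the day is usually computed as an equivalent continuous level (Leq), i.e. an energy average rather than an arithmetic mean. The abstract does not state how its overall figure was derived, so the sketch below is only an illustration of that standard calculation, with hypothetical readings.

```python
import numpy as np

def leq(levels_dba) -> float:
    """Energy-average (equivalent continuous) level of equal-duration dB(A) readings:
    Leq = 10 * log10(mean(10^(L_i / 10)))."""
    levels = np.asarray(levels_dba, dtype=float)
    return 10.0 * np.log10(np.mean(10.0 ** (levels / 10.0)))

# Hypothetical equal-duration readings along one roadside location, morning to evening.
readings = [72.6, 75.1, 78.4, 80.2, 79.5, 83.0, 85.7, 88.1, 100.0]
print(f"Overall Leq: {leq(readings):.1f} dB(A)")
```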

Keywords: NIHL, noise, sound level meter, tinnitus

Procedia PDF Downloads 177
473 Ammonia Bunkering Spill Scenarios: Modelling Plume’s Behaviour and Potential to Trigger Harmful Algal Blooms in the Singapore Straits

Authors: Bryan Low

Abstract:

In the coming decades, the global maritime industry will face a formidable environmental challenge: achieving net-zero carbon emissions by 2050. To meet this target, the Maritime and Port Authority of Singapore (MPA) has worked to establish green shipping and digital corridors with the ports of several other countries, where ships will use low-carbon alternative fuels such as ammonia for power generation. While this paradigm shift to the bunkering of greener fuels is encouraging, fuels like ammonia will also introduce a new and unique type of environmental risk in the unlikely scenario of a spill. While numerous modelling studies have been conducted on oil spills and their associated environmental impact on coastal and marine ecosystems, ammonia spills are comparatively less well understood. For example, there is a knowledge gap regarding how the complex hydrodynamic conditions of the Singapore Straits may influence the dispersion of a hypothetical ammonia plume, which has different physical and chemical properties compared to an oil slick. Chemically, ammonia can be absorbed by phytoplankton, thus altering the balance of the marine nitrogen cycle. Biologically, ammonia generally serves the role of a nutrient in coastal ecosystems at lower concentrations. However, at higher concentrations, it has been found to be toxic to many local species. It may also have the potential to trigger eutrophication and harmful algal blooms (HABs) in coastal waters, depending on local hydrodynamic conditions. Thus, the key objective of this research paper is to support the development of a model-based forecasting system that can predict ammonia plume behaviour in coastal waters, given prevailing hydrodynamic conditions, and its environmental impact. This will be essential as ammonia bunkering becomes more commonplace in Singapore’s ports and around the world. Specifically, this system must be able to assess the HAB-triggering potential of an ammonia plume, as well as its lethal and sub-lethal toxic effects on local species. This will allow the relevant authorities to better plan risk mitigation measures or to choose a time window with ideal hydrodynamic conditions for conducting ammonia bunkering operations with minimal risk. In this paper, we present the first part of such a forecasting system: a jointly coupled hydrodynamic-water quality model that captures how advection-diffusion processes driven by ocean currents influence plume behaviour and how the plume interacts with the marine nitrogen cycle. The model is then applied to various ammonia spill scenarios, and the results are discussed in the context of current ammonia toxicity guidelines, the impact on local ecosystems, and mitigation measures for future bunkering operations conducted in the Singapore Straits.
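The advection-diffusion transport that the coupled model captures can be illustrated with a minimal explicit finite-difference update for a depth-averaged concentration field. The grid size, current field and coefficients below are illustrative assumptions and bear no relation to the actual Singapore Straits configuration or the authors' model.

```python
import numpy as np

# Illustrative 2-D grid for a depth-averaged ammonia concentration field (mg/L).
nx, ny = 200, 100
dx = dy = 50.0            # grid spacing in metres (assumed)
dt = 5.0                  # time step in seconds (satisfies the explicit stability limits)
D = 1.0                   # horizontal diffusivity in m^2/s (assumed)
u, v = 0.4, 0.1           # uniform current components in m/s (assumed)

c = np.zeros((ny, nx))
c[45:55, 5:15] = 10.0     # initial spill patch near the release point

def step(c):
    """One explicit upwind-advection + central-diffusion update; boundary cells
    are simply left unchanged in this toy example."""
    cn = c.copy()
    # Upwind advection for positive u, v.
    cn[1:, 1:] -= dt * (u * (c[1:, 1:] - c[1:, :-1]) / dx
                        + v * (c[1:, 1:] - c[:-1, 1:]) / dy)
    # Central-difference diffusion on the interior.
    cn[1:-1, 1:-1] += dt * D * (
        (c[1:-1, 2:] - 2 * c[1:-1, 1:-1] + c[1:-1, :-2]) / dx**2
        + (c[2:, 1:-1] - 2 * c[1:-1, 1:-1] + c[:-2, 1:-1]) / dy**2)
    return cn

for _ in range(1000):     # advance the plume for roughly 83 minutes of model time
    c = step(c)
print(f"Peak concentration after transport: {c.max():.2f} mg/L")
```

A full water-quality coupling would additionally exchange the ammonia field with nitrogen-cycle state variables at each step, which this sketch omits.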

Keywords: ammonia bunkering, forecasting, harmful algal blooms, hydrodynamics, marine nitrogen cycle, oceanography, water quality modeling

Procedia PDF Downloads 54
472 Risks beyond Cyber in IoT Infrastructure and Services

Authors: Mattias Bergstrom

Abstract:

Significance of the Study: This research provides new insights into the risks associated with digitally embedded infrastructure. Through this research, we analyze each risk and its potential negation strategies, especially for AI and autonomous automation. Moreover, the analysis presented in this paper conveys valuable information for future research aimed at creating more stable, secure and efficient autonomous systems. To learn and understand the risks, a large IoT system was envisioned, and risks relating to hardware, tampering and cyberattacks were collected, researched and evaluated to create a comprehensive understanding of the potential risks. Potential solutions were then evaluated on an open-source IoT hardware setup. The list below shows the identified passive and active risks evaluated in the research. Passive risks: (1) Hardware failures: critical systems relying on high-rate, high-quality data are growing; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivering erroneous data: sensors break, and when they do so, they do not always go silent; they can keep going, except that the data they deliver is garbage, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad hardware injection: erroneously generated sensor data can be pumped into a system by malicious actors with the intent to create disruptive noise in critical systems. (4) Data gravity: the weight of the data collected will affect data mobility. (5) Cost inhibitors: running services that need huge centralized computing is cost-inhibiting; large, complex AI can be extremely expensive to run. Active risks: Denial of service: one of the simplest attacks, where an attacker just overloads the system with bogus requests so that valid requests disappear in the noise. Malware: anything from simple viruses to complex botnets created with specific goals, where the creator steals computing power and bandwidth from you to attack someone else. Ransomware: a kind of malware, but so different in its implementation that it is worth its own mention; the goal of these pieces of software is to encrypt your system so that it can only be unlocked with a key that is held for ransom. DNS spoofing: by spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion or corrupted and re-injected into a running system, creating a data echo-noise loop. After testing multiple potential solutions, we found that the most prominent solution to these risks was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and behavior of the devices (sensors, storage, and computing) in the system. By having the devices autonomously police themselves for deviant behavior, all the risks listed above can be negated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution for any future autonomous IoT deployments, as it provides separation from the open Internet while remaining accessible via blockchain keys.
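The core idea of validating device data over a blockchain-style ledger can be illustrated with a minimal hash chain: each record commits to the previous one, so tampering anywhere breaks verification for every later record. The sketch below is a toy illustration of that property only, not the middleware or peer-to-peer consensus algorithm described above.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a ledger record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append(chain: list, device_id: str, reading: float) -> None:
    """Append a sensor reading that commits to the hash of the previous record."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"device": device_id, "reading": reading, "prev": prev}
    chain.append({**body, "hash": record_hash(body)})

def verify(chain: list) -> bool:
    """Re-derive every hash and check the links; any tampering breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {k: rec[k] for k in ("device", "reading", "prev")}
        if rec["prev"] != prev or rec["hash"] != record_hash(body):
            return False
        prev = rec["hash"]
    return True

ledger: list = []
append(ledger, "sensor-17", 21.4)
append(ledger, "sensor-17", 21.6)
print(verify(ledger))          # True
ledger[0]["reading"] = 99.9    # simulate injected or erroneous data
print(verify(ledger))          # False
```

In a deployment, the consensus layer would additionally require multiple peers to agree on each appended record before it is accepted, which this single-node sketch does not show.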

Keywords: IoT, security, infrastructure, SCADA, blockchain, AI

Procedia PDF Downloads 87