Search results for: Applied linguistics
1049 Fake News and Conspiracy Narratives in the Covid-19 Crisis: An International Comparison
Authors: Caja Thimm
Abstract:
Already well before the Corona pandemic hit the world, 'fake news' was no longer regarded as a harmless twist of the truth but as intentionally composed disinformation, often with the goal of manipulative populist propaganda. During the Corona crisis, conspiracy narratives in particular became a worldwide phenomenon with dangerous consequences (anti-vaccination myths). The success of this manipulated news needs to be counteracted by trustworthy news, which in Europe particularly includes public broadcasting media and their social media channels. To better understand how the main public broadcasters in Germany, the UK, and France used Instagram strategically, a comparative study was carried out. The study, a comparative analysis of Instagram during the Corona crisis: in our empirical study, we compared the activities of selected formats during the Corona crisis in order to see how the public broadcasters reached their audiences and how this might, in the longer run, affect journalistic strategies on social media platforms. A first analysis showed that the increase in the use of social media overall was striking. Almost one in two adult online users (48%) obtained information about the virus on social media, and 38% of the younger age group (18-24) looked for Covid-19 information on Instagram, so the platform can be regarded as one of the central digital spaces for Corona-related information searches. Quantitative measures showed that 47% of recent posts by the broadcasters were related to Corona, and 7% treated conspiracy myths. For the more detailed content analysis, the following categories of analysis were applied: digital storytelling and instastories; textuality and semantic keys; links to information; stickers; video chat; fact checking; news ticker; service; infographics and animated tables. In addition to these basic features, we particularly looked for new formats created during the crisis. Journalistic use of social media platforms opens up immediate and creative ways of applying the media logics of the respective platforms, and particularly the BBC and ARD formats proved to be interactive, responsive, and entertaining. Among them were new formats such as a space for user questions and personal uploads, interviews, music, comedy, etc. The fact-checking channel in particular got a lot of attention, as many user questions focused on the conspiracy theories which dominated public discourse for many weeks in 2020. In the presentation, we will introduce eight particular strategies that show how public broadcasting journalism can adopt digital platforms, use them creatively, and hence help to counteract conspiracy narratives and fake news.
Keywords: fake news, social media, digital journalism, digital methods
Procedia PDF Downloads 156
1048 Flood Hazards and Emergency Response in Negara Brunei Darussalam
Authors: Hj Mohd Sidek bin Hj Mohd Yusof
Abstract:
More than 1.5 billion people around the world are adversely affected by floods. Floods account for about a third of all natural catastrophes, cause more than half of all fatalities, and are responsible for a third of the overall economic loss around the world. Giving advance warning of impending disasters can reduce or even avoid the number of deaths and the social and economic hardships that are so commonly reported after the event. Integrated catchment management recognizes that it is not practical or viable to provide structural measures that will keep floodwater away from the community and their property. Non-structural measures are therefore required to assist the community to cope when flooding occurs which exceeds the capacity of the structural measures. Non-structural measures may need to be used to influence the way land is used or buildings are constructed, or they may be used to improve the community's preparedness and response to flooding. The development and implementation of non-structural measures may be guided and encouraged by policy and legislation, or through voluntary action by the community based on knowledge gained from public education programs. There is a range of non-structural measures that can be used for flood hazard mitigation. Land use measures include policies and rules applied by government to regulate the kinds of activities that are carried out in various flood-prone areas, including minimum floor levels and the type of development approved. Voluntary actions can be taken by the authorities and by the community living and working on the flood plain to lessen flooding effects on themselves and their properties, including monitoring land use changes, monitoring and investigating the effects of bush/forest clearing in the catchment, and providing relevant flood-related information to the community. Response modification measures may include: a flood warning system, flood education, community awareness and readiness, evacuation arrangements, and a recovery plan. A Civil Defense Emergency Management body needs to be established for Brunei Darussalam in order to plan, co-ordinate and undertake flood emergency management. This responsibility may be taken by the Ministry of Home Affairs, Brunei Darussalam, which is already responsible for Fire Fighting and Rescue services. Several pieces of legislation and planning instruments are in place to assist flood management, particularly: flood warning systems, flood education, community awareness and readiness, evacuation arrangements, and recovery plans.
Keywords: RTB, Radio Television Brunei; DDMC, District Disaster Management Center; FIR, Flood Incidence Report; PWD, Public Works Department
Procedia PDF Downloads 256
1047 A Matched Case-Control Study to Assess the Association of Chikungunya Severity among Blood Groups and Other Determinants in Tesseney, Gash Barka Zone, Eritrea
Authors: Ghirmay Teklemicheal, Samsom Mehari, Sara Tesfay
Abstract:
Objectives: A total of 1074 suspected chikungunya cases were reported in Tesseney Province, Gash Barka region, Eritrea, during an outbreak. This study aimed to assess the possible association of chikungunya severity with ABO blood groups and other potential determinants. Methods: A sex- and age-matched case-control study was conducted during the outbreak. For each case, one control subject was selected from the mild chikungunya cases, and a second control subject, free of chikungunya, was selected from the neighborhood of the case. The study was conducted from October 15, 2018, to November 15, 2018. Odds ratios (ORs) were calculated and conditional (fixed-effect) logistic regression methods were applied to analyze the data. Results: In this outbreak, 137 severe suspected chikungunya cases, 137 mild suspected chikungunya patients, and 137 controls free of chikungunya from the neighborhood of cases were analyzed. Non-O individuals, compared to those with the O blood group, showed a significant difference (p-value of 0.002). A separate comparison between the A and O blood groups was also significant (p-value of 0.002). However, there was no significant difference in the severity of chikungunya between the B and O, and AB and O blood groups (p-values of 0.113 and 0.708, respectively). A strong association of chikungunya severity was found with hypertension and diabetes (p-value < 0.0001), whereas there was no association between chikungunya severity and asthma (p-value of 0.695), pregnancy (p-value = 0.881), ventilator use (p-value = 0.181), or air conditioner use (p-value = 0.247), and none with latrine practice: no latrine versus pit latrine (p-value = 0.318), septic versus pit latrine (p-value = 0.567), and flush versus pit latrine (p-value = 0.194). Conclusions: Non-O blood groups were found to be at greater risk of severe chikungunya disease than their O blood group counterparts. By the same token, individuals with chronic disease were more prone to severe forms of the disease than individuals without chronic disease. Prioritization is recommended for patients with chronic diseases and non-O blood groups, since they were found to be susceptible to severe chikungunya disease. Identification of the human cell surface receptor(s) for CHIKV is necessary for further understanding of its pathophysiology in humans. Therefore, molecular and functional studies will be helpful in disclosing the association of blood group antigens and CHIKV infections.
Keywords: Chikungunya, Chikungunya virus, disease outbreaks, case-control studies, Eritrea
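As a hedged illustration of the matched-pair analysis described above, the sketch below computes a McNemar-style odds ratio and confidence interval from discordant case-control pairs; the counts are hypothetical, not the Tesseney data, and the study's actual analysis used conditional (fixed-effect) logistic regression rather than this simpler approach.

```python
import numpy as np
from scipy import stats

# Hypothetical discordant-pair counts for a matched case-control analysis
# of blood group (non-O vs O) and severe chikungunya; illustrative only.
#   b = pairs where the case is non-O and the matched control is O
#   c = pairs where the case is O and the matched control is non-O
b, c = 58, 27

or_matched = b / c                          # matched-pair (McNemar) odds ratio
se_log_or = np.sqrt(1 / b + 1 / c)          # SE of log(OR) from discordant pairs
ci = np.exp(np.log(or_matched) + np.array([-1.96, 1.96]) * se_log_or)

# McNemar chi-square on the discordant pairs, with continuity correction
chi2_stat = (abs(b - c) - 1) ** 2 / (b + c)
p_value = stats.chi2.sf(chi2_stat, df=1)

print(f"OR = {or_matched:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, p = {p_value:.4f}")
```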
Procedia PDF Downloads 163
1046 Participatory Cartography for Disaster Reduction in Progreso, Yucatan, Mexico
Authors: Gustavo Cruz-Bello
Abstract:
Progreso is a coastal community in Yucatan, Mexico, highly exposed to floods produced by severe storms and tropical cyclones. A participatory cartography approach was used to help reduce flood disasters and assess social vulnerability within the community. The first step was to engage local authorities in risk management to facilitate the process. Two workshops were conducted. In the first, a poster-size printed high-spatial-resolution satellite image of the town was used to gather information from the participants: eight women and seven men, among them construction workers, students, government employees, and fishermen; their ages ranged between 23 and 58 years. For the first task, participants were asked to locate emblematic places and mark them on the image to familiarize themselves with it. Then, they were asked to locate areas that get flooded and the buildings that they use as refuges, and to list actions that they usually take to reduce vulnerability, as well as to collectively come up with others that might reduce disasters. The spatial information generated at the workshops was digitized and integrated into a GIS environment. A printed version of the map was reviewed by local risk management experts, who validated the feasibility of the proposed actions. For the second workshop, we brought the information back to the community for feedback. Additionally, a survey was applied to one household per block in the community to obtain socioeconomic, prevention, and adaptation data. The information generated from the workshops was contrasted, through t-tests and chi-squared tests, with the survey data in order to test the hypothesis that poorer or less educated people are less prepared to face floods (more vulnerable) and live near or among a higher presence of floods. Results showed that a great majority of people in the community are aware of the hazard and are prepared to face it. However, there was no consistent relationship between regularly flooded areas and people's average years of education, house services, or house modifications against heavy rains. We could say that the participatory cartography intervention made participants aware of their vulnerability and made them collectively reflect on actions that can reduce disasters produced by floods. They also considered that the final map could be used as a communication and negotiation instrument with NGOs and government authorities. It was not found that poorer and less educated people are located in areas with a higher presence of floods.
Keywords: climate change, floods, Mexico, participatory mapping, social vulnerability
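The hypothesis tests described above can be sketched as follows; the household values are simulated stand-ins, not the Progreso survey data, and the contingency table counts are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical survey data (one household per block); illustrative only.
years_edu_flooded = rng.normal(9.0, 3.0, 60)   # households in flooded areas
years_edu_dry = rng.normal(9.4, 3.0, 80)       # households outside them

# t-test: does mean education differ between flooded and non-flooded areas?
t_stat, p_t = stats.ttest_ind(years_edu_flooded, years_edu_dry, equal_var=False)

# Chi-squared test: is preparedness (house modified against rain / not)
# independent of living in a flooded area?
# Rows: flooded / not flooded; columns: modified / not modified.
table = np.array([[34, 26],
                  [43, 37]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

print(f"t = {t_stat:.2f} (p = {p_t:.3f}); chi2 = {chi2:.2f} (p = {p_chi:.3f})")
```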
Procedia PDF Downloads 113
1045 Geographical Information System and Multi-Criteria Based Approach to Locate Suitable Sites for Industries to Minimize Agriculture Land Use Changes in Bangladesh
Authors: Nazia Muhsin, Tofael Ahamed, Ryozo Noguchi, Tomohiro Takigawa
Abstract:
One of the most challenging issues in achieving sustainable development on food security is land use change. The crisis of land for agricultural production mainly arises from the unplanned transformation of agricultural lands to infrastructure development, i.e., urbanization and industrialization. Land use without sustainability assessment can have an impact on food security and environmental protection. Bangladesh, as a densely populated country with limited arable land, is now facing challenges to meet sustainable food security. Agricultural lands are being used for economic growth by establishing industries. The industries are spreading from urban areas to suburban areas and consuming agricultural lands. To minimize agricultural land losses to unplanned industrialization, compact economic zones should be identified through a scientific approach. Therefore, the purpose of the study was to find suitable sites for industrial growth by land suitability analysis (LSA) using a Geographical Information System (GIS) and multi-criteria analysis (MCA). The goal of the study was to emphasize both agricultural lands and industries for sustainable development in land use. The study also attempted to analyze the agricultural land use changes in a suburban area using statistical data on agricultural lands and primary data on the existing industries of the study place. The criteria selected for the LSA were proximity to major roads, proximity to local roads, and distance to rivers, waterbodies, settlements, flood-flow zones, and agricultural lands. The spatial datasets for the criteria were collected from the respective departments of Bangladesh. In addition, the elevation spatial dataset was obtained from the SRTM (Shuttle Radar Topography Mission) data source. The criteria were further analyzed with factors and constraints in ArcGIS®. Expert opinion was applied to weight the criteria according to the analytical hierarchy process (AHP), a multi-criteria technique. The decision rule was set by using the 'weighted overlay' tool to aggregate the factors and constraints with the weights of the criteria. The LSA found that only 5% of the land was most suitable for industrial sites, and few compact lands for industrial zones. The developed LSA is expected to help policy makers of land use and urban developers to ensure the sustainability of land uses and agricultural production.
Keywords: AHP (analytical hierarchy process), GIS (geographic information system), LSA (land suitability analysis), MCA (multi-criteria analysis)
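The AHP weighting step described above can be sketched as follows; the pairwise comparison matrix and the criterion order are hypothetical, not the judgments actually elicited from the experts in the study.

```python
import numpy as np

# Minimal AHP sketch: derive criterion weights from a pairwise comparison
# matrix via the principal eigenvector, then check consistency (CR < 0.1
# is conventionally acceptable). Criterion order is an assumption:
# major roads, local roads, rivers, settlements.
A = np.array([
    [1,   3,   5,   2],
    [1/3, 1,   3,   1/2],
    [1/5, 1/3, 1,   1/4],
    [1/2, 2,   4,   1],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()               # normalized criterion weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
ri = 0.90                              # Saaty's random index for n = 4
print("weights:", np.round(weights, 3), "CR:", round(ci / ri, 3))

# The weighted-overlay suitability score is then the weighted sum of the
# reclassified criterion rasters, S = sum_i w_i * x_i, masked by constraints.
```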
Procedia PDF Downloads 263
1044 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting
Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas
Abstract:
The paper presents results and industrial applications of production setup period estimation based on industrial data from the field of polymer cutting. The literature on polymer cutting is very limited in terms of the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the participating industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation, and painting industry branches. Typically, 20% of these parts are new work, which means that every five years almost the entire product portfolio is replaced in their low-series manufacturing environment. Consequently, it requires a flexible production system, where the estimation of the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters were studied and grouped to create an adequate training information set for an artificial neural network as a basis for the estimation of the individual setup periods. In the first group, product information is collected, such as the product name and number of items. The second group contains material data like material type and colour. In the third group, surface quality and tolerance information are collected, including the finest surface and tightest (or narrowest) tolerance. The fourth group contains the setup data, like machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer-Aided Design (CAD) drawings. The number of applied tools is one of the key factors on which the industrial partner's estimations were previously based. The artificial neural network model was trained on several thousand real industrial data records. The mean estimation accuracy of the setup period lengths was improved by 30%, and at the same time the deviation of the prognosis was also improved by 50%. Furthermore, an investigation of the mentioned parameter groups considering the manufacturing order was also carried out. The paper also highlights the manufacturing introduction experiences and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week more than 100 real industrial setup events occur, and the related data are collected.
Keywords: artificial neural network, low series manufacturing, polymer cutting, setup period estimation
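A minimal sketch of the estimation idea follows, assuming a synthetic stand-in for the four MES/CAD parameter groups; the real features, data, and network architecture are not published in the abstract, so every value below is illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)

# Synthetic stand-in for the parameter groups (product, material,
# surface/tolerance, setup data); columns and target are hypothetical.
n = 2000
X = np.column_stack([
    rng.integers(1, 500, n),        # number of items
    rng.integers(0, 8, n),          # material type (encoded)
    rng.uniform(0.1, 3.2, n),       # finest surface roughness
    rng.integers(0, 3, n),          # machine type (encoded)
    rng.integers(1, 12, n),         # number of applied tools
])
y = 10 + 4 * X[:, 4] + 2 * X[:, 2] + rng.normal(0, 3, n)  # setup length [min]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16),
                                   max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("MAE [min]:", round(mean_absolute_error(y_te, model.predict(X_te)), 2))
```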
Procedia PDF Downloads 245
1043 Effect of Laser Ablation OTR Films and High Concentration Carbon Dioxide for Maintaining the Freshness of Strawberry 'Maehyang' for Export in Modified Atmosphere Condition
Authors: Hyuk Sung Yoon, In-Lee Choi, Min Jae Jeong, Jun Pill Baek, Ho-Min Kang
Abstract:
This study was conducted to improve storability by using suitable laser ablation oxygen transmission rate (OTR) films and to evaluate the effectiveness of high carbon dioxide for strawberry 'Maehyang' for export. Strawberries were grown by a hydroponic system in Gyeongsangnam-do province. The strawberries were packed in different laser ablation OTR films (Daeryung Co., Ltd.) of 1,300 cc, 20,000 cc, 40,000 cc, 80,000 cc, and 100,000 cc•m-2•day•atm. The CO2 injection (30%) treatment used the 20,000 cc•m-2•day•atm OTR film, and a perforated film served as the control. Temperature conditions simulated shipping and distribution from Korea to Singapore: storage at 3 ℃ (13 days), 10 ℃ (an hour), and 8 ℃ (7 days), for 20 days in total. The fresh weight loss rate remained under 1%, the maximum permissible weight loss, in all treated OTR films except the perforated control film during storage. The carbon dioxide concentration within a package over the storage period stayed below the maximum tolerated CO2 concentration (15%) in the treated OTR films, and in the high-OTR film treatments from 20,000 cc to 100,000 cc it was even less than 3%. The 1,300 cc film maintained a suitable carbon dioxide range, over 5% and under 15%, from 5 days after storage until the end of the experiment; in the CO2 injection treatment the concentration quickly dropped to 15% after 1 day of storage but was kept around 15% during storage. The oxygen concentration was maintained between 10 and 15% in the 1,300 cc and CO2 injection treatments, but the other treatments stayed at 19 to 21%. The ethylene concentration was much higher in the CO2 injection treatment than in the OTR treatments. Among the OTR treatments, 1,300 cc showed the highest ethylene concentration and 20,000 cc the lowest. Firmness was maintained highest in 1,300 cc, but there were no significant differences among the other OTR treatments. Visual quality was best in 20,000 cc, which kept marketable quality until 20 days after storage. 20,000 cc and the perforated film performed better than the other treatments in off-odor, while the 1,300 cc and CO2 injection treatments developed a strong off-odor that persisted even after 10 minutes. As shown by the difference between the Hunter 'L' and 'a' values of a chroma meter, the 1,300 cc and CO2 injection treatments delayed color development, and the other treatments did not show any significant differences. The results indicate that freshness was best maintained at 20,000 cc•m-2•day•atm. Although the 1,300 cc and CO2 injection treatments created an appropriate MA condition, they showed darkening of the strawberry calyx and excessive reduction of coloring due to the high carbon dioxide concentration during storage. While the 1,300 cc and CO2 injection treatments had been considered appropriate for export to Singapore, the results showed otherwise. These results reflect the cultivar characteristics of strawberry 'Maehyang'.
Keywords: carbon dioxide, firmness, shelf-life, visual quality
Procedia PDF Downloads 399
1042 Radiomics: Approach to Enable Early Diagnosis of Non-Specific Breast Nodules in Contrast-Enhanced Magnetic Resonance Imaging
Authors: N. D'Amico, E. Grossi, B. Colombo, F. Rigiroli, M. Buscema, D. Fazzini, G. Cornalba, S. Papa
Abstract:
Purpose: To characterize, through a radiomic approach, the nature of nodules considered non-specific by expert radiologists, recognized in magnetic resonance mammography (MRm) with T1-weighted (T1w) sequences with paramagnetic contrast. Material and Methods: 47 cases out of 1200 undergoing MRm, in which the MRm assessment gave an uncertain classification (non-specific nodules), were admitted to the study. The clinical outcome of the non-specific nodules was later established through follow-up or further exams (biopsy), yielding 35 benign and 12 malignant. All MR images were acquired at 1.5 T: a first basal T1w sequence and then four T1w acquisitions after the paramagnetic contrast injection. After manual segmentation of the lesions by a radiologist and the extraction of 150 radiomic features (30 features per 5 subsequent time points), a machine learning (ML) approach was used. An evolutionary algorithm (the TWIST system, based on the KNN algorithm) was used to subdivide the dataset into training and validation sets and to select the features yielding the maximal amount of information. After this pre-processing, different machine learning systems were applied to develop a predictive model based on a training-testing crossover procedure. 10 cases with a benign nodule (follow-up older than 5 years) and 18 with an evident malignant tumor (clear malignant histological exam) were added to the dataset in order to allow the ML system to better learn from the data. Results: A Naive Bayes algorithm working on 79 features selected by the TWIST system proved to be the best-performing ML system, with a sensitivity of 96%, a specificity of 78%, and a global accuracy of 87% (average values of two training-testing procedures, ab-ba). The results showed that in the subset of 47 non-specific nodules, the algorithm predicted the outcome of 45 nodules which an expert radiologist could not identify. Conclusion: In this pilot study, we identified a radiomic approach allowing ML systems to perform well in the diagnosis of non-specific nodules at MR mammography. This algorithm could be a great support for the early diagnosis of malignant breast tumors when the radiologist is not able to identify the kind of lesion, and it reduces the need for long follow-up. Clinical Relevance: This machine learning algorithm could be essential to support the radiologist in the early diagnosis of non-specific nodules, in order to avoid strenuous follow-up and painful biopsy for the patient.
Keywords: breast, machine learning, MRI, radiomics
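The Naive Bayes classification step can be sketched as below; the feature matrix is synthetic (the study's 79 TWIST-selected radiomic features are not available), so the printed metrics are illustrative and will not match the reported 96%/78%/87%.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)

# Synthetic stand-in for the radiomic feature matrix extracted from the
# contrast-enhanced T1w series; labels: 1 = malignant, 0 = benign.
n_benign, n_malig, n_feat = 45, 30, 79
X = np.vstack([rng.normal(0.0, 1.0, (n_benign, n_feat)),
               rng.normal(0.8, 1.0, (n_malig, n_feat))])
y = np.r_[np.zeros(n_benign), np.ones(n_malig)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"sens {sensitivity:.2f}, spec {specificity:.2f}, acc {accuracy:.2f}")
```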
Procedia PDF Downloads 267
1041 Self-Assembling Layered Double Hydroxide Nanosheets on β-FeOOH Nanorods for Reducing Fire Hazards of Epoxy Resin
Abstract:
Epoxy resins (EP), among the most important thermosetting polymers, are widely applied in various fields due to their desirable properties, such as excellent electrical insulation, low shrinkage, outstanding mechanical stiffness, satisfactory adhesion, and solvent resistance. However, like most polymeric materials, EP has fatal drawbacks, including inherent flammability and a high yield of toxic smoke, which restrict its application in fields requiring fire safety. It therefore remains a challenge and an interesting subject to develop new flame retardants which not only remarkably improve flame retardancy, but also render the modified resins low in toxic gas generation. In recent work, polymer nanocomposites based on nanohybrids that contain two or more kinds of nanofillers have drawn intensive interest, as they can realize performance enhancements. The realization of previous hybrids of carbon nanotubes (CNTs) and molybdenum disulfide provides a novel route to decorating layered double hydroxide (LDH) nanosheets on the surface of β-FeOOH nanorods; the deposited LDH nanosheets can fill the network and promote the working efficiency of the β-FeOOH nanorods. Moreover, the synergistic effects between LDH and β-FeOOH can be anticipated to have potential applications in reducing the fire hazards of EP composites through a combination of condensed-phase and gas-phase mechanisms. As reported, β-FeOOH nanorods can act as a core for preparing hybrid nanostructures in combination with other nanoparticles through electrostatic attraction via a layer-by-layer assembly technique. In this work, LDH-nanosheet-wrapped β-FeOOH nanorod (LDH-β-FeOOH) hybrids were synthesized by a facile method, with the purpose of combining the characteristics of one dimension (1D) and two dimensions (2D) to improve the fire resistance of epoxy resin. The hybrids showed good dispersion in the EP matrix and no obvious aggregation. Thermogravimetric analysis and cone calorimeter tests confirmed that LDH-β-FeOOH hybrids in the EP matrix at a loading of 3% could obviously improve the fire safety of EP composites. The plausible flame retardancy mechanism was explored by thermogravimetric analysis coupled with infrared spectroscopy (TG-IR) and X-ray photoelectron spectroscopy. Two contributions were identified: condensed-phase and gas-phase. Nanofillers were transferred to the surface of the matrix during combustion, which could not only shield the EP matrix from external radiation and heat feedback from the fire zone, but also efficiently retard the transport of oxygen and flammable pyrolysis products.
Keywords: fire hazards, toxic gases, self-assembly, epoxy
Procedia PDF Downloads 173
1040 The Effects of Geographical and Functional Diversity of Collaborators on Quality of Knowledge Generated
Authors: Ajay Das, Sandip Basu
Abstract:
Introduction: There is increasing recognition that diverse streams of knowledge can often be recombined in novel ways to generate new knowledge. However, knowledge recombination theory has not been applied to examine the effects of collaborator diversity on the quality of knowledge such collaborators produce. This is surprising because one would expect that a collaborative team with certain aspects of diversity should be able to recombine process elements related to knowledge development, which are relatively tacit but also complementary because of the collaborators' varying backgrounds. Theory and Hypotheses: We propose to examine two aspects of diversity in the environments of collaborative teams to try to capture such potential recombinations of relatively tacit process knowledge. The first aspect of diversity in team members' environments is geographical. Collaborators with more geographical distance between them (perhaps working in different countries) often have more autonomy in the processes they adopt for knowledge development. In the absence of overt monitoring, such collaborators are likely to adopt differing approaches to knowledge development. The sharing of such varying approaches among collaborators is likely to result in greater quality of the common collaborative pursuit. The second aspect is diversity in the work backgrounds of team members. Such diversity can also increase the potential for knowledge recombination. For example, if one or more members are from a manufacturing center (versus all of them being from a purely R&D center), such members will provide unique perspectives on the implementation of innovative ideas. Again, knowledge that has been evaluated from these diverse perspectives is likely to be of higher quality. In addition to the above aspects of environmental diversity among team members, we also plan to examine the extent to which individual collaborators are in different environments from the primary innovation center of their employing firms. Proposed Methods: We will test our model on a sample of firms in the semiconductor industry. Our level of analysis will be individual patents generated by these firms and the teams involved in their generation. Information on the manufacturing activities of our sample firms will be obtained from SEMI, a proprietary database of the semiconductor industry, as well as company 10-K reports. Conclusion: We believe that our results will represent a preliminary attempt to understand how various forms of diversity in collaborative teams impact the knowledge development process. Our dependent variable of knowledge quality is important to study, since higher values of this variable can drive not only firm performance but also the broader development of regions and societies through spillover impacts on future innovation. The results of this study will, therefore, inform future research and practice in innovation, geographical location, and vertical integration.
Keywords: innovation, manufacturing strategy, knowledge, diversity
Procedia PDF Downloads 352
1039 Glutamine Supplementation and Resistance Training on Anthropometric Indices, Immunoglobulins, and Cortisol Levels
Authors: Alireza Barari, Saeed Shirali, Ahmad Abdi
Abstract:
Introduction: Exercise has contradictory effects on the immune system. Glutamine supplementation may increase the resistance of the immune system in athletes. Glutamine is one of the most recognized immune nutrients: it serves as a fuel source and a substrate in the synthesis of nucleotides and amino acids, and it is also known to be part of the antioxidant defense. Several studies have shown that improving glutamine levels in plasma and tissues can have beneficial effects on the function of immune cells such as lymphocytes and neutrophils. This study aimed to investigate the effects of resistance training, alone and combined with glutamine supplementation, on the levels of cortisol and immunoglobulins in untrained young men. Research shows that physical training can increase cytokines in the athlete's body; glutamine, in turn, may counteract the negative effects of resistance training on immune function and the stability of the mast cell membrane. Materials and Methods: This semi-experimental study was conducted on 30 male non-athletes. They were randomly divided into three groups: control (no exercise), resistance training, and resistance training with glutamine supplementation. Resistance training lasted 4 weeks, and glutamine supplementation of 0.3 g/kg/day was applied after practice. The resistance-training program consisted of eight exercises (leg press, lat pull, chest press, squat, seated row, abdominal crunch, shoulder press, biceps curl and triceps press down) four times per week. Participants performed 3 sets of 10 repetitions at 60–75% of 1-RM. Anthropometric indexes (weight, body mass index, and body fat percentage), maximal oxygen uptake (VO2max), cortisol, and immunoglobulin (IgA, IgG, IgM) levels were evaluated pre- and post-test. Results: Four weeks of resistance training with and without glutamine caused a significant increase in body weight and BMI and a significant decrease (P < 0.001) in BF. VO2max also increased in both the exercise group (P < 0.05) and the exercise-with-glutamine group (P < 0.001); likewise, a significant reduction in IgG (P < 0.05) was observed in both groups. However, no significant difference was observed in the levels of cortisol, IgA, or IgM in any of the groups. No significant change was observed in either parameter in the control group, and no significant differences were observed between the groups. Discussion: The alterations in the hormonal and immunological parameters can be used to assess the effect of overload on the body, whether acute or chronic. The plasma concentration of glutamine has been associated with the functionality of the immune system in individuals submitted to intense physical training. Resistance training has destructive effects on the immune system, and glutamine supplementation cannot neutralize the damaging effects of power exercise on the immune system.
Keywords: glutamine, resistance training, immunoglobulins, cortisol
Procedia PDF Downloads 479
1038 Functionalization of Carbon-Coated Iron Nanoparticles with Fluorescent Protein
Authors: A. G. Pershina, P. S. Postnikov, M. E. Trusova, D. O. Burlakova, A. E. Sazonov
Abstract:
Invention of magnetic-fluorescent nanocomposites is a rapidly developing area of research. The attractiveness of magnetic-fluorescent nanocomposites lies in the possibility of simultaneously manipulating and monitoring them by two independent methods based on different physical principles. These nanocomposites are applied to the solution of various essential scientific and experimental biomedical problems. The aim of this research is the development of a principal approach to the design of nanobiohybrid structures with magnetic and fluorescent properties. The surface of carbon-coated iron nanoparticles (Fe@C) was covalently modified with 4-carboxybenzenediazonium tosylate. Recombinant fluorescent protein TagGFP2 (Eurogen) was obtained in E. coli (Rosetta DE3) by standard laboratory techniques. Immobilization of TagGFP2 on the nanoparticle surface was achieved by carbodiimide activation. The amount of COOH-groups on the nanoparticle surface was estimated by elemental analysis (Elementar Vario Macro) and TGA analysis (SDT Q600, TA Instruments). The obtained nanocomposites were analyzed by FTIR spectroscopy (Nicolet Thermo 5700) and fluorescence microscopy (AxioImager M1, Carl Zeiss). The amount of protein immobilized on the modified nanoparticle surface was determined by fluorimetry (Cary Eclipse) and spectrophotometry (Unico 2800) with the help of previously obtained calibration plots. In the FTIR spectra of the modified nanoparticles, the absorption band of the –COOH group around 1700 cm-1 and bands in the region of 450-850 cm-1 caused by bending vibrations of the benzene ring were observed. The calculated quantity of active groups on the surface was 0.1 mmol/g of material. Carbodiimide activation of the COOH-groups on the nanoparticle surface led to covalent immobilization of the TagGFP2 fluorescent protein (0.2 nmol/mg). The success of the immobilization was proved by FTIR spectroscopy: protein characteristic absorption bands in the region of 1500-1600 cm-1 (amide I) were present in the FTIR spectrum of the nanocomposite. Fluorescence microscopy analysis showed that the Fe@C-TagGFP2 nanocomposite possesses fluorescent properties. This fact confirms that the TagGFP2 protein retained its conformation upon immobilization on the nanoparticle surface. A magnetic-fluorescent nanocomposite was thus obtained through a unique design solution: fluorescent protein molecules were fixed to the surface of superparamagnetic carbon-coated iron nanoparticles using original diazonium salts.
Keywords: carbon-coated iron nanoparticles, diazonium salts, fluorescent protein, immobilization
Procedia PDF Downloads 342
1037 Robust Electrical Segmentation for Zone Coherency Delimitation Based on Multiplex Graph Community Detection
Authors: Noureddine Henka, Sami Tazi, Mohamad Assaad
Abstract:
The electrical grid is a highly intricate system designed to transfer electricity from production areas to consumption areas. The Transmission System Operator (TSO) is responsible for ensuring the efficient distribution of electricity and maintaining the grid's safety and quality. However, due to the increasing integration of intermittent renewable energy sources, there is a growing level of uncertainty, which requires a faster, more responsive approach. A potential solution involves the use of electrical segmentation, which means creating coherence zones where electrical disturbances mainly remain within the zone. Indeed, by means of coherent electrical zones, it becomes possible to focus solely on a sub-zone, reducing the range of possibilities and aiding in managing uncertainty. This allows faster execution of operational processes and easier learning for supervised machine learning algorithms. Electrical segmentation can be applied to various applications, such as electrical control, minimizing electrical loss, and ensuring voltage stability. Since the electrical grid can be modeled as a graph, where the vertices represent electrical buses and the edges represent electrical lines, identifying coherent electrical zones can be seen as a clustering task on graphs, generally called community detection. Nevertheless, a critical criterion for the zones is their ability to remain resilient to the electrical evolution of the grid over time. This evolution is due to the constant changes in electricity generation and consumption, which are reflected in graph structure variations as well as line flow changes. One approach to creating a resilient segmentation is to design zones that remain robust under various circumstances. This issue can be represented through a multiplex graph, where each layer represents a specific situation that may arise on the grid. Consequently, resilient segmentation can be achieved by conducting community detection on this multiplex graph. The multiplex graph is composed of multiple graphs, and all the layers share the same set of vertices. Our proposal involves a model that utilizes a unified representation to compute a flattening of all layers. This unified situation can be penalized to obtain (K) connected components representing the robust electrical segmentation clusters. We compare our robust segmentation to the segmentation based on a single reference situation. The robust segmentation proves its relevance by producing clusters with high intra-zone electrical perturbation and low variance of electrical perturbation. The experiments show when robust electrical segmentation is beneficial and in which contexts.
Keywords: community detection, electrical segmentation, multiplex graph, power grid
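A minimal sketch of the flattening-then-clustering idea, not the paper's penalized model: layers sharing one set of buses are aggregated into a weighted graph, and a generic modularity-based community detection yields candidate zones. The topology, edge weights, and algorithm choice below are all assumptions.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Each layer is one hypothetical grid situation over the same buses.
layers = []
for seed in range(3):
    g = nx.random_regular_graph(3, 20, seed=seed)   # toy 3-regular topology
    nx.set_edge_attributes(g, 1.0, "weight")
    layers.append(g)

# Flatten: accumulate edge weights across layers, so lines present in many
# situations bind their end buses strongly in the unified representation.
flat = nx.Graph()
flat.add_nodes_from(layers[0].nodes)                # all layers share buses
for g in layers:
    for u, v, w in g.edges(data="weight"):
        prev = flat.get_edge_data(u, v, {"weight": 0.0})["weight"]
        flat.add_edge(u, v, weight=prev + w)

zones = greedy_modularity_communities(flat, weight="weight")
for i, zone in enumerate(zones):
    print(f"zone {i}: buses {sorted(zone)}")
```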
Procedia PDF Downloads 79
1036 3D GIS Participatory Mapping and Conflict LADM: Comparative Analysis of Land Policies and Survey Procedures Applied by the Igorots, NCIP, and DENR to Itogon Ancestral Domain Boundaries
Authors: Deniz A. Apostol, Denyl A. Apostol, Oliver T. Macapinlac, George S. Katigbak
Abstract:
Ang lupa ay buhay at ang buhay ay lupa (land is life and life is land). Based on the 2015 census, the Indigenous Peoples (IPs) population in the Philippines is estimated to be 11.3-20.2 million. They hail from various regions and possess distinct cultures, but they encounter shared struggles in territorial disputes. Itogon, the largest Benguet municipality, is home to the Ibaloi, Kankanaey, and other Igorot tribes. Despite having three (3) Ancestral Domains (ADs), Itogon is predominantly labeled as timberland or forest. These overlapping land classifications highlight the presence of inconsistencies in national laws and jurisdictions. This study aims to analyze the surveying procedures used by the Igorots, NCIP, and DENR in mapping the Itogon AD boundaries, show land boundary delineation conflicts, propose surveying guidelines, and recommend 3D participatory mapping as a geomatics solution for updated AD reference maps. Interpretative Phenomenological Analysis (IPA), Comparative Legal Analysis (CLA), and Map Overlay Analysis (MOA) were utilized to examine the interviews, compare land policies and surveying procedures, and identify differences and overlaps in conflicting land boundaries. In the IPA, the master themes identified were AD definition (rights, responsibilities, restrictions), AD overlaps (land classifications, political boundaries, ancestral domains, land laws/policies), and other conflicts (with other agencies, misinterpretations, suggestions), as considerations for mapping ADs. The CLA focused on conflicting surveying procedures: AD definitions, surveying equipment, surveying methods, map projections, order of accuracy, monuments, survey parties, pre-survey, survey proper, and post-survey procedures. The MOA emphasized the land area percentage of the conflicting areas, showcasing the impact of misaligned surveying procedures. The findings are summarized through a Land Administration Domain Model (LADM) conflict view, for AD versus AD and political boundaries. The products of this study are the identification of land conflict factors, survey guideline recommendations, and contested land area computations. These can serve as references for revising survey manuals, updating AD Sustainable Development and Protection Plans, and making amendments to laws.
Keywords: ancestral domain, GIS, indigenous people, land policies, participatory mapping, surveying, survey procedures
Procedia PDF Downloads 93
1035 Coupling of Microfluidic Droplet Systems with ESI-MS Detection for Reaction Optimization
Authors: Julia R. Beulig, Stefan Ohla, Detlev Belder
Abstract:
In contrast to off-line analytical methods, lab-on-a-chip technology delivers direct information about the observed reaction. Microfluidic devices therefore make an important scientific contribution, e.g. in the field of synthetic chemistry. Here, the rapid generation of analytical data can be applied to the optimization of chemical reactions. These microfluidic devices enable a fast change of reaction conditions as well as a resource-saving mode of operation. In the presented work, we focus on the investigation of multiphase regimes, more specifically on biphasic microfluidic droplet systems. Here, every single droplet is a reaction container with customized conditions. The biggest challenge is the rapid qualitative and quantitative readout of information, as most detection techniques for droplet systems are non-specific, time-consuming or too slow. An exception is electrospray mass spectrometry (ESI-MS). The combination of a reaction screening platform with a rapid and specific detection method is an important step in droplet-based microfluidics. In this work, we present a novel approach for synthesis optimization on the nanoliter scale with direct ESI-MS detection. We show the development of a droplet-based microfluidic device which enables the modification of different parameters while simultaneously monitoring the effect on the reaction within a single run. By common soft- and photolithographic techniques, a polydimethylsiloxane (PDMS) microfluidic chip with different functionalities was developed. As the interface for MS detection, we use a steel capillary for ESI and improve the spray stability with a Teflon siphon tubing inserted underneath the steel capillary. By optimizing the flow rates, it is possible to screen the parameters of various reactions; this is exemplarily shown for a domino Knoevenagel hetero-Diels-Alder reaction. Different starting materials, catalyst concentrations, and solvent compositions were investigated. Due to the high repetition rate of the droplet production, each set of reaction conditions is examined hundreds of times. As a result of the investigation, we obtain suitable reagents, the ideal water-methanol ratio of the solvent, and the most effective catalyst concentration. The developed system can help to determine important information about the optimal parameters of a reaction within a short time. With this novel tool, we take an important step in the field of combining droplet-based microfluidics with organic reaction screening.
Keywords: droplet, mass spectrometry, microfluidics, organic reaction, screening
Procedia PDF Downloads 301
1034 Formulation and In Vivo Evaluation of Salmeterol Xinafoate Loaded MDI for Asthma Using Response Surface Methodology
Authors: Paresh Patel, Priya Patel, Vaidehi Sorathiya, Navin Sheth
Abstract:
The aim of the present work was to fabricate a Salmeterol Xinafoate (SX) metered dose inhaler (MDI) for asthma and to evaluate SX-loaded solid lipid nanoparticles (SLNs) for pulmonary delivery. Solid lipid nanoparticles can be used to deliver particles to the lungs via an MDI. A modified solvent emulsification diffusion technique was used to prepare the Salmeterol Xinafoate loaded solid lipid nanoparticles, using Compritol 888 ATO as the lipid, Tween 80 as the surfactant, D-mannitol as the cryoprotecting agent, and L-leucine to improve aerosolization behaviour. A Box-Behnken design with 17 runs was applied. 3-D response surface plots and contour plots were drawn, and the optimized formulation was selected based on minimum particle size and maximum % EE. % yield, in vitro diffusion, scanning electron microscopy, X-ray diffraction, DSC, and FTIR were also characterized. Particle size and zeta potential were analyzed by a Zetatrac particle size analyzer, and the aerodynamic properties were determined by cascade impactor. Preconvulsion time was examined for the control group and the treatment group and compared with the marketed product group. The MDI was evaluated by leakage test, flammability test, spray test, and content per puff. By experimental design, particle size and % EE were found to be in the ranges of 119-337 nm and 62.04-76.77%, respectively, with the solvent emulsification diffusion technique. Morphologically, the particles had a spherical shape and uniform distribution. DSC and FTIR studies showed no interaction between drug and excipients. The zeta potential showed good stability of the SLNs. The respirable fraction was found to be 52.78%, indicating that the particles reach the deep parts of the lung such as the alveoli. The animal study showed that the fabricated MDI protected the lungs against histamine-induced bronchospasm in guinea pigs. The MDI showed sphericity of particles in the spray pattern, 96.34% content per puff, and was non-flammable. SLNs prepared by the solvent emulsification diffusion technique provide a desirable size for deposition into the alveoli. This delivery platform opens up a wide range of treatment applications for pulmonary diseases like asthma via solid lipid nanoparticles.
Keywords: salmeterol xinafoate, solid lipid nanoparticles, box-behnken design, solvent emulsification diffusion technique, pulmonary delivery
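The 17-run Box-Behnken design for three factors mentioned above (12 edge midpoints plus 5 center points) and a quadratic response-surface fit can be sketched as follows; the factors, simulated response, and resulting coefficients are hypothetical stand-ins for the actual formulation variables.

```python
import numpy as np
from itertools import combinations

def box_behnken(n_factors=3, n_center=5):
    """Coded Box-Behnken design: edge midpoints of the cube plus centers."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * n_factors
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0] * n_factors] * n_center
    return np.array(runs, dtype=float)

X = box_behnken()                          # 12 + 5 = 17 runs, matching the paper
rng = np.random.default_rng(7)
# Simulated response (e.g. particle size in nm) over coded factor levels
y = 210 - 25 * X[:, 0] + 12 * X[:, 1] ** 2 + rng.normal(0, 5, len(X))

# Full quadratic RSM model: intercept, linear, interaction, and square terms
terms = [np.ones(len(X))] + [X[:, k] for k in range(3)]
terms += [X[:, i] * X[:, j] for i, j in combinations(range(3), 2)]
terms += [X[:, k] ** 2 for k in range(3)]
A = np.column_stack(terms)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("quadratic RSM coefficients:", np.round(coef, 2))
```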
Procedia PDF Downloads 451
1033 Linking Soil Spectral Behavior and Moisture Content for Soil Moisture Content Retrieval at Field Scale
Authors: Yonwaba Atyosi, Moses Cho, Abel Ramoelo, Nobuhle Majozi, Cecilia Masemola, Yoliswa Mkhize
Abstract:
Spectroscopy has been widely used to understand the hyperspectral remote sensing of soils. Accurate and efficient measurement of soil moisture is essential for precision agriculture. The aim of this study was to understand the spectral behavior of soil at different soil water content levels and to identify the significant spectral bands for soil moisture content retrieval at field scale. The study consisted of 60 soil samples from a maize farm, divided into four different treatments representing different moisture levels. Spectral signatures were measured for each sample in the laboratory under artificial light using an Analytical Spectral Device (ASD) spectrometer, covering a wavelength range from 350 nm to 2500 nm with a spectral resolution of 1 nm. The results showed that the absorption features at 1450 nm, 1900 nm, and 2200 nm were particularly sensitive to soil moisture content and exhibited strong correlations with the water content levels. A continuum removal routine was developed in the R programming language to enhance the absorption features of soil moisture and to precisely understand its spectral behavior at different water content levels. Statistical analysis using partial least squares regression (PLSR) models was performed to quantify the correlation between the spectral bands and soil moisture content. This study provides insights into the spectral behavior of soil at different water content levels and identifies the significant spectral bands for soil moisture content retrieval. The findings highlight the potential of spectroscopy for non-destructive and rapid soil moisture measurement, which can be applied in various fields such as precision agriculture, hydrology, and environmental monitoring. However, it is important to note that the spectral behavior of soil can be influenced by various factors, such as soil type, texture, and organic matter content, and caution should be taken when applying the results to other soil systems. The results of this study showed good agreement between measured and predicted values of soil moisture content, with high R2 and low root mean square error (RMSE) values. Model validation using independent data was satisfactory for all the studied soil samples. The results have significant implications for developing high-resolution and precise field-scale soil moisture retrieval models. These models can be used to understand the spatial and temporal variation of soil moisture content in agricultural fields, which is essential for managing irrigation and optimizing crop yield.
Keywords: soil moisture content retrieval, precision agriculture, continuum removal, remote sensing, machine learning, spectroscopy
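A hedged sketch of the PLSR step follows; the study itself used R, while this sketch uses Python, and the spectra are simulated with artificial water-absorption dips near 1450 nm and 1900 nm, since the 60 ASD spectra are not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(3)

# Simulated spectra: 350-2500 nm at 1 nm resolution (2151 bands), with
# moisture-dependent dips at the water features; values are illustrative.
n_samples, n_bands = 60, 2151
moisture = rng.uniform(0.05, 0.35, n_samples)            # volumetric fraction
spectra = rng.normal(0.4, 0.05, (n_samples, n_bands))
spectra[:, 1100:1160] -= 0.5 * moisture[:, None]          # ~1450 nm feature
spectra[:, 1550:1610] -= 0.8 * moisture[:, None]          # ~1900 nm feature

X_tr, X_te, y_tr, y_te = train_test_split(spectra, moisture, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()

rmse = np.sqrt(mean_squared_error(y_te, y_hat))
print(f"R2 = {r2_score(y_te, y_hat):.2f}, RMSE = {rmse:.3f}")
```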
Procedia PDF Downloads 99
1032 Remote Criminal Proceedings as Implication to Rethink the Principles of Criminal Procedure
Authors: Inga Žukovaitė
Abstract:
This paper presents postdoctoral research on remote criminal proceedings in court. In a period when most countries have introduced the possibility of remote criminal proceedings into their procedural laws, it is possible not only to identify the weaknesses and strengths of the legal regulation but also to assess the effectiveness of the instrument used and to develop an approach to the process. The example of some countries (for example, Italy) shows, on the one hand, that criminal procedure based on orality and immediacy does not lend itself to easy modifications that pose even a slight threat of devaluing these principles in a society with well-established traditions of this procedure. On the other hand, such strong opposition and criticism make us ask whether we are facing the possibility of rethinking the traditional ways of understanding the safeguards in order to preserve their essence, without devaluing their traditional package, but looking for new components to replace or compensate for the so-called 'loss' of safeguards. Reflection on technological progress in the field of criminal procedural law indicates the need to rethink, on the basis of fundamental procedural principles, the safeguards that can replace or compensate for those that are in crisis as a result of the intervention of technological progress. Discussions in academic doctrine on the impact of technological interventions on the proceedings as such, or on the limits of such interventions, refer to the principles of criminal procedure as a point of reference. In the context of the inferiority of technology, scholarly debate still addresses the issue of whether the court will gradually become a mere site for the exercise of penal power, with the resultant consequence: the deformation of the procedure itself as a physical ritual. In this context, this work seeks to illustrate the relationship between remote criminal proceedings in court and the principle of immediacy, a concept grounded in the application of different models of criminal procedure (inquisitorial and adversarial); the aim is to assess the challenges posed to legal regulation by the interaction of technological progress with the principles of criminal procedure. The main hypothesis to be tested is that the adoption of remote proceedings is directly linked to the prevailing model of criminal procedure: the more the principles of the inquisitorial model are applied to the criminal process, the more acceptable a remote criminal trial is; conversely, the more the criminal process is based on an adversarial model, the more the remote criminal process is seen as incompatible with the principle of immediacy. In order to achieve this goal, the following tasks are set: to identify whether the adversarial and inquisitorial models differ in how remote proceedings are assessed against the immediacy principle, and to analyse the main aspects of the regulation of remote criminal proceedings based on the examples of different countries (for example, Lithuania, Italy, etc.).
Keywords: remote criminal proceedings, principle of orality, principle of immediacy, adversarial model, inquisitorial model
Procedia PDF Downloads 68
1031 Tracing Sources of Sediment in an Arid River, Southern Iran
Authors: Hesam Gholami
Abstract:
Elevated suspended sediment loads in riverine systems resulting from accelerated erosion due to human activities are a serious threat to the sustainable management of watersheds and the ecosystem services therein worldwide. Therefore, mitigation of deleterious sediment effects as a distributed or non-point pollution source in catchments requires reliable provenance information. Sediment tracing, or sediment fingerprinting, as a combined process consisting of sampling, laboratory measurements, different statistical tests, and the application of mixing or unmixing models, is a useful technique for discriminating the sources of sediments. From 1996 to the present, different aspects of this technique, such as grouping the sources (spatial and individual sources), discriminating the potential sources by different statistical techniques, and modification of mixing and unmixing models, have been introduced and modified by many researchers worldwide, and have been applied to identify the provenance of fine materials in agricultural, rural, mountainous, and coastal catchments, and in large catchments with numerous lakes and reservoirs. In the last two decades, efforts exploring the uncertainties associated with sediment fingerprinting results have attracted increasing attention. The frameworks used to quantify the uncertainty associated with fingerprinting estimates can be divided into three groups, comprising Monte Carlo simulation, Bayesian approaches, and generalized likelihood uncertainty estimation (GLUE). Given the above background, the primary goal of this study was to apply geochemical fingerprinting within the GLUE framework to the estimation of sub-basin spatial sediment source contributions in the arid Mehran River catchment in southern Iran, which drains into the Persian Gulf. The accuracy of the GLUE predictions, generated using four different sets of statistical tests for discriminating three sub-basin spatial sources, was evaluated against 10 virtual sediment (VS) samples with known source contributions using the root mean square error (RMSE) and mean absolute error (MAE). Based on the results, the contributions modeled by GLUE for the western, central, and eastern sub-basins are 1-42% (overall mean 20%), 0.5-30% (overall mean 12%), and 55-84% (overall mean 68%), respectively. According to the mean absolute fit (MAF; ≥ 95% for all target sediment samples) and goodness-of-fit (GOF; ≥ 99% for all samples), our suggested modeling approach is an accurate technique for quantifying the sources of sediments in catchments. Overall, the estimated source proportions can help watershed engineers plan the targeting of conservation programs for soil and water resources.
Keywords: sediment source tracing, generalized likelihood uncertainty estimation, virtual sediment mixtures, Iran
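A minimal GLUE-style sketch under stated assumptions: candidate source proportions are drawn on the simplex, predicted tracer mixtures are scored against a target sediment, and only "behavioural" draws above a goodness-of-fit threshold are retained. The tracer values, fit measure, and threshold are illustrative, not the Mehran River data.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical geochemical tracer signatures for three sub-basin sources
n_tracers, n_sources = 6, 3
source_means = rng.normal(50, 15, (n_sources, n_tracers))
true_p = np.array([0.2, 0.1, 0.7])
target = true_p @ source_means               # synthetic "target sediment"

n_draws = 100_000
props = rng.dirichlet(np.ones(n_sources), size=n_draws)   # candidate mixes
pred = props @ source_means
# Simple relative-error goodness of fit in [-inf, 1]; 1 = perfect fit
gof = 1 - np.abs(pred - target).sum(axis=1) / np.abs(target).sum()

behavioural = props[gof > 0.9]               # retain acceptable solutions
mean_contrib = behavioural.mean(axis=0)
lo, hi = np.percentile(behavioural, [5, 95], axis=0)
for s in range(n_sources):
    print(f"source {s}: mean {mean_contrib[s]:.2f} ({lo[s]:.2f}-{hi[s]:.2f})")
```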
Procedia PDF Downloads 74
1030 Frequency Response of Complex Systems with Localized Nonlinearities
Authors: E. Menga, S. Hernandez
Abstract:
Finite Element Models (FEMs) are widely used in order to study and predict the dynamic properties of structures, and usually the prediction can be obtained with much more accuracy in the case of a single component than in the case of assemblies. Especially for structural dynamics studies in the low and middle frequency range, most complex FEMs can be seen as assemblies made of linear components joined together at interfaces. From a modelling and computational point of view, these types of joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other hand, most FE programs are able to run nonlinear analyses in the time domain. They treat the whole structure as nonlinear even if there is only one nonlinear degree of freedom (DOF) out of thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology for obtaining the nonlinear frequency response of structures whose nonlinearities can be considered as localized sources is presented. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications and allows obtaining the Nonlinear Frequency Response Functions (NLFRFs) through an 'updating' process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and examining the implications of the nonlinear one. The response of the system is formulated in both the time and frequency domains. First, the modal database is extracted and the linear response is calculated. Secondly, the nonlinear response is obtained through the NL SDMM by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems. The first one is a two-DOF spring-mass-damper system, and the second example considers a full aircraft FE model. In spite of the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure which allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE model can be considered to act linearly and the nonlinear behavior is restricted to a few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analyses and easier implementation of optimization procedures for the calibration of nonlinear models.
Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber
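The linear starting point of the method, the LFRF of a two-DOF spring-mass-damper system, can be sketched as below; the original implementation was in MATLAB, this sketch uses Python/NumPy, the parameter values are hypothetical, and the nonlinear updating step of the SDMM is only indicated in a comment.

```python
import numpy as np

# Linear FRF baseline that an SDMM-style update would start from:
# receptance H(w) = (K + i*w*C - w^2*M)^(-1) for a 2-DOF system.
M = np.diag([1.0, 0.5])                       # masses [kg]
K = np.array([[3.0e4, -1.0e4],
              [-1.0e4, 1.0e4]])               # stiffness [N/m]
C = 1e-5 * K                                  # proportional damping

freqs = np.linspace(1.0, 60.0, 600)           # Hz
H11 = np.empty(len(freqs), dtype=complex)
for n, f in enumerate(freqs):
    w = 2 * np.pi * f
    H = np.linalg.inv(K + 1j * w * C - w**2 * M)
    H11[n] = H[0, 0]                          # driving-point receptance

# A localized nonlinearity (e.g. a cubic spring at DOF 2) would enter as a
# response-dependent stiffness modification dK(X), iterated per frequency
# line to 'update' the linear FRF into the NLFRF.
peak = freqs[np.argmax(np.abs(H11))]
print(f"first resonance near {peak:.1f} Hz")
```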
Procedia PDF Downloads 266
1029 Analysing Competitive Advantage of IoT and Data Analytics in Smart City Context
Authors: Petra Hofmann, Dana Koniel, Jussi Luukkanen, Walter Nieminen, Lea Hannola, Ilkka Donoghue
Abstract:
The Covid-19 pandemic forced people to isolate and become physically less connected. The pandemic has not only reshaped people's behaviours and needs but also accelerated digital transformation (DT). DT of cities has become an imperative, with the outlook of converting them into smart cities in the future. Embedding digital infrastructure and smart city initiatives into the normal design, construction, and operation of cities provides a unique opportunity to improve the connection between people. The Internet of Things (IoT) is an emerging technology and one of the drivers of DT. It has disrupted many industries by introducing new services and business models, and IoT solutions are being applied in multiple fields, including smart cities. As IoT and data are fundamentally linked, IoT solutions can only create value if the data generated by the IoT devices are analysed properly. By extracting relevant conclusions and actionable insights with established techniques, data analytics contributes significantly to the growth and success of IoT applications and investments. Companies must grasp DT and be prepared to redesign their offerings and business models to remain competitive in today's marketplace. As there are many IoT solutions available today, the amount of data is tremendous; the challenge for companies is to understand which solutions to focus on, how to prioritise, and which data differentiate them from the competition. This paper explains how IoT and data analytics can impact competitive advantage and how companies should approach them to translate them into concrete offerings and solutions in the smart city context. The study was carried out as qualitative, literature-based research. A case study is provided to validate the preservation of a company's competitive advantage through smart city solutions. The results provide insights into the different factors and considerations related to creating competitive advantage through IoT and data analytics deployment in the smart city context. Furthermore, this paper proposes a framework that merges the factors and considerations with examples of offerings and solutions in smart cities. The data collected through IoT devices, and the intelligent use of them, can create competitive advantage for companies operating in the smart city business. Companies should take into consideration the five forces of competition that shape industries and pay attention to the technological, organisational, and external contexts that define the factors for consideration of competitive advantage in the field of IoT and data analytics. Companies that can utilise these key assets in their businesses will most likely conquer the markets and gain a strong foothold in the smart city business.
Keywords: data analytics, smart cities, competitive advantage, internet of things
Procedia PDF Downloads 133
1028 Affects Associations Analysis in Emergency Situations
Authors: Joanna Grzybowska, Magdalena Igras, Mariusz Ziółko
Abstract:
Association rule learning is an approach for discovering interesting relationships in large databases. The analysis of relations invisible at first glance is a source of new knowledge which can subsequently be used for prediction. We used this data mining technique (an automatic and objective method) to learn about interesting affect associations in a corpus of emergency phone calls. We also made an attempt to match the revealed rules with their possible situational context. The corpus was collected and subjectively annotated by two researchers. Each of the 3306 recordings contains information on emotion: (1) type (sadness, weariness, anxiety, surprise, stress, anger, frustration, calm, relief, compassion, contentment, amusement, joy), (2) valence (negative, neutral, or positive), and (3) intensity (low, typical, alternating, high). Additional information providing clues to the speaker's emotional state was also annotated: speech rate (slow, normal, fast), characteristic vocabulary (filled pauses, repeated words), and conversation style (normal, chaotic). Exponentially many rules can be extracted from a set of items (an item is a single previously annotated piece of information). To generate the rules in the form of an implication X → Y (where X and Y are frequent k-itemsets), the Apriori algorithm was used; it avoids performing needless computations. Then, two basic measures (support and confidence) and several additional symmetric and asymmetric objective measures (e.g., Laplace, conviction, interest factor, cosine, correlation coefficient) were calculated for each rule. Each applied interestingness measure revealed different rules, and we selected some top rules for each measure. Owing to the specificity of the corpus (emergency situations), most of the strong rules contain only negative emotions; there are, though, strong rules including neutral or even positive emotions. Three examples of the strongest rules are: {sadness} → {anxiety}; {sadness, weariness, stress, frustration} → {anger}; {compassion} → {sadness}. Association rule learning revealed the strongest configurations of affects (as well as configurations of affects with affect-related information) in our emergency phone calls corpus. The acquired knowledge can be used for prediction, to fill in the emotional profile of a new caller. Furthermore, a rule-related analysis of possible context may be a clue to the situation a caller is in.
Keywords: data mining, emergency phone calls, emotional profiles, rules
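A minimal sketch of the Apriori step, assuming the mlxtend library and a toy one-hot table of call annotations (the rows, columns, and thresholds are illustrative, not drawn from the corpus):

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Illustrative one-hot transactions: each row is one annotated emergency call.
calls = pd.DataFrame(
    [
        {"sadness": 1, "anxiety": 1, "fast_speech": 1, "anger": 0, "compassion": 0},
        {"sadness": 1, "anxiety": 1, "fast_speech": 0, "anger": 1, "compassion": 0},
        {"sadness": 1, "anxiety": 0, "fast_speech": 0, "anger": 0, "compassion": 1},
        {"sadness": 0, "anxiety": 1, "fast_speech": 1, "anger": 1, "compassion": 0},
        {"sadness": 1, "anxiety": 1, "fast_speech": 1, "anger": 1, "compassion": 0},
    ]
).astype(bool)

# Frequent itemsets via Apriori, then rules X -> Y scored by several measures.
itemsets = apriori(calls, min_support=0.4, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```

The same rules table also carries measures such as leverage and conviction, which correspond to the additional interestingness measures compared in the study.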
Procedia PDF Downloads 408
1027 Application of Improved Semantic Communication Technology in Remote Sensing Data Transmission
Authors: Tingwei Shu, Dong Zhou, Chengjun Guo
Abstract:
Semantic communication is an emerging form of communication that realizes intelligent communication by extracting the semantic information of data at the source, transmitting it, and recovering the data at the receiving end. It can effectively solve the problem of data transmission under conditions of large data volume, low SNR, and restricted bandwidth. With the development of deep learning, semantic communication has matured further and is gradually being applied in fields such as the Internet of Things, Unmanned Aerial Vehicle cluster communication, and remote sensing. We propose an improved semantic communication system for situations where the data volume is huge and spectrum resources are limited during the transmission of remote sensing images. At the transmitting end, the semantic information of the remote sensing images must be extracted, but there are some problems: a traditional semantic communication system based on a Convolutional Neural Network (CNN) cannot take into account both the global and local semantic information of the image, which results in less-than-ideal image recovery at the receiving end. Therefore, we adopt an improved Vision-Transformer-based structure as the semantic encoder, instead of the mainstream CNN, to extract the image semantic features. In this paper, we first perform pre-processing operations on the remote sensing images to improve their resolution, in order to obtain images with more semantic information. We use the wavelet transform to decompose the image into high-frequency and low-frequency components, perform bilinear interpolation on the high-frequency components and bicubic interpolation on the low-frequency components, and finally apply the inverse wavelet transform to obtain the preprocessed image. We adopt the improved Vision-Transformer structure as the semantic encoder to extract and transmit the semantic information of the remote sensing images. The Vision-Transformer structure can better handle the huge data volume and extract better image semantic features, and it adopts a multi-layer self-attention mechanism to better capture the correlations between semantic features and reduce redundant features. Secondly, to improve the coding efficiency, we reduce the quadratic complexity of the self-attention mechanism to linear, so as to improve the image data processing speed of the model. We conducted experimental simulations on the RSOD dataset and compared the designed system with a CNN-based semantic communication system and image coding methods such as BPG and JPEG, verifying that the method can effectively alleviate the problem of excessive data volume and improve the performance of image data communication.
Keywords: semantic communication, transformer, wavelet transform, data processing
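The wavelet pre-processing stage can be sketched as follows, assuming the PyWavelets and SciPy packages; SciPy's order-3 spline zoom stands in for bicubic interpolation and order-1 for bilinear, and the Haar wavelet and 2x upsampling factor are illustrative choices rather than the paper's settings.

```python
import numpy as np
import pywt
from scipy.ndimage import zoom

def wavelet_upsample(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Upsample a grayscale image by interpolating its wavelet subbands.

    Low-frequency (LL) subband: bicubic-like interpolation (order=3).
    High-frequency (LH, HL, HH) subbands: bilinear interpolation (order=1).
    The inverse wavelet transform then yields the higher-resolution image.
    """
    ll, (lh, hl, hh) = pywt.dwt2(img.astype(float), "haar")
    ll = zoom(ll, factor, order=3)                           # low-frequency part
    lh, hl, hh = (zoom(c, factor, order=1) for c in (lh, hl, hh))
    return pywt.idwt2((ll, (lh, hl, hh)), "haar")

img = np.random.rand(128, 128)     # stand-in for one remote sensing band
up = wavelet_upsample(img)
print(img.shape, "->", up.shape)   # (128, 128) -> (256, 256)
```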
Procedia PDF Downloads 78
1026 In vitro Effects of Porcine Follicular Fluid Proteins on Cell Culture Growth in Luteal Phase Porcine Oviductal Epithelial Cells
Authors: Mayuva Youngsabanant, Chanikarn Srinark, Supanyika Sengsai, Soratorn Kerdkriangkrai, Nongnuch Gumlungpat, Mayuree Pumipaiboon
Abstract:
Follicular fluid proteins from healthy medium-size follicles (4-6 mm in diameter) and large-size follicles (7-8 mm in diameter) of Large White pig ovaries were collected using sterile technique. They were used to test the effect on primary in vitro cell culture growth of porcine oviductal epithelial cells (pOEC). Porcine oviductal epithelial cells of the luteal phase were cultured in M199 supplemented with 10% fetal calf serum, 2.2 mg/mL NaHCO₃, 0.25 mM pyruvate, and 15 µg/mL and 50 µg/mL gentamycin sulfate, in a highly humidified atmosphere of 5% CO₂ in 95% air at 37°C for 96 h before testing. The effect of pFF from the two follicle sizes (at concentrations of 2, 4, 20, 40, 200, 400, 500, and 600 µg protein) in the culture medium was observed for 24 h using the MTT assay. Results were analyzed with a one-way ANOVA in SPSS. pOEC morphology was also studied in long-term culture. The long-term study revealed that 70-80% of pOEC showed healthy epithelial-like morphology, while 30% had an elongated shape (fibroblast-like morphology) at 4 weeks of culture. The MTT assay revealed an increase in the percentage of pOEC viability in the two follicular-fluid-treated groups, which was higher than in the control group (p < 0.05) but not the positive control group. Interestingly, at 200 µg protein, both follicular fluid groups reached the highest cell viability, higher than the positive control and significantly different from the control group (p < 0.05). These cells developed an elongated, fibroblast-like shape, longer than the cells in the control and positive control groups. This report implies that pFF from medium follicles at 200 µg protein and from large follicles at 200 and 500 µg protein could be optimal concentrations for use as a culture medium supplement to promote cell growth and development, instead of growth hormone from fetal calf serum. It could be applied in cell biotechnology research. Acknowledgements: The project was funded by a grant from the Silpakorn University Research and Development Institute (SURDI) and the Faculty of Science, Silpakorn University, Thailand.
Keywords: in vitro, porcine follicular fluid protein (pFF), porcine oviductal epithelial cells (pOEC), MTT
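The group comparison reported here is a one-way ANOVA; a minimal SciPy sketch is shown below, where the viability readings are invented placeholders rather than the study's measurements.

```python
from scipy import stats

# Illustrative MTT viability readings (% of control) for three groups;
# the numbers are made up for demonstration, not the study's data.
control = [100, 98, 102, 101, 99]
medium_follicle_200ug = [118, 122, 115, 120, 119]
large_follicle_200ug = [121, 117, 123, 119, 122]

f_stat, p_value = stats.f_oneway(control, medium_follicle_200ug, large_follicle_200ug)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> group means differ
```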
Procedia PDF Downloads 145
1025 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses
Authors: Neil Bar, Andrew Heweston
Abstract:
Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation, and open pit mine designs. However, the design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a 'reasonable' PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s, until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to calculate PF automatically. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, the shear strength of geologic structure, and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation, and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated 'approximately' or with allowances for some variability rather than 'exactly'. When it comes to numerical modelling, some of these factors are treated deterministically (i.e., as exact values), while others have probabilistic inputs based on the user's discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF calculated with different methods can yield markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
Keywords: probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability
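As a minimal illustration of the Monte-Carlo route to PF, the sketch below samples cohesion and friction angle for an infinite-slope limit equilibrium model and counts the fraction of realizations with FS < 1. The model and every parameter value are assumptions for demonstration, not values from the case study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Infinite-slope limit equilibrium model with illustrative parameters.
beta = np.radians(30.0)        # slope angle
z, gamma = 10.0, 22.0          # failure depth (m), unit weight (kN/m^3)
u = 20.0                       # pore pressure (kPa), treated deterministically

# Probabilistic inputs: cohesion (kPa) and friction angle (degrees).
c = rng.normal(25.0, 5.0, n)
phi = np.radians(rng.normal(30.0, 3.0, n))

sigma_n = gamma * z * np.cos(beta) ** 2
fs = (c + (sigma_n - u) * np.tan(phi)) / (gamma * z * np.sin(beta) * np.cos(beta))

pf = np.mean(fs < 1.0)         # PF = fraction of realizations with FS < 1
print(f"mean FS = {fs.mean():.2f}, PF = {pf:.2%}")
```

The same sampling scheme is what distinguishes deterministic inputs (here z, gamma, u) from probabilistic ones (c, phi), which is exactly the modelling choice the abstract flags as being left to the user's discretion.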
Procedia PDF Downloads 208
1024 Unveiling Comorbidities in Irritable Bowel Syndrome: A UK BioBank Study utilizing Supervised Machine Learning
Authors: Uswah Ahmad Khan, Muhammad Moazam Fraz, Humayoon Shafique Satti, Qasim Aziz
Abstract:
Approximately 10-14% of the global population experiences a functional disorder known as irritable bowel syndrome (IBS). The disorder is defined by persistent abdominal pain and an irregular bowel pattern. IBS significantly impairs work productivity and disrupts patients' daily lives and activities. Although IBS is widespread, understanding of its underlying pathophysiology remains incomplete. This study aims to help characterize the phenotype of IBS patients by differentiating the comorbidities found in IBS patients from those in non-IBS patients using machine learning algorithms. We extracted samples coding for IBS from the UK BioBank cohort and randomly selected patients without a code for IBS, for a total sample size of 18,000. We selected the comorbidity codes for these cases from 2 years before and after their IBS diagnosis and compared them to the comorbidities in the non-IBS cohort. Machine learning models, including Decision Trees, Gradient Boosting, Support Vector Machine (SVM), AdaBoost, Logistic Regression, and XGBoost, were employed to assess their accuracy in predicting IBS. The most accurate model was then chosen to identify the features associated with IBS; we used XGBoost feature importance as the feature selection method and applied the different models to the top 10% of features, which numbered 50. Gradient Boosting, Logistic Regression, and XGBoost yielded a diagnosis of IBS with optimal accuracies of 71.08%, 71.43%, and 71.53%, respectively. The comorbidities most closely associated with IBS included gut diseases (haemorrhoids, diverticular diseases), atopic conditions (asthma), and psychiatric comorbidities (depressive episodes or disorder, anxiety). This finding emphasizes the need for a comprehensive approach when evaluating the phenotype of IBS, suggesting the possibility of identifying new subsets of IBS rather than relying solely on the conventional classification based on stool type. Additionally, our study demonstrates the potential of machine learning algorithms to predict the development of IBS based on comorbidities, which may enhance diagnosis and facilitate better management of modifiable risk factors for IBS. Further research is necessary to confirm our findings and establish cause and effect. Alternative feature selection methods and even larger and more diverse datasets may lead to more accurate classification models. Despite these limitations, our findings highlight the effectiveness of Logistic Regression and XGBoost in predicting IBS diagnosis.
Keywords: comorbidities, disease association, irritable bowel syndrome (IBS), predictive analytics
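The select-then-refit workflow can be sketched as follows, assuming the xgboost and scikit-learn packages. The random stand-in matrix merely mimics an 18,000-patient one-hot comorbidity table, so the printed accuracy is meaningless for the real task; it only demonstrates the mechanics of ranking features by XGBoost importance and refitting on the top 50.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

# Illustrative stand-in data: 18,000 patients x 500 one-hot comorbidity codes.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(18_000, 500)).astype(np.float32)
y = rng.integers(0, 2, size=18_000)          # 1 = IBS code present

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Fit once, rank comorbidity features by XGBoost importance, keep the top 10%.
model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_tr, y_tr)
top = np.argsort(model.feature_importances_)[::-1][:50]

# Refit on the selected features only and evaluate.
model50 = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model50.fit(X_tr[:, top], y_tr)
print(f"accuracy: {accuracy_score(y_te, model50.predict(X_te[:, top])):.4f}")
```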
Procedia PDF Downloads 118
1023 Computational and Experimental Determination of Acoustic Impedance of Internal Combustion Engine Exhaust
Authors: A. O. Glazkov, A. S. Krylova, G. G. Nadareishvili, A. S. Terenchenko, S. I. Yudin
Abstract:
The presented material concerns the design of the exhaust system for a particular internal combustion engine. The exhaust system can be divided into two parts. The first comprises the engine exhaust manifold, turbocharger, and catalytic converters, and is called the "hot part." The second is the gas exhaust system, which contains elements exclusively for reducing exhaust noise (mufflers, resonators), the accepted designation of which is the "cold part." Designing the exhaust system from the point of view of acoustics, that is, reducing the exhaust noise to a predetermined level, consists of working on the second part. Modern computer technology and software make it possible to design the "cold part" with high accuracy in a given frequency range, but only if the input parameters, namely the amplitude spectrum of the input noise and the acoustic impedance of the noise source (the engine with its "hot part"), are specified accurately. Obtaining these data is a difficult problem: high temperatures, high exhaust gas velocities (turbulent flows), and high sound pressure levels (nonlinear regime) prevent calculated results from being applied with sufficient accuracy. The aim of this work is to obtain the most reliable acoustic output parameters of an engine with a "hot part," based on a set of computational and experimental studies. The presented methodology includes several parts. The first part is a finite element simulation of the "cold part" of the exhaust system (taking into account the acoustic impedance of radiation from the outlet pipe into open space), yielding the input impedance of the "cold part." The second part is a finite element simulation of the "hot part" (taking into account the acoustic characteristics of the catalytic units and the geometry of the turbocharger), yielding the input impedance of the "hot part." The third part consists of mathematical processing of the results according to a proposed formula for the convergence of the series summing the multiple reflections of the acoustic signal between the "cold part" and the "hot part." This is followed by a set of tests on an engine stand, with two high-temperature pressure sensors measuring pulsations in the nozzle between the "hot part" and the "cold part" of the exhaust system, and subsequent processing of the test results according to a well-known technique in order to separate the "incident" and "reflected" waves. The final stage consists of mathematical processing of all the calculated and experimental data to obtain the spectrum of the amplitude of the engine noise and its acoustic impedance.
Keywords: acoustic impedance, engine exhaust system, FEM model, test stand
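Separating the incident and reflected waves from two pressure sensors amounts, in the plane-wave regime, to solving a 2x2 linear system at each frequency. The sketch below neglects mean flow and damping and uses assumed geometry and gas properties; it illustrates the principle, not the authors' processing chain.

```python
import numpy as np

# Two-sensor wave decomposition: recover incident (A) and reflected (B)
# complex wave amplitudes from pressure spectra at two axial positions.
c0 = 500.0                 # assumed speed of sound in hot exhaust gas (m/s)
x1, x2 = 0.00, 0.05        # assumed sensor positions along the duct (m)

def decompose(P1: complex, P2: complex, f: float):
    """Solve P_j = A*exp(-i k x_j) + B*exp(+i k x_j) for A, B at frequency f.
    Note: the system is singular when k*(x2 - x1) is a multiple of pi."""
    k = 2 * np.pi * f / c0
    M = np.array([[np.exp(-1j * k * x1), np.exp(1j * k * x1)],
                  [np.exp(-1j * k * x2), np.exp(1j * k * x2)]])
    A, B = np.linalg.solve(M, np.array([P1, P2]))
    return A, B

# Synthetic check: a known incident/reflected pair is recovered exactly.
f = 200.0
A_true, B_true = 1.0 + 0.0j, 0.4 * np.exp(1j * 0.8)
k = 2 * np.pi * f / c0
P1 = A_true * np.exp(-1j * k * x1) + B_true * np.exp(1j * k * x1)
P2 = A_true * np.exp(-1j * k * x2) + B_true * np.exp(1j * k * x2)
A, B = decompose(P1, P2, f)
print("reflection coefficient R =", B / A)   # an impedance follows from R
```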
Procedia PDF Downloads 59
1022 Investigation of Clusters of MRSA Cases in a Hospital in Western Kenya
Authors: Lillian Musila, Valerie Oundo, Daniel Erwin, Willie Sang
Abstract:
Staphylococcus aureus infections are a major cause of nosocomial infections in Kenya. Methicillin-resistant S. aureus (MRSA) infections are a significant public health burden and are associated with considerable morbidity and mortality. At a hospital in Western Kenya, two clusters of MRSA cases emerged within short periods of time. In this study, we explored whether these clusters represented a nosocomial outbreak by characterizing the isolates using phenotypic and molecular assays and by examining epidemiological data to identify possible transmission patterns. Specimens from the subjects' sites of infection were collected and cultured, and S. aureus isolates were identified phenotypically and confirmed by APIStaph™. MRSA were identified by cefoxitin disk screening per CLSI guidelines and further characterized by their antibiotic susceptibility patterns and spa gene typing. Characteristics of cases with MRSA isolates were compared with those of cases with MSSA isolated around the same time period. Two cases of MRSA infection were identified in the two-week period between 21 April and 4 May 2015. A further two MRSA isolates were identified on the same day, 7 September 2015. The antibiotic resistance patterns of the two MRSA isolates in the first cluster differed, suggesting that these were distinct isolates. One isolate had spa type t2029 and the other a novel spa type. The two isolates were obtained from urine and an open skin wound. In the second cluster, the antibiotic susceptibility patterns were similar, but the isolates had different spa types: one was t037 and the other a novel spa type, different from the novel MRSA spa type in the first cluster. Both cases in the second cluster were admitted to the hospital, but one infection was community-acquired and the other hospital-acquired. Only one of the four MRSA cases was classified as an HAI, from an infection acquired post-operatively. Compared with other S. aureus strains isolated within the same time period from the same hospital, only one spa type, t2029, was found in both MRSA and non-MRSA strains. None of the cases infected with MRSA in the two clusters shared any common epidemiological characteristic, such as age, sex, or known risk factors for MRSA such as prolonged hospitalization or institutionalization. These data suggest that the observed MRSA clusters were multi-strain clusters and not an outbreak of a single strain. There was no clear relationship between the isolates by spa type, suggesting that no transmission was occurring within the hospital between these cluster cases, but rather that the majority of the MRSA strains were circulating in the community. There was high diversity of spa types among the MRSA strains, with none of the isolates sharing spa types. Identification of disease clusters in space and time is critical for immediate infection control action and patient management. Spa gene typing is a rapid way of confirming or ruling out MRSA outbreaks, so that costly interventions are applied only when necessary.
Keywords: cluster, Kenya, MRSA, spa typing
Procedia PDF Downloads 330
1021 Optical Assessment of Marginal Sealing Performance around Restorations Using Swept-Source Optical Coherence Tomography
Authors: Rima Zakzouk, Yasushi Shimada, Yasunori Sumi, Junji Tagami
Abstract:
Background and purpose: Resin composite has become the main material for restoring caries in recent years due to its aesthetic characteristics, especially with the development of adhesive techniques. The quality of adhesion to tooth structures depends on an exchange process between inorganic tooth material and synthetic resin, and on micromechanical retention promoted by resin infiltration into partially demineralized dentin. Optical coherence tomography (OCT) is a noninvasive diagnostic method for obtaining high-resolution, micron-scale cross-sectional images of biological tissue. The aim of this study was to use SS-OCT to evaluate gap formation at the adhesive/tooth interface of a two-step self-etch adhesive applied with or without phosphoric acid pre-etching, in different regions of the teeth. Materials and methods: Round tapered cavities (2×2 mm) were prepared in the cervical part of bovine incisors and divided into two groups (n=10): in the SE group, the self-etch adhesive (Clearfil SE Bond) was applied directly; in the PA group, the cavities were treated with acid etching before the self-etch adhesive was applied. Subsequently, both groups were restored with Estelite Flow Quick flowable composite resin and observed under OCT. Following 5000 thermal cycles, the same section of each cavity was imaged again using OCT at 1310-nm wavelength. Scanning was repeated after two months to monitor gap progress. The gap length was then measured using image analysis software, and statistical analysis between the two groups was performed in SPSS. After that, the cavities were sectioned and observed under a confocal laser scanning microscope (CLSM) to confirm the OCT results. Results: Gaps formed at the bottom of the cavity were longer than those formed at the margin and the dento-enamel junction (DEJ) in both groups. Moreover, the pre-etching treatment damaged the DEJ regions, creating longer gaps. After two months, the gap length had increased significantly at the bottom regions in both groups. In conclusion, phosphoric acid etching did not reduce the gap length in most regions of the cavity. Significance: The bottom region of the tooth was more prone to gap formation than the margin and DEJ regions, and the DEJ was damaged by the phosphoric acid treatment.
Keywords: optical coherence tomography, self-etch adhesives, bottom, dento enamel junction
Procedia PDF Downloads 227
1020 Improving Photocatalytic Efficiency of TiO2 Films Incorporated with Natural Geopolymer for Sunlight-Driven Water Purification
Authors: Satam Alotibi, Haya A. Al-Sunaidi, Almaymunah M. AlRoibah, Zahraa H. Al-Omaran, Mohammed Alyami, Fatehia S. Alhakami, Abdellah Kaiba, Mazen Alshaaer, Talal F. Qahtan
Abstract:
This study presents a novel approach to harnessing the potential of natural geopolymer in conjunction with TiO₂ nanoparticles (TiO₂ NPs) to develop highly efficient photocatalytic materials for water decontamination. The study begins with the formulation of a geopolymer paste derived from natural sources, which is applied as a coating on glass substrates and allowed to air-dry at room temperature. The result is a series of geopolymer-coated glass films, which serve as the foundation for further experimentation. To enhance the photocatalytic capabilities of these films, a critical step involves immersing them in an aqueous suspension of TiO₂ NPs for varying durations. This immersion process yields geopolymer-loaded TiO₂ NP films with varying concentrations, setting the stage for comprehensive characterization and analysis. A range of advanced analytical techniques, including UV-Vis spectroscopy, Fourier-transform infrared spectroscopy (FTIR), Raman spectroscopy, scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), and atomic force microscopy (AFM), was employed to assess the structural, morphological, and chemical properties of the geopolymer-based TiO₂ films. These analyses provided invaluable insights into the materials' composition and surface characteristics. Finally, the geopolymer-based TiO₂ films were deployed as immobilized photocatalytic reactors for water decontamination under natural sunlight irradiation. Remarkably, the results revealed photocatalytic performance exceeding that of conventional TiO₂-based photocatalysts. This underscores the significant potential of natural geopolymer as a versatile and highly effective matrix for enhancing the photocatalytic efficiency of TiO₂ nanoparticles in water treatment applications. In summary, this study represents a significant advance in the quest for sustainable and efficient photocatalytic materials for environmental remediation. By harnessing the synergistic effects of natural geopolymer and TiO₂ nanoparticles, these geopolymer-based films show outstanding promise for addressing water decontamination challenges and contribute to the development of eco-friendly solutions for a cleaner and healthier environment.
Keywords: geopolymer, TiO2 nanoparticles, photocatalytic materials, water decontamination, sustainable remediation
Procedia PDF Downloads 67