Search results for: ICT tools
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3977


587 Implementing a Structured, yet Flexible Tool for Critical Information Handover

Authors: Racheli Magnezi, Inbal Gazit, Michal Rassin, Joseph Barr, Orna Tal

Abstract:

An effective process for transmitting patient critical information is essential for patient safety and for improving communication among healthcare staff. Previous studies have discussed handover tools such as SBAR (Situation, Background, Assessment, Recommendation) or SOFI (Short Observational Framework for Inspection). Yet, these formats lack flexibility, and require special training. In addition, nurses and physicians have different procedures for handing over information. The objectives of this study were to establish a universal, structured tool for handover, for both physicians and nurses, based on parameters that were defined as ‘important’ and ‘appropriate’ by the medical team, and to implement this tool in various hospital departments, with flexibility for each ward. A questionnaire, based on established procedures and on the literature, was developed to assess attitudes towards the most important information for effective handover between shifts (Cronbach's alpha 0.78). It was distributed to 150 senior physicians and nurses in 62 departments. Among senior medical staff, 12 physicians and 66 nurses responded to the questionnaire (52% response rate). Based on the responses, a handover form suitable for all hospital departments was designed and implemented. Important information for all staff included: Patient demographics (full name and age); Health information (diagnosis or patient complaint, changes in hemodynamic status, new medical treatment or equipment required); and Social Information (suspicion of violence, mental or behavioral changes, and guardianship). Additional information relevant to each unit included treatment provided, laboratory or imaging required, and change in scheduled surgery in surgical departments. ICU required information on background illnesses, Pediatrics required information on diet and food provided and Obstetrics required the number of days after cesarean section. 
Based on the model described, a flexible tool was developed that enables handover of both common and unique information. In addition, it includes general logistic information that must be transmitted to the next shift, such as planned disruptions in service or operations, staff training, etc. Development of a simple, clear, comprehensive, universal, yet flexible tool designed for all medical staff for transmitting critical information between shifts was challenging. Physicians and nurses found it useful and it was widely implemented. Ongoing research is needed to examine the efficiency of this tool, and whether the enthusiasm that accompanied its initial use is maintained.
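The internal consistency reported for the questionnaire (Cronbach's alpha 0.78) follows the standard formula over item scores. As a hedged illustration, it can be computed as below; the Likert-scale responses are invented, not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]                          # number of questionnaire items
    item_variances = items.var(axis=0, ddof=1)  # per-item sample variance
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented Likert-scale responses: 5 respondents x 4 items
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [2, 3, 2, 3],
    [4, 4, 3, 4],
])
print(round(cronbach_alpha(scores), 2))  # prints 0.9
```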

Keywords: handover, nurses, hospital, critical information

Procedia PDF Downloads 247
586 Disruptions to Medical Education during COVID-19: Perceptions and Recommendations from Students at the University of the West Indies, Jamaica

Authors: Charléa M. Smith, Raiden L. Schodowski, Arletty Pinel

Abstract:

Due to the COVID-19 pandemic, the Faculty of Medical Sciences of The University of the West Indies (UWI) Mona in Kingston, Jamaica, had to rapidly migrate to digital and blended learning. Students in the preclinical stage of the program transitioned to full-time online learning, while students in the clinical stage experienced decreased daily patient contact and the implementation of a blend of online lectures and virtual clinical practice. Such sudden changes were coupled with the institutional pressure of introducing a novel approach to education without much time for preparation, as well as additional strain endured by the faculty, who were overwhelmed by serving as frontline workers. During the period July 20 to August 23, 2021, this study surveyed preclinical and clinical students to capture their experiences with these changes and their recommendations for the future use of digital modalities of learning to enhance medical education. It was conducted jointly with a fellow student from the 2021 cohort of the MultiPod mentoring program. A questionnaire was developed and distributed digitally via WhatsApp to all medical students of the UWI Mona campus to assess students' experiences and perceptions of the advantages, challenges, and impact on individual knowledge proficiencies brought about by the transition to predominantly digital learning environments. 108 students replied, 53.7% preclinical and 46.3% clinical. 67.6% of the total were female and 30.6% were male; 1.8% did not identify themselves by gender. 67.2% of preclinical students preferred blended learning, and 60.3% considered that the content presented did not prepare them for clinical work. Only 31% considered that the online classes were interactive and encouraged student participation. 84.5% missed socialization with classmates and friends, and 79.3% missed a focused environment for learning.
80% of the clinical students felt that they had not learned all that they expected and only 34% had virtual interaction with patients, mostly by telephone and video calls. Observing direct consultations was considered the most useful, yet this was the least-used modality. 96% of the preclinical students and 100% of the clinical ones supplemented their learning with additional online tools. The main recommendations from the survey are the use of interactive teaching strategies, more discussion time with lecturers, and increased virtual interactions with patients. Universities are returning to face-to-face learning, yet it is unlikely that blended education will disappear. This study demonstrates that students’ perceptions of their experience during mobility restrictions must be taken into consideration in creating more effective, inclusive, and efficient blended learning opportunities.

Keywords: blended learning, digital learning, medical education, student perceptions

Procedia PDF Downloads 166
585 Reading Strategies of Generation X and Y: A Survey on Learners' Skills and Preferences

Authors: Kateriina Rannula, Elle Sõrmus, Siret Piirsalu

Abstract:

Mixed-generation classrooms are a phenomenon that current higher education establishments face daily while trying to meet the needs of a modern labor market with its emphasis on lifelong learning and retraining. Having representatives of mainly the X and Y generations acquiring higher education in one classroom is a challenge to lecturers, considering all the characteristics that differentiate one generation from another. The importance of outlining different strategies and considering the needs of the students lies in the necessity for everyone to acquire the maximum of the provided knowledge, as well as to understand each other in order to study together in one classroom and successfully cooperate in future workplaces. In addition to different generations, there are also learners with different native languages, which has an impact on reading and understanding texts in third languages, including possible translation. The current research aims to investigate, describe, and compare reading strategies among representatives of generations X and Y. The hypothesis was that representatives of generations X and Y use different reading strategies, and that strategies also differ between first- and third-year students of the aforementioned generations. The current study is empirical and qualitative. To achieve the aim of the research, relevant literature was analyzed and a semi-structured questionnaire was conducted among the first- and third-year students of Tallinn Health Care College. The questionnaire consisted of 25 statements on text reading strategies, 3 multiple-choice questions on preferences concerning the design and medium of the text, and three open questions on the translation process when working with a text in the student's third language. The results of the questionnaire were categorized, analyzed, and compared. Both generation X and generation Y respondents described their reading strategies as 'scanning' and 'surfing'.
Compared to generation X, first-year generation Y learners valued interactivity and nonlinear texts. Students frequently used the strategies of skimming, scanning, translating, and highlighting, together with relevant thinking and assistance-seeking. Meanwhile, third-year generation Y students no longer frequently used translating, resourcing, and highlighting, while generation X learners still incorporated these strategies. Knowing about the different needs of the generations currently in classrooms and on the labor market provides us with tools for sustainable education and grants society a workforce that is more flexible and able to move between professions. Future research should be conducted to investigate the amount of learning and strategy adoption between generations. As for reading, the main suggestions arising from the research are as follows: make a variety of materials available to students; allow them to select what they want to read; and try to make those materials visually attractive, relevant, and appropriately challenging for learners, considering the differences between generations.

Keywords: generation X, generation Y, learning strategies, reading strategies

Procedia PDF Downloads 180
584 Experimental Investigation on the Effect of Prestress on the Dynamic Mechanical Properties of Conglomerate Based on 3D-SHPB System

Authors: Wei Jun, Liao Hualin, Wang Huajian, Chen Jingkai, Liang Hongjun, Liu Chuanfu

Abstract:

Kuqa Piedmont in the Tarim Basin, China, is rich in oil and gas resources and has great development potential. However, a very thick gravel layer has developed there, with high gravel content, wide distribution, and large variation in gravel size, resulting in strong heterogeneity. As a result, the drill string vibrates severely and the drill bit wears quickly while drilling, which greatly reduces rock-breaking efficiency, and the rock at the bottom of the hole is subjected to a complex load state combining impact and three-dimensional in-situ stress. The dynamic mechanical properties of conglomerate, the main component of the gravel layer, and their influencing factors are the basis for engineering design, efficient rock-breaking methods, and theoretical research. Limited by previous experimental techniques, few works have been published on conglomerate, especially under dynamic load. On this basis, a 3D SHPB system, in which a three-dimensional prestress can be applied to simulate in-situ stress conditions, was adopted for dynamic testing of the conglomerate. The results show that the dynamic strength is obviously higher than the static strength: with zero three-dimensional prestress and loading strain rates of 81.25-228.42 s⁻¹, the true triaxial equivalent strength is 167.17-199.87 MPa, and the dynamic-to-static strength growth factor is 1.61-1.92. The higher the impact velocity, the greater the loading strain rate, the higher the dynamic strength, and the greater the failure strain, all of which increase linearly. There is a critical prestress in the impact direction and in the direction perpendicular to it. In the impact direction, while the prestress is less than the critical value, the dynamic strength and the loading strain rate increase linearly; beyond it, the strength decreases slightly and the strain rate decreases rapidly.
In the direction perpendicular to the impact load, the strength increases and the strain rate decreases linearly before the critical prestress; after it, the trends reverse. The dynamic strength of the conglomerate can be reduced appropriately by reducing the amplitude of the impact load, so that the service life of rock-breaking tools can be prolonged while drilling in gravel-rich strata. The research has important reference significance for speed-increasing technology and theoretical research while drilling in gravel layers.
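The dynamic-to-static strength growth factor quoted above is simply the ratio of dynamic to static strength. A minimal sketch of that arithmetic and of the reported linear strength-strain-rate trend follows; the static strength (about 104 MPa) is an assumption chosen to be consistent with the reported factors, and the middle data point is interpolated for illustration:

```python
# Endpoint values are from the abstract; the static strength and the middle
# point are assumptions for illustration only.
static_strength = 104.0                        # MPa (assumed, not in the abstract)
strain_rates = [81.25, 150.0, 228.42]          # loading strain rates, s^-1
dynamic_strengths = [167.17, 183.5, 199.87]    # true triaxial equivalent strength, MPa

# Dynamic-to-static strength growth factor at each strain rate
growth_factors = [s / static_strength for s in dynamic_strengths]
print([round(f, 2) for f in growth_factors])   # spans ~1.61 to ~1.92, as reported

# Least-squares slope of strength vs. strain rate (the reported linear increase)
n = len(strain_rates)
mean_r = sum(strain_rates) / n
mean_s = sum(dynamic_strengths) / n
slope = sum((r - mean_r) * (s - mean_s) for r, s in zip(strain_rates, dynamic_strengths)) / \
        sum((r - mean_r) ** 2 for r in strain_rates)
print(round(slope, 3), "MPa per unit strain rate")
```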

Keywords: huge thick gravel layer, conglomerate, 3D SHPB, dynamic strength, deformation characteristics, prestress

Procedia PDF Downloads 209
583 Ionic Liquids-Polymer Nanoparticle Systems as Breakthrough Tools to Improve the Leprosy Treatment

Authors: A. Julio, R. Caparica, S. Costa Lima, S. Reis, J. G. Costa, P. Fonte, T. Santos De Almeida

Abstract:

Mycobacterium leprae causes a chronic infectious disease called leprosy, whose most common symptoms are peripheral neuropathy and deformation of several parts of the body. The pharmacological treatment of leprosy is a combined therapy with three different drugs: rifampicin, clofazimine, and dapsone. However, clofazimine and dapsone have poor solubility in water and low bioavailability. Thus, it is crucial to develop strategies to overcome such drawbacks. The use of ionic liquids (ILs) may be one such strategy for the low solubility, since they have been used as solubility promoters. ILs are salts that are liquid below 100 ºC, or even at room temperature, and may be placed in water, oils, or hydroalcoholic solutions. Another approach may be the encapsulation of drugs into polymeric nanoparticles, which improves their bioavailability. In this study, two different classes of ILs, imidazole- and choline-based, were used as solubility enhancers for the poorly soluble antileprotic drugs. After the solubility studies, IL-PLGA hybrid nanoparticle systems were developed to deliver such drugs. First, the solubility of clofazimine and dapsone was studied in water and in water:IL mixtures, at IL concentrations where cell viability is maintained, at room temperature for 72 hours. For both drugs, an improvement in drug solubility was observed, and [Cho][Phe] proved to be the best solubility enhancer, especially for clofazimine, for which a 10-fold improvement was observed. Nanoparticles with a polymeric matrix of poly(lactic-co-glycolic acid) (PLGA) 75:25 were then produced by a modified solvent-evaporation W/O/W double emulsion technique in the presence of [Cho][Phe]; the inner phase was an aqueous solution of 0.2% (v/v) of this IL with each drug at its maximum solubility, as determined in the previous study. After production, the hybrid nanosystem was physicochemically characterized.
The produced nanoparticles had diameters of around 580 nm and 640 nm for clofazimine and dapsone, respectively. The polydispersity index was in agreement with the recommended value for drug delivery systems (around 0.3). The association efficiency (AE) of the developed hybrid nanosystems was promising for both drugs, given their low solubility (64.0 ± 4.0% for clofazimine and 58.6 ± 10.0% for dapsone), which suggests the capacity of these delivery systems to enhance the bioavailability and loading of clofazimine and dapsone. Overall, the achievements of this study may translate into an improvement in patients' quality of life, since they may allow a change in the therapeutic scheme, no longer requiring such high drug doses to obtain a therapeutic effect. The authors would like to thank Fundação para a Ciência e a Tecnologia, Portugal (FCT/MCTES (PIDDAC), UID/DTP/04567/2016-CBIOS/PRUID/BI2/2018).
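Association efficiency is typically obtained by the standard indirect method, where drug not recovered free in solution is counted as associated with the particles; the abstract does not state the formula, so the calculation and masses below are an assumed illustration:

```python
def association_efficiency(total_drug_mg: float, free_drug_mg: float) -> float:
    """Association efficiency (%) by the indirect method: drug not recovered
    free in the supernatant is assumed associated with the nanoparticles."""
    return (total_drug_mg - free_drug_mg) / total_drug_mg * 100

# Invented masses: 10 mg of clofazimine in the formulation, 3.6 mg found free
ae = association_efficiency(10.0, 3.6)
print(f"{ae:.1f} %")  # 64.0 %, matching the reported clofazimine AE
```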

Keywords: ionic liquids, ionic liquids-PLGA nanoparticles hybrid systems, leprosy treatment, solubility

Procedia PDF Downloads 150
582 Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth

Authors: Ella Tyuryumina, Alexey Neznanov

Abstract:

This study is an attempt to obtain reliable data on the natural history of breast cancer growth. We analyze the opportunities for using classical mathematical models (exponential and logistic tumor growth models, Gompertz and von Bertalanffy tumor growth models) to describe the growth of the primary tumor and of the secondary distant metastases of human breast cancer. The research aim is to improve the prediction accuracy of breast cancer progression using an original mathematical model referred to as CoMPaS and its corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and the secondary distant metastases; 2) developing an adequate and precise CoMPaS that reflects the relations between the primary tumor and the secondary distant metastases; 3) analyzing the scope of application of CoMPaS; 4) implementing the model as a software tool. The foundation of CoMPaS is the exponential tumor growth model, described by deterministic nonlinear and linear equations, and the model corresponds to the TNM classification. It allows different growth periods of the primary tumor and the secondary distant metastases to be calculated: 1) the 'non-visible period' of the primary tumor; 2) the 'non-visible period' of the secondary distant metastases; 3) the 'visible period' of the secondary distant metastases. CoMPaS is validated on clinical data of 10-year and 15-year survival depending on the tumor stage and the diameter of the primary tumor. The new predictive tool: 1) is a solid foundation for future studies of breast cancer growth models; 2) does not require any expensive diagnostic tests; 3) is the first predictor that makes a forecast using only current patient data, whereas the others rely on additional statistical data.
The CoMPaS model and predictive software: a) fit clinical trials data; b) detect different growth periods of the primary tumor and the secondary distant metastases; c) forecast the period in which the secondary distant metastases appear; d) have higher average prediction accuracy than the other tools; e) can improve forecasts of breast cancer survival and facilitate the optimization of diagnostic tests. CoMPaS calculates the number of doublings for the 'non-visible' and 'visible' growth periods of the secondary distant metastases, and the tumor volume doubling time (days) for each of those periods. CoMPaS enables, for the first time, prediction of the 'whole natural history' of the growth of the primary tumor and the secondary distant metastases at each stage (pT1, pT2, pT3, pT4), relying only on the primary tumor size. Summarizing: a) CoMPaS correctly describes primary tumor growth at stages IA, IIA, IIB, IIIB (T1-4N0M0) without metastases in lymph nodes (N0); b) it facilitates understanding of the period of appearance and inception of the secondary distant metastases.
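Since CoMPaS is built on the exponential tumor growth model, the doubling counts and doubling times it reports can be illustrated with a short sketch. The diameters and elapsed time below are invented for illustration and are not the paper's clinical values:

```python
import math

def doublings_between(d_start_mm: float, d_end_mm: float) -> float:
    """Volume doublings for a spherical tumor growing from d_start_mm to
    d_end_mm in diameter (volume scales with the cube of the diameter)."""
    return 3 * math.log2(d_end_mm / d_start_mm)

def doubling_time_days(d_start_mm: float, d_end_mm: float, elapsed_days: float) -> float:
    """Tumor volume doubling time implied by exponential growth."""
    return elapsed_days / doublings_between(d_start_mm, d_end_mm)

# Invented example: a tumor growing from 5 mm to 20 mm in diameter over 600 days
n = doublings_between(5, 20)         # 3 * log2(4) = 6 volume doublings
td = doubling_time_days(5, 20, 600)  # 600 / 6 = 100 days per doubling
print(n, td)
```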

Keywords: breast cancer, exponential growth model, mathematical model, metastases in lymph nodes, primary tumor, survival

Procedia PDF Downloads 341
581 Neuromyelitis Optica Area Postrema Syndrome (NMOSD-APS) in a Fifteen-Year-Old Girl: A Case Report

Authors: Merilin Ivanova Ivanova, Kalin Dimitrov Atanasov, Stefan Petrov Enchev

Abstract:

Background: Neuromyelitis optica spectrum disorder (NMOSD), also known as Devic's disease, is a relapsing demyelinating autoimmune inflammatory disorder of the central nervous system associated with anti-aquaporin 4 (AQP4) antibodies that can manifest with devastating secondary neurological deficits. The optic nerves and the spinal cord are most commonly affected; clinically, this often presents as optic neuritis (loss of vision), transverse myelitis (weakness or paralysis of the extremities), lack of bladder and bowel control, and numbness. Area postrema syndrome (APS) is a core clinical entity of NMOSD and adds the following symptoms to the clinical picture: intractable nausea, vomiting, and hiccups. It usually occurs in isolation at onset and can lead to a significant delay in diagnosis. The condition may have features similar to multiple sclerosis (MS), but the episodes are more severe in NMO and it is treated differently. It can be relapsing or monophasic. Possible complications are visual field defects and motor impairment, with potential blindness and irreversible motor deficits; in severe cases, myogenic respiratory failure ensues. The incidence of reported cases is approximately 0.3-4.4 per 100,000. Paediatric cases of NMOSD are rare but have been reported occasionally, comprising less than 5% of reported cases. Objective: The case illustrates the difficulty of the diagnostic process for a rare autoimmune disease with non-specific symptoms that takes a long interval of time to reveal the complete clinical manifestation of the aforementioned syndrome, as well as the necessity of a multidisciplinary approach in the setting of a general paediatric department in an emergency hospital. Methods: The patient's history, the clinical presentation, and the information from the diagnostic tools used (contrast-enhanced MRI of the central nervous system) led us to the diagnosis, which was later confirmed by a positive anti-aquaporin 4 (AQP4) antibody serology test.
Conclusion: APS is a common symptom of NMOSD and is considered a challenge in the differential diagnosis. Gaining increased awareness of this disease/syndrome, obtaining a detailed patient history, and performing thorough physical examinations are essential if we are to reduce and avoid misdiagnosis.

Keywords: neuromyelitis, devic's disease, hiccup, autoimmune, MRI

Procedia PDF Downloads 39
580 Navigating the Integration of AI in High School Assessment: Strategic Implementation and Ethical Practice

Authors: Loren Clarke, Katie Reed

Abstract:

The integration of artificial intelligence (AI) in high school education assessment offers transformative potential, providing more personalized, timely, and accurate evaluations of student performance. However, the successful adoption of AI-driven assessment systems requires robust change management strategies to navigate the complexities and resistance that often accompany such technological shifts. This presentation explores effective methods for implementing AI in high school assessment, emphasizing the need for strategic planning and stakeholder engagement. Focusing on a case study of a Victorian high school, it will examine the practical steps taken to integrate AI into teaching and learning. This school has developed innovative processes to support academic integrity and foster authentic cogeneration with AI, ensuring that the technology is used ethically and effectively. By creating comprehensive professional development programs for teachers and maintaining transparent communication with students and parents, the school has successfully aligned AI technologies with their existing curricula and assessment frameworks. The session will highlight how AI has enhanced both formative and summative assessments, providing real-time feedback that supports differentiated instruction and fosters a more personalized learning experience. Participants will learn about best practices for managing the integration of AI in high school settings while maintaining a focus on equity and student-centered learning. This presentation aims to equip high school educators with the insights and tools needed to effectively manage the integration of AI in assessment, ultimately improving educational outcomes and preparing students for future success. 
Methodologies: The research is a case study of a Victorian high school to examine AI integration in assessments, focusing on practical implementation steps, ethical practices, and change management strategies to enhance personalized learning and assessment. Outcomes: This research explores AI integration in high school assessments, focusing on personalized evaluations, ethical use, and change management. A Victorian school case study highlights best practices to enhance assessments and improve student outcomes. Main Contributions: This research contributes by outlining effective AI integration in assessments, showcasing a Victorian school's implementation, and providing best practices for ethical use, change management, and enhancing personalized learning outcomes.

Keywords: artificial intelligence, assessment, curriculum design, teaching and learning, ai in education

Procedia PDF Downloads 21
579 Prospects for the Development of e-Commerce in Georgia

Authors: Nino Damenia

Abstract:

E-commerce opens a new horizon for business development, which is why the presence of e-commerce is a necessary condition for the formation, growth, and development of a country's economy. Worldwide, e-commerce turnover grows at a high rate every year, as the electronic environment provides great opportunities for product promotion. E-commerce in Georgia is developing at a fast pace, but it is still a relatively young direction in the country's economy. Movement restrictions and other public health measures caused by the COVID-19 pandemic reduced economic activity in most economic sectors and countries, significantly affecting production, distribution, and consumption. The pandemic has accelerated digital transformation: digital solutions enable people and businesses to continue part of their economic and social activities remotely, and this has also led to the growth of e-commerce. According to the data of the National Statistics Service of Georgia, the share of online trade is higher in cities (27.4%) than in rural areas (9.1%). The COVID-19 pandemic has forced local businesses to expand their digital offerings. The size of the local market increased 3.2 times in 2020, to 138 million GEL, and in 2018-2020 the share of local e-commerce increased from 11% to 23%. In Georgia, the state is actively engaged in the promotion of activities based on information technologies; many measures have been taken for this purpose, but compared to other countries, this process is slow. The purpose of the study is to determine development prospects for the economy of Georgia based on an analysis of electronic commerce. Research was conducted on these issues using articles and works by Georgian and foreign scientists, reports of international organizations, collections of scientific conference papers, and scientific electronic databases.
The empirical base of the research comprises the data and annual reports of the National Statistics Service of Georgia, internet resources of world statistical materials, and others. In the course of the work, a questionnaire was developed, on the basis of which an electronic survey of selected types of respondents was conducted. The survey examined how intensively Georgian citizens use online shopping, including which age categories use electronic commerce, for what purposes, and how satisfied they are. Various theoretical and methodological research tools, as well as analysis, synthesis, comparison, and other methods, are used to achieve the set goal in the research process. The research results and recommendations will contribute to the development of e-commerce in Georgia and to economic growth based on it.

Keywords: e-commerce, information technology, pandemic, digital transformation

Procedia PDF Downloads 75
578 Comparison between Photogrammetric and Structure from Motion Techniques in Processing Unmanned Aerial Vehicles Imageries

Authors: Ahmed Elaksher

Abstract:

Over the last few years, significant progress has been made and new approaches have been proposed for the efficient collection of 3D spatial data from unmanned aerial vehicles (UAVs), with reduced costs compared to imagery from satellites or manned aircraft. In these systems, a low-cost GPS unit provides the position and velocity of the vehicle, a low-quality inertial measurement unit (IMU) determines its orientation, and off-the-shelf cameras capture the images. Structure from Motion (SfM) and photogrammetry are the main tools for 3D surface reconstruction from images collected by these systems. Unlike traditional techniques, SfM allows the computation of calibration parameters using point correspondences across images, without performing a rigorous laboratory or field calibration process, and it is more flexible in that it does not require consistent image overlap or identical rotation angles between successive photos. These benefits make SfM ideal for UAV aerial mapping. In this paper, a direct comparison between SfM digital elevation models (DEMs) and those generated through traditional photogrammetric techniques was performed. Data was collected by a 3DR IRIS+ quadcopter with a Canon PowerShot S100 digital camera. Twenty ground control points were randomly distributed on the ground and surveyed with a total station in a local coordinate system. Images were collected from an altitude of 30 meters with a ground resolution of nine mm/pixel. Data was processed with PhotoScan, VisualSFM, Imagine Photogrammetry, and a photogrammetric algorithm developed by the author. The algorithm starts by performing a laboratory camera calibration; the acquired imagery then undergoes an orientation procedure to determine the cameras' positions and orientations. After the orientation is attained, correlation-based image matching is conducted to automatically generate three-dimensional surface models, followed by a refining step using sub-pixel image information for high matching accuracy.
Tests with different numbers and configurations of the control points were conducted. Camera calibration parameters estimated by the commercial software and those obtained with laboratory procedures were comparable. Exposure station positions agreed to within a few centimeters, and only insignificant differences, of less than three seconds, were found among the orientation angles. DEM differencing was performed between the generated DEMs, and vertical shifts of only a few centimeters were found.
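The DEM-differencing step amounts to a per-cell subtraction of two co-registered elevation grids followed by summary statistics. A minimal sketch follows; the grids are synthetic stand-ins with an artificial ~3 cm offset, not the study's data:

```python
import numpy as np

# Two co-registered DEM grids (elevations in meters). Synthetic stand-ins for
# the photogrammetric and SfM surfaces, with an artificial ~3 cm vertical shift.
rng = np.random.default_rng(0)
dem_photogrammetric = rng.normal(120.0, 5.0, size=(50, 50))
dem_sfm = dem_photogrammetric + rng.normal(0.03, 0.01, size=(50, 50))

diff = dem_sfm - dem_photogrammetric            # per-cell vertical difference
mean_shift_cm = diff.mean() * 100
rmse_cm = np.sqrt((diff ** 2).mean()) * 100
print(f"mean vertical shift: {mean_shift_cm:.1f} cm, RMSE: {rmse_cm:.1f} cm")
```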

Keywords: UAV, photogrammetry, SfM, DEM

Procedia PDF Downloads 294
577 Experiencing an Unknown City: Environmental Features as Pedestrian Wayfinding Clues through the City of Swansea, UK

Authors: Hussah Alotaishan

Abstract:

In today's globally driven modern cities, diverse groups of new visitors face various challenges when attempting to find a desired location if culture and language are barriers. The most common way-showing tools, such as directional and identificational signs, are the most problematic, and their usefulness can be limited or even non-existent. It is argued that new methods should be implemented that could support or replace such conventional literacy- and language-dependent way-finding aids. Recent research studies have concluded that local urban features in complex pedestrian spaces are worthy of further study in order to reveal whether they function as way-showing clues. Some researchers propose a more comprehensive approach to the complex perception of buildings, façade design, and surface patterns, while others question whether we necessarily need directional signs or whether other methods can deliver the same message more clearly to a wider range of users. This study aimed to test to what extent existing environmental and urban features in the city center of Swansea, UK, facilitate the way-finding process of a first-time visitor. The three-hour experiment involved attempting to find 11 visitor attractions spanning recreational, historical, educational, and religious locations. The challenge was to find as many as possible with no prior geographical knowledge of their whereabouts; the only clues were 11 pictures representing the locations, acquired from the official City of Swansea website. An iPhone and a heart-rate tracking wristwatch were used to record the route taken and stress levels, and to take record photographs of destinations and decision-making points throughout the journey.
This paper addresses current limitations in understanding the ways the physical environment can be intentionally deployed to help pedestrians find their way around without, or with a reduction in, language-dependent signage, and it investigates visitor perceptions of their surroundings by indicating which urban elements had an impact on the way-finding process. The initial findings support the view that building façades and street features, such as width, can facilitate the decision-making process if strategically employed. More importantly, however, the anticipated features of a specific place construed from a promotional picture can also be misleading and create confusion that may lead to getting lost.

Keywords: pedestrian way-finding, environmental features, urban way-showing, environmental affordance

Procedia PDF Downloads 173
576 Designing Multi-Epitope Peptide Vaccines for Fasciolosis (Fasciola gigantica) Using the Immune Epitope Database and Analysis Resource (IEDB) Server

Authors: Supanan Chansap, Werachon Cheukamud, Pornanan Kueakhai, Narin Changklungmoa

Abstract:

Fasciola species (Fasciola spp.) cause fasciolosis in ruminants such as cattle, sheep, and buffalo. Fasciola gigantica (F. gigantica) is common in tropical regions, while Fasciola hepatica (F. hepatica) predominates in temperate regions. Liver fluke infection affects livestock economically through, for example, reduced milk and meat production, weight loss, and sterility. Currently, triclabendazole is used to treat liver flukes; however, drug-resistant liver flukes have been reported in several countries. Vaccination is therefore an attractive alternative for preventing liver fluke infection. Peptide vaccines are a new vaccine technology that mimics the epitope antigens that trigger an immune response. An interesting antigen for vaccine production is cathepsin L, a family of proteins that play an important role in the parasite's life in the host. This study aimed to identify the immunogenic regions of these proteins and to construct a multi-epitope vaccine using immunoinformatic tools. B-cell and helper T lymphocyte (HTL) epitopes of Fasciola gigantica cathepsin L1 (FgCatL1), cathepsin L1G (FgCatL1G), and cathepsin L1H (FgCatL1H) were predicted with the Immune Epitope Database and Analysis Resource (IEDB) server. Both B-cell and HTL epitopes were aligned with the cathepsin L sequences of the host and of Fasciola hepatica (F. hepatica). Epitope groups were selected from regions that are not conserved in the host and that overlap with F. hepatica sequences. The selected epitopes were joined with GPGPG and KK linkers: GPGPG between B-cell epitopes, and KK between HTL epitopes and between the B-cell and HTL blocks. The antigenicity score of the multi-epitope peptide vaccine was 0.7824, and the construct was predicted to be non-allergenic, non-toxic, and readily soluble. Its tertiary structure was predicted with I-TASSER and refined with the GalaxyRefine server; Ramachandran plot analysis showed the refined model to be of good quality.
Discontinuous and linear B-cell epitopes were predicted with the ElliPro server; the vaccine model contained two discontinuous and seven linear B-cell epitopes. Furthermore, the multi-epitope peptide vaccine was docked with Toll-like receptor 2 (TLR-2), with a lowest binding energy of -901.3 kJ/mol. In summary, the multi-epitope peptide vaccine is antigenic and likely to elicit an immune response, and could therefore be used to prevent F. gigantica infection in the future.
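The linker scheme described above (GPGPG between B-cell epitopes, KK between HTL epitopes and between the two blocks) can be sketched in a few lines. The epitope sequences below are placeholders, not the FgCatL epitopes predicted in the study:

```python
# Sketch of assembling a multi-epitope construct with GPGPG and KK linkers.
# Epitope sequences are hypothetical, NOT the actual FgCatL epitopes.

def build_construct(b_cell_epitopes, htl_epitopes):
    """Join B-cell epitopes with GPGPG, HTL epitopes with KK,
    and bridge the two blocks with a KK linker."""
    b_block = "GPGPG".join(b_cell_epitopes)   # GPGPG between B-cell epitopes
    htl_block = "KK".join(htl_epitopes)       # KK between HTL epitopes
    return b_block + "KK" + htl_block         # KK between the two blocks

b_cell = ["YTAVEGQ", "NDPFSLT"]   # hypothetical B-cell epitopes
htl = ["FLNDMETH", "QVRDSLAT"]    # hypothetical HTL epitopes
print(build_construct(b_cell, htl))
# YTAVEGQGPGPGNDPFSLTKKFLNDMETHKKQVRDSLAT
```

A construct assembled this way would then be passed to the antigenicity, allergenicity, and solubility predictors mentioned above.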

Keywords: Fasciola gigantica, immunoinformatic tools, multi-epitope, vaccine

Procedia PDF Downloads 78
575 Governance in the Age of Artificial Intelligence and E-Government

Authors: Mernoosh Abouzari, Shahrokh Sahraei

Abstract:

Electronic government is a way for governments to use new technology to give people proper access to government information and services, to improve the quality of those services, and to provide broad opportunities to participate in democratic processes and institutions. It makes government services available to citizens around the clock, which increases satisfaction and participation in political and economic activities. The expansion of e-government services and their movement towards intelligent systems can reshape the relationship between government and citizens and among the elements and components of government itself. Electronic government results from the use of information and communication technology (ICT); implementing it at the government level brings profound changes to the efficiency and effectiveness of government systems and to the way services are provided, which in turn raises public satisfaction on a wide scale. The main level of e-government services has today become tangible through artificial intelligence systems: recent advances in artificial intelligence represent a revolution in the use of machines to support predictive decision-making and data classification. With deep learning tools, artificial intelligence can mean a significant improvement in the delivery of services to citizens, can uplift the work of public service professionals, and can inspire a new generation of technocrats to enter government.
This smart revolution may set aside some functions of government and change its components, and concepts such as governance, policymaking, and democracy will be transformed by artificial intelligence technology; the top-down position in governance may face serious change. If governments delay in adopting artificial intelligence, the balance of power will shift: private companies, as pioneers in this field, will monopolize the technology, the world order will come to depend on rich multinational companies, and algorithmic systems will, in effect, become the ruling systems of the world. It can be said that the current revolution in information technology and biotechnology has been started by engineers, large economic companies, and scientists who are rarely aware of the political complexities of their decisions and who certainly do not represent anyone. It therefore seems that if liberalism, nationalism, or any other ideology wants to organize the world of 2050, it must not only make sense of artificial intelligence and complex data algorithms but also weave them into a new and meaningful narrative. The changes that artificial intelligence brings to the political and economic order will thus lead to a major shift in how all countries deal with digital globalization. In this paper, while examining the role and performance of e-government, we discuss the efficiency and application of artificial intelligence in e-government and consider the resulting developments in the new world and in the concepts of governance.

Keywords: electronic government, artificial intelligence, information and communication technology, system

Procedia PDF Downloads 94
574 PbLi Activation Due to Corrosion Products in WCLL BB (EU-DEMO) and Its Impact on Reactor Design and Recycling

Authors: Nicole Virgili, Marco Utili

Abstract:

The design of the breeding blanket in tokamak fusion energy systems has to guarantee sufficient availability in addition to its functions, namely tritium breeding self-sufficiency, power extraction, and shielding of the magnets and the vacuum vessel. All these functions must be fulfilled under extremely harsh operating conditions, in terms of heat flux and neutron dose as well as the chemical environment of the coolant and breeder, that challenge the structural and corrosion resistance of materials. The movement and activation of fluids flowing from the BB to ex-vessel components in a fusion power plant is an important radiological consideration, because the flowing material can carry radioactivity to safety-critical areas. This includes gamma-ray emission from the activated fluid and activated corrosion products, and secondary activation resulting from neutron emission, with implications for the safety of maintenance personnel and for damage to electrical and electronic equipment. In addition to the activation of the PbLi breeder itself, it is important to evaluate the contribution of activated corrosion products (ACPs) dissolved in the lead-lithium eutectic alloy at different concentration levels. The purpose of this study is therefore to evaluate the PbLi activity with the FISPACT-II inventory code, with emphasis on how the design of the EU-DEMO WCLL and the potential recycling of the breeder material are affected by the activation of PbLi and the associated ACPs. The following computational tools, data, and geometry were used:
• Neutron source: EU-DEMO neutron flux < 10¹⁴ n/cm²/s.
• Neutron flux distribution in the equatorial breeding blanket module (BBM) #13 in the WCLL BB outboard central zone, the most activated zone, computed with MCNP6 so as to introduce a conservative assumption.
• The recommended geometry model: the 2017 EU-DEMO CAD model.
• Blanket module material specifications (composition).
• Activation calculations for different ACP concentration levels in the PbLi breeder, with a given chemistry in stationary equilibrium conditions, using the FISPACT-II code.
The results suggest that a waiting time of about 10 years from shutdown (SD) is needed before the PbLi can be safely handled for recycling operations with simple shielding. The dose rate is dominated by the PbLi itself, and the ACP concentration (×1 or ×100) does not shift the result. In conclusion, the ACP level has no significant impact on PbLi activation.
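As a rough illustration of how a cooling time like the reported ~10 years arises (not a substitute for the FISPACT-II inventory calculation, which tracks many nuclides), a single assumed effective half-life gives the waiting time for a dose rate to decay below a handling limit. All numbers below are invented for illustration:

```python
import math

# Illustrative only: neither the dose-rate ratio nor the effective half-life
# below comes from the FISPACT-II results in the abstract.
def cooling_time(d0, d_limit, half_life_y):
    """Years of decay needed for dose rate d0 to fall to d_limit,
    assuming a single effective half-life (pure exponential decay)."""
    return half_life_y * math.log(d0 / d_limit) / math.log(2)

# e.g. dose rate must drop by a factor of 1000, effective half-life ~1 year
t = cooling_time(d0=1e3, d_limit=1.0, half_life_y=1.0)
print(round(t, 1))  # ≈ 10.0 years
```

In the real calculation the decay curve is a sum of exponentials over the activated inventory, which is why an inventory code is required.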

Keywords: activation, corrosion products, recycling, WCLL BB, PbLi

Procedia PDF Downloads 131
573 Accomplishing Mathematical Tasks in Bilingual Primary Classrooms

Authors: Gabriela Steffen

Abstract:

Learning in a bilingual classroom not only implies learning in two languages or in an L2; it also means learning content subjects through bilingual or plurilingual resources, which is qualitatively different from ‘monolingual’ learning. These resources form elements of a didactics of plurilingualism, which aims not only at the development of plurilingual competence but also at drawing on plurilingual resources for non-linguistic subject learning. Applying a didactics of plurilingualism makes it possible to account for the specificities of bilingual content subject learning in bilingual education classrooms. Bilingual education is used here as an umbrella term for different programs, such as immersion, CLIL, and bilingual modules, in which one or several non-linguistic subjects are taught partly or completely in an L2. This paper discusses first results of a study on pupil group work in bilingual classrooms in several Swiss primary schools. For instance, it analyses two bilingual classes in two primary schools in a French-speaking region of Switzerland that follow part of their school program in German in addition to French, the language of instruction in this region. More precisely, it analyses videotaped classroom interaction and in situ classroom practices of pupil group work in mathematics lessons. The ethnographic observation of pupils’ group work and the analysis of their interaction (using the analytical tools of conversation analysis, discourse analysis, and plurilingual interaction) complement the description of whole-class interaction carried out in the same classes (and several others). While the latter are teacher-student interactions, the former are student-student interactions, giving more space to, and insight into, pupils’ talk. This study describes the linguistic and multimodal resources (in German L2 and/or French L1) that pupils mobilize while carrying out a mathematical task.
The analysis shows that the mathematical task is accomplished in a bilingual mode, whether the corresponding whole-class interactions are conducted bilingually (German L2-French L1) or monolingually in the L2 (German). The pupils make extensive use of German L2 in a setting that lends itself to the use of French L1 (peer groups with French as the dominant language, in the absence of the teacher, and a task with a mathematical aim). They switch from French to German and back ‘naturally’, as is typical of bilingual speakers. Their linguistic resources in German L2 are not sufficient for them to interact well enough to accomplish the task entirely in German L2, despite their efforts to do so. However, this does not stop them from carrying out the mathematical task adequately, which is the main objective, by drawing on the bilingual resources at hand.

Keywords: bilingual content subject learning, bilingual primary education, bilingual pupil group work, bilingual teaching/learning resources, didactics of plurilingualism

Procedia PDF Downloads 162
572 Role of Alternative Dispute Resolution (ADR) in Advancing UN-SDG 16 and Pathways to Justice in Kenya: Opportunities and Challenges

Authors: Thomas Njuguna Kibutu

Abstract:

The ability to access justice is an important facet of securing peaceful, just, and inclusive societies, as recognized by Goal 16 of the 2030 Agenda for Sustainable Development. Goal 16 calls for peace, justice, and strong institutions to promote the rule of law and access to justice at a global level. More specifically, Target 16.3 aims to promote the rule of law at the national and international levels and to ensure equal access to justice for all. At the same time, it is now widely recognized that Alternative Dispute Resolution (hereafter, ADR) is an efficient mechanism for resolving disputes outside the adversarial conventional court system of litigation or prosecution. ADR processes include, but are not limited to, negotiation, reconciliation, mediation, arbitration, and traditional conflict resolution. ADR has a number of advantages: it is flexible, cost-efficient, time-effective, and confidential, and it gives the parties more control over the process and the results, thus promoting restorative justice. The methodology of this paper is a desktop review of books, journal articles, reports, and government documents, among other sources. The paper recognizes that ADR is a cornerstone of Africa’s, and more specifically Kenya’s, efforts to promote inclusive, accountable, and effective institutions and to achieve the objectives of Goal 16. In Kenya, as in many African countries, there has been an outcry over the backlog of unresolved court cases, and the statistics show that the numbers keep rising. While ADR mechanisms have played a major role in reducing these numbers, access to justice in the country remains a major challenge, especially for the subaltern.
There is, therefore, a need to analyze the opportunities and challenges facing the application of ADR mechanisms as tools for accessing justice in Kenya, and to discuss the various ways in which these challenges can be overcome to make ADR an effective alternative for dispute resolution. The paper argues that by embracing ADR across various sectors and addressing existing shortcomings, Kenya can, over time, realize its vision of a more just and equitable society. It discusses the opportunities and challenges of the application of ADR in Kenya with a view to sharing the lessons with the wider African continent. The paper concludes that ADR mechanisms can provide critical pathways to justice in Kenya, and in Africa in general, but come with distinct challenges; it thus calls for the concerted efforts of the respective stakeholders to overcome them.

Keywords: mediation, arbitration, negotiation, reconciliation, traditional conflict resolution, sustainable development

Procedia PDF Downloads 29
571 Environmental Monitoring by Using Unmanned Aerial Vehicle (UAV) Images and Spatial Data: A Case Study of Mineral Exploitation in Brazilian Federal District, Brazil

Authors: Maria De Albuquerque Bercot, Caio Gustavo Mesquita Angelo, Daniela Maria Moreira Siqueira, Augusto Assucena De Vasconcellos, Rodrigo Studart Correa

Abstract:

Mining is an important socioeconomic activity in Brazil, although it negatively impacts the environment. Mineral operations cause irreversible changes in topography, removal of vegetation and topsoil, habitat destruction, displacement of fauna, loss of biodiversity, soil erosion, and siltation of watercourses, and they have the potential to exacerbate climate change. Owing to these impacts and its pollution potential, mining activity in Brazil is legally subject to environmental licensing; unlicensed mining operations, or operations that do not abide by the terms of a granted license, are treated as environmental crimes. This work reports a case analyzed at the Forensic Institute of the Brazilian Federal District Civil Police. The case consisted of detecting illegal aspects of sand exploitation at a licensed mine in the Federal District, near Brasilia. The fieldwork covered an area of roughly 6 ha, which was surveyed with an unmanned aerial vehicle (UAV) (PHANTOM 3 ADVANCED). The UAV overflight took about 20 minutes, at a maximum flight height of 100 m. A total of 592 georeferenced UAV images were obtained and processed in photogrammetric software (AGISOFT PHOTOSCAN 1.1.4), which generated a mosaic of georeferenced images and a 3D model in less than six working hours. The 3D model was then analyzed in forensic software (MAPTEK I-SITE FORENSIC 2.2) for accurate modeling and volumetric analysis. To ensure the 3D model was a true representation of the mine site, the coordinates of ten control points and reference measures were taken during fieldwork and compared to the respective spatial data in the model. Finally, these spatial data were used to measure the mining area, excavation depth, and volume of exploited sand. The results showed that the mine holder had not complied with some terms and conditions of the granted license, with sand exploitation beyond the authorized extent, depth, and volume.
The ease, accuracy, and speed of the procedures used in this case highlight UAV imagery and computational photogrammetry as efficient tools for outdoor forensic exams, especially on environmental issues.
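The volumetric step can be illustrated by differencing a pre-excavation surface against the surveyed 3D model and integrating over the cell area. The grids, cell size, and licensed volume below are invented for illustration and are not data from the forensic exam:

```python
# Sketch of a volumetric check: subtract a post-excavation elevation grid
# from the pre-excavation surface and integrate over the cell area.
# All values are made-up illustrative numbers.

def excavated_volume(before, after, cell_area_m2):
    """Sum positive elevation differences (m) times cell area (m^2) -> m^3."""
    total = 0.0
    for row_b, row_a in zip(before, after):
        for zb, za in zip(row_b, row_a):
            total += max(zb - za, 0.0) * cell_area_m2
    return total

before = [[100.0, 100.0], [100.0, 100.0]]   # assumed pre-mining surface (m)
after  = [[ 98.0,  97.5], [ 99.0, 100.0]]   # surveyed post-mining model (m)
volume = excavated_volume(before, after, cell_area_m2=25.0)  # 5 m grid cells
print(volume)              # 137.5 (m^3)
licensed_m3 = 100.0        # hypothetical authorized volume
print(volume > licensed_m3)  # True: exploitation beyond the authorized volume
```

Dedicated packages such as the forensic software cited above perform the same surface-differencing over dense point-cloud-derived meshes.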

Keywords: computational photogrammetry, environmental monitoring, mining, UAV

Procedia PDF Downloads 318
570 Development and Application of an Intelligent Masonry Modulation in BIM Tools: Literature Review

Authors: Sara A. Ben Lashihar

Abstract:

Heritage building information modelling (HBIM) of historical masonry buildings has expanded lately to meet urgent needs in conservation and structural analysis. Masonry structures are defining features of ancient architecture worldwide, with special cultural, spiritual, and historical significance. However, there is a research gap regarding the reliability of the HBIM modeling process for these structures. HBIM modeling of masonry structures faces significant challenges due to the inherent complexity and uniqueness of their structural systems. Most such processes are based on tracing point clouds and rarely draw on documents, archival records, or direct observation. The results are highly abstracted models whose accuracy does not exceed LOD 200. Masonry assemblages, especially curved elements such as arches, vaults, and domes, are generally modeled with standard BIM components or in-place models, and the brick textures are input graphically. Hence, further investigation is needed to establish a methodology for automatically generating parametric masonry components, developed algorithmically according to mathematical and geometric accuracy and the validity of the survey data. The main aim of this paper is to provide a comprehensive review of the state of the art in HBIM modeling of masonry structural elements and of the latest approaches to achieving parametric models with both visual fidelity and high geometric accuracy. The review covered more than 800 articles, proceedings papers, and book chapters matching the keywords "HBIM" and "masonry" from 2017 to 2021. The studies were downloaded from well-known, trusted bibliographic databases: Web of Science, Scopus, Dimensions, and Lens. As a starting point, a scientometric analysis was carried out using the VOSviewer software.
This software extracts the main keywords of these studies to retrieve the relevant works, and it calculates the strength of the relationships between those keywords. Subsequently, an in-depth qualitative review examined the studies with the highest frequency of occurrence and the strongest links to the topic, according to the VOSviewer results. The qualitative review focused on the latest approaches and the future directions proposed in these studies. The findings of this paper can serve as a valuable reference for researchers and BIM specialists seeking to build more accurate and reliable HBIM models of historic masonry buildings.
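The link-strength statistic this kind of scientometric tool reports can be sketched as a simple co-occurrence count over bibliographic records: the strength of a link is the number of studies in which two keywords co-occur, and a keyword's total link strength sums its links. The toy records below are assumptions, not the actual corpus:

```python
from collections import Counter
from itertools import combinations

# Toy sketch of keyword co-occurrence counting; records are invented.
def link_strengths(records):
    """Count, for every keyword pair, in how many records both appear."""
    links = Counter()
    for kws in records:
        for a, b in combinations(sorted(set(kws)), 2):
            links[(a, b)] += 1
    return links

records = [
    {"HBIM", "masonry", "point cloud"},
    {"HBIM", "masonry", "parametric"},
    {"HBIM", "point cloud"},
]
links = link_strengths(records)
print(links[("HBIM", "masonry")])  # 2: co-occur in two records
total = sum(v for (a, b), v in links.items() if "HBIM" in (a, b))
print(total)  # 5: total link strength of "HBIM"
```

Ranking keywords by frequency and total link strength is what guides the selection of studies for the qualitative review described above.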

Keywords: HBIM, masonry, structure, modeling, automatic, approach, parametric

Procedia PDF Downloads 165
569 Musculoskeletal Disorders among Employees of an Industrial Assembly Workshop: Semi-Quantitative Analysis of Biomechanical Constraints

Authors: Lamia Bouzgarrou, Amira Omrane, Haithem Kalel, Salma Kammoun

Abstract:

Background: In recent decades, the mechanical and electrical industrial sector has expanded greatly, with significant employment potential. However, this sector faces an increasing prevalence of musculoskeletal disorders, with heavy direct and indirect costs. Objective: The current intervention was motivated by the high frequency of upper-limb and back musculoskeletal disorders among operators of an assembly workshop in a leading company specialized in sanitary equipment and water and gas connections. We aimed to identify the biomechanical constraints on these operators through a semi-quantitative analysis of activity and biomechanical exposures, based on video recordings and the MUSKA-TMS software. Methods: We first conducted open observations and exploratory interviews to gain an overall understanding of the work situation. We then analyzed the operators' activity through systematic observations and interviews. Finally, we conducted a semi-quantitative analysis of biomechanical constraints with the MUSKA-TMS software, after video recording a representative period of activity. The assessment of biomechanical constraints was based on several criteria: biomechanical characteristics (work postures), aggravating factors (cold, vibration, stress, etc.), and exposure time (duration and frequency of solicitations, recovery phases), yielding a synthetic risk score from 1 to 4 (1: low risk of developing MSDs; 4: high risk). Results: The semi-quantitative analysis identified many elementary operations with high biomechanical constraints, such as high repetitiveness, insufficient recovery time, and constraining angulation of the shoulders, wrists, and cervical spine. Among these risky elementary operations were the assembly of the sleeve with the body, the assembly of the axis, and the control of gas valves on the testing table.
Transformations of the work situations were recommended, covering both the redevelopment of the industrial areas and the integration of new mechanical handling tools and equipment that reduce operator exposure to vibration. Conclusion: Musculoskeletal disorders are complex and costly. Moreover, an approach centered on the observation of work can promote interdisciplinary dialogue and exchange between actors, with the objective of maximizing the company's performance while improving the operators' quality of life.

Keywords: musculoskeletal disorders, biomechanical constraints, semi-quantitative analysis, ergonomics

Procedia PDF Downloads 161
568 Challenges for Competency-Based Learning Design in Primary School Mathematics in Mozambique

Authors: Satoshi Kusaka

Abstract:

The term ‘competency’ is attracting considerable scholarly attention worldwide with the advance of globalization in the 21st century and with the arrival of a knowledge-based society. In the current world environment, familiarity with varied disciplines is regarded to be vital for personal success. The idea of a competency-based educational system was mooted by the ‘Definition and Selection of Competencies (DeSeCo)’ project that was conducted by the Organization for Economic Cooperation and Development (OECD). Further, attention to this topic is not limited to developed countries; it can also be observed in developing countries. For instance, the importance of a competency-based curriculum was mentioned in the ‘2013 Harmonized Curriculum Framework for the East African Community’, which recommends key competencies that should be developed in primary schools. The introduction of such curricula and the reviews of programs are actively being executed, primarily in the East African Community but also in neighboring nations. Taking Mozambique as a case in point, the present paper examines the conception of ‘competency’ as a target of frontline education in developing countries. It also aims to discover the manner in which the syllabus, textbooks and lessons, among other things, in primary-level math education are developed and to determine the challenges faced in the process. This study employs the perspective of competency-based education design to analyze how the term ‘competency’ is defined in the primary-level math syllabus, how it is reflected in the textbooks, and how the lessons are actually developed. ‘Practical competency’ is mentioned in the syllabus, and the description of the term lays emphasis on learners' ability to interactively apply socio-cultural and technical tools, which is one of the key competencies that are advocated in OECD's ‘Definition and Selection of Competencies’ project. 
However, most of the content of the textbooks pertains to ‘basic academic ability’, and in actual classroom practice, teachers often impart lessons straight from the textbooks. It is clear that teachers' aptitude and classroom routines depend greatly on the cultivation of their own ‘practical competency’ as defined in the syllabus. In other words, there is great divergence between the ‘syllabus’, which is the intended curriculum, and the content of the ‘textbooks’. In fact, the material in the textbooks should serve as the bridge between the syllabus, which forms the guideline, and the lessons, which represent the ‘implemented curriculum’. Moreover, the results of this investigation reveal that the problem can only be resolved through the cultivation of ‘practical competency’ in teachers themselves, which is currently insufficient.

Keywords: competency, curriculum, mathematics education, Mozambique

Procedia PDF Downloads 194
567 Limiting Freedom of Expression to Fight Radicalization: The 'Silencing' of Terrorists Does Not Always Allow Rights to 'Speak Loudly'

Authors: Arianna Vedaschi

Abstract:

This paper addresses the relationship between freedom of expression, national security, and radicalization. Is it still possible to talk about a balance between the first two elements? Or, due to the intrusion of the third, is it more appropriate to consider freedom of expression as “permanently disfigured” by securitarian concerns? In this study, both the legislative and the judicial level are taken into account, and the comparative method is employed in order to provide the reader with a complete framework of relevant issues and a workable set of solutions. The analysis starts from the finding that the tension between free speech and national security has become a major issue in democratic countries, whose very essence is continuously endangered by the ever-changing and multi-faceted threat of international terrorism. In particular, a change in terrorist groups’ recruiting patterns, attracting more and more people by way of a cutting-edge communicative strategy that often employs sophisticated technology as a radicalization tool, has called on law-makers to modify their approach to dangerous speech. While traditional constitutional and criminal law used to punish speech only if it explicitly and directly incited the commission of a criminal action (the “cause-effect” model), so-called glorification offences, punishing mere ideological support for terrorism, often on the web, are becoming commonplace in the comparative scenario. Although this is a direct, and even somewhat understandable, consequence of the impending terrorist menace, this research shows many problematic issues connected to such a preventive approach. First, from a predominantly theoretical point of view, this trend negatively impacts the already blurred line between permissible and prohibited speech. Second, from a pragmatic point of view, such legislative tools are not always able to keep up with the ongoing developments of both terrorist groups and their use of technology.
In other words, there is a risk that such measures become outdated even before their application. Indeed, it seems hard to still talk about a proper balance: what was previously clearly perceived as a balancing of values (freedom of speech v. public security) has turned, in many cases, into a hierarchy with security at its apex. In light of these findings, this paper concludes that such a complex issue would perhaps be better dealt with through a combination of policies: not only criminalizing ‘terrorist speech,’ which should be relegated to a last resort tool, but acting at an even earlier stage, i.e., trying to prevent dangerous speech itself. This might be done by promoting social cohesion and the inclusion of minorities, so as to reduce the probability of people considering terrorist groups as a “viable option” to deal with the lack of identification within their social contexts.

Keywords: radicalization, free speech, international terrorism, national security

Procedia PDF Downloads 197
566 The Markers -mm and dämmo in Amharic: A Developmental Approach

Authors: Hayat Omar

Abstract:

Languages provide speakers with a wide range of linguistic units to organize and deliver information, and there are several ways to verbally express mental representations of events. From the linguistic tools they have acquired, speakers select the one that conveys their message with the greatest communicative effect. Our study focuses on two markers in Amharic (an Ethiopian Semitic language), -mm and dämmo. Our aim is to examine, from a developmental perspective, how they are used by speakers, and to distinguish the communicative and pragmatic functions these markers signal. To do so, we created a corpus of sixty narrative productions by children aged 5-6, 7-8, and 10-12 years and by adult Amharic speakers. The experimental material used to collect our data is the wordless picture book ‘Frog, Where Are You?’. Although -mm and dämmo are each used in specific contexts, they are sometimes analyzed as interchangeable. The suffix -mm is complex and multifunctional: it marks the end of the negative verbal structure, appears in the relative structure of the imperfect, creates new words such as adverbials or pronouns, coordinates words and sentences, and marks the link between macro-propositions within a larger textual unit. It has been analyzed as a marker of insistence, a topic-shift marker, an element of concatenation, a contrastive focus marker, and a ‘bisyndetic’ coordinator. By contrast, dämmo has a more limited function and has attracted little scholarly attention; the only treatment we could find analyzes it as a ‘monosyndetic’ coordinator. Setting these two elements side by side made it possible to understand their distinctive functions and refine their description. When it comes to marking a referent, the choice between -mm and dämmo is not neutral: it depends on whether the tagged argument is newly introduced, maintained, promoted, or reintroduced. The presence of these morphemes signals the inter-phrastic link.
The information is seized by anaphora or presupposition: -mm points upstream, while dämmo points downstream, the latter calling for new information. The speaker uses -mm or dämmo according to what he assumes to be known to his interlocutors. The results show that although all speakers use both markers, the markers do not always have the same scope across speakers, and their use varies with age. dämmo is mainly used to mark a contrastive topic signaling the concomitance of events, and it is more common in young children’s narratives (F(3,56) = 3.82, p < .01). Some values of -mm (e.g., the additive) are acquired very early, while others emerge rather late and increase with age (F(3,56) = 3.2, p < .03). The difficulty is due not only to its synthetic structure but above all to the fact that it is multi-purpose and requires memory work: it highlights the constituent on which it operates in order to clarify how the message should be interpreted.

Keywords: acquisition, cohesion, connection, contrastive topic, contrastive focus, discourse marker, pragmatics

Procedia PDF Downloads 134
565 Method of Complex Estimation of Text Perusal and Indicators of Reading Quality in Different Types of Commercials

Authors: Victor N. Anisimov, Lyubov A. Boyko, Yazgul R. Almukhametova, Natalia V. Galkina, Alexander V. Latanov

Abstract:

Modern commercials presented on billboards, on TV and on the Internet contain a great deal of information about the product or service in text form. However, this information cannot always be perceived and understood by consumers. Typical sociological focus-group studies often fail to reveal important features of how text messages are interpreted and understood. In addition, there is no reliable method to determine the degree to which the information contained in a text is understood: the mere fact of viewing a text does not mean that the consumer has perceived and grasped its meaning. At the same time, tools based on marketing analysis allow only indirect estimation of the reading and comprehension process. The aim of this work was therefore to develop a valid method for recording objective indicators in real time to establish the fact of reading and the degree of text comprehension. Psychophysiological parameters recorded during reading can form the basis of such a method. We studied the relationship between multimodal psychophysiological parameters and text comprehension during reading using correlation analysis. We used eye-tracking technology to record eye-movement parameters as an estimate of visual attention, electroencephalography (EEG) to assess cognitive load, and polygraphic indicators (skin-galvanic reaction, SGR) reflecting the respondent's emotional state during reading. We revealed reliable interrelations between information uptake and the dynamics of psychophysiological parameters while reading the text of commercials. Eye-movement parameters reflected the difficulties respondents experienced in perceiving ambiguous parts of the text. EEG dynamics in the alpha band were related to the cumulative effect of cognitive load. SGR dynamics were related to the respondent's emotional state and to the meaning of the text and the type of commercial.
EEG and polygraph parameters together also reflected respondents' mental difficulties in understanding the text and showed significant differences between cases of low and high comprehension. We also revealed differences in psychophysiological parameters across types of commercials (static vs. video; financial vs. cinema vs. pharmaceutics vs. mobile communication, etc.). Conclusions: our methodology enables multimodal evaluation of text perusal and of reading quality in commercials. In general, the results indicate that an integral model could be designed to estimate comprehension of commercial text on a percentage scale based on all of the markers noted.
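The correlation step described above can be sketched as follows; the variable names, coupling strengths and sample size are illustrative assumptions, not the study's data.

```python
# Correlating each psychophysiological measure with a comprehension score.
# All data below are simulated solely to demonstrate the analysis shape.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 40                                     # hypothetical number of respondents
comprehension = rng.uniform(0, 100, n)     # % of text correctly understood
# Simulated measures loosely coupled to comprehension for demonstration:
fixation_ms = 250 - 0.5 * comprehension + rng.normal(0, 15, n)   # eye tracking
alpha_power = 1.0 + 0.01 * comprehension + rng.normal(0, 0.2, n) # EEG alpha band
sgr_peaks   = rng.poisson(5, n)                                  # polygraph (SGR)

for name, x in [("fixation", fixation_ms), ("alpha", alpha_power), ("SGR", sgr_peaks)]:
    r, p = stats.pearsonr(x, comprehension)
    print(f"{name}: r = {r:+.2f}, p = {p:.3f}")
```

In the real protocol, each row would be one respondent's aggregated recording for one commercial, and the correlations would feed the integral comprehension model.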

Keywords: reading, commercials, eye movements, EEG, polygraphic indicators

Procedia PDF Downloads 166
564 Effectiveness of Traditional Chinese Medicine in the Treatment of Eczema: A Systematic Review and Meta-Analysis Based on Eczema Area and Severity Index Score

Authors: Oliver Chunho Ma, Tszying Chang

Abstract:

Background: Traditional Chinese Medicine (TCM) has been widely used in the treatment of eczema. However, there is currently a lack of comprehensive research on the overall effectiveness of TCM in treating eczema, particularly using the Eczema Area and Severity Index (EASI) score as an evaluation tool. Meta-analysis can integrate the results of multiple studies to provide more convincing evidence. Objective: To conduct a systematic review and meta-analysis based on the EASI score to evaluate the overall effectiveness of TCM in the treatment of eczema. Specifically, the study will review and analyze published clinical studies that investigate TCM treatments for eczema and use the EASI score as an outcome measure, comparing the differences in improving the severity of eczema between TCM and other treatment modalities, such as conventional Western medicine treatments. Methods: Relevant studies, including randomized controlled trials (RCTs) and non-randomized controlled trials, that involve TCM treatment for eczema and use the EASI score as an outcome measure will be searched in medical literature databases such as PubMed, CNKI, etc. Relevant data will be extracted from the selected studies, including study design, sample size, treatment methods, improvement in EASI score, etc. The methodological quality and risk of bias of the included studies will be assessed using appropriate evaluation tools (such as the Cochrane Handbook). The results of the selected studies will be statistically analyzed, including pooling effect sizes (such as standardized mean differences, relative risks, etc.), subgroup analysis (e.g., different TCM syndromes, different treatment modalities), and sensitivity analysis (e.g., excluding low-quality studies). Based on the results of the statistical analysis and quality assessment, the overall effectiveness of TCM in improving the severity of eczema will be interpreted. 
Expected outcomes: by integrating the results of multiple studies, we expect to provide more convincing evidence regarding the specific effects of TCM in improving the severity of eczema. Subgroup and sensitivity analyses can further elucidate whether the effectiveness of TCM treatment is influenced by different factors. In addition, we will compare the results of the meta-analysis with the clinical data from our clinic: for both, we will compute descriptive statistics (means, standard deviations, percentages, etc.) and assess the differences between them with statistical tests such as the independent-samples t-test or non-parametric tests.
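The pooling of standardized mean differences described in the methods can be sketched as below. The three studies are invented placeholders; a fixed-effect inverse-variance model is shown for brevity, whereas the review may also fit a random-effects model.

```python
# Hedges' g per study, then an inverse-variance fixed-effect pooled estimate.
# Study rows are invented: (mean, SD, n) of EASI improvement per arm.
import numpy as np

studies = [  # (mean_tcm, sd_tcm, n_tcm, mean_ctrl, sd_ctrl, n_ctrl)
    (8.5, 3.0, 40, 6.0, 3.2, 38),
    (7.2, 2.5, 55, 6.5, 2.8, 57),
    (9.0, 4.0, 30, 5.5, 3.5, 32),
]
gs, ws = [], []
for m1, s1, n1, m2, s2, n2 in studies:
    sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                                  # Cohen's d
    g = (1 - 3 / (4 * (n1 + n2) - 9)) * d               # small-sample correction
    v = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))  # approx. variance of g
    gs.append(g)
    ws.append(1 / v)                                    # inverse-variance weight
pooled = np.dot(ws, gs) / np.sum(ws)
se = 1 / np.sqrt(np.sum(ws))
print(f"pooled g = {pooled:.2f} (95% CI {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f})")
```

Subgroup analysis would repeat this pooling within each subset (e.g., per TCM syndrome), and sensitivity analysis would re-run it after excluding low-quality studies.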

Keywords: eczema, traditional Chinese medicine, EASI, systematic review, meta-analysis

Procedia PDF Downloads 58
563 The Role Played by Awareness and Complexity through the Use of a Logistic Regression Analysis

Authors: Yari Vecchio, Margherita Masi, Jorgelina Di Pasquale

Abstract:

Adoption of Precision Agriculture (PA) takes place in a multidimensional and complex scenario. The process of adopting innovations is inherently complex and social, influenced by other producers, change agents, social norms and organizational pressure. Complexity depends on factors that interact and influence the decision to adopt. Farm and operator characteristics, as well as the organizational, informational and agro-ecological context, directly affect adoption. This influence has been studied to measure drivers and to clarify 'bottlenecks' in the adoption of agricultural innovation. The decision-making process is a multistage procedure in which an individual passes from first hearing about a technology to final adoption. Awareness is the initial stage: the moment at which an individual learns of the technology's existence. The 'static' concept of adoption has thus been superseded; awareness is a precondition of adoption. Treating it as such avoids the erroneous evaluations that arise from analyzing a population only partly aware of the technologies. Accordingly, the present study puts forward an empirical analysis of Italian farmers that considers awareness a prerequisite for adoption. Its purpose is to analyze both the factors that affect the probability of adopting and the determinants that drive an aware individual not to adopt. Data were collected through a questionnaire administered in November 2017. A preliminary descriptive analysis showed that adoption levels are highest among farmers who are younger, better educated, more information-intensive, with larger and more labor-intensive farms, and who perceive the adoption process as less complex. The use of a logit model makes it possible to assess the weight of labor intensity and of the complexity perceived by the potential adopter in the PA adoption process.
All these findings suggest important policy implications: measures dedicated to promoting innovation will need to be specific to each phase of the adoption process. In particular, they should increase awareness of PA tools and foster the dissemination of information in order to reduce the perceived complexity of the adoption process. These implications are particularly important in Europe, where a reform of the Common Agricultural Policy oriented toward innovation has been announced. In this context, measures supporting innovation should consider the relationship between the various organizational and structural dimensions of European agriculture and approaches to innovation.
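A logit model of the kind described can be sketched with plain NumPy; the two predictors, their effect sizes and the simulated sample are assumptions made only to illustrate the estimation, not the study's survey data.

```python
# Logistic regression (logit) of PA adoption on two standardized predictors,
# fitted by gradient ascent on the log-likelihood. All data are simulated.
import numpy as np

rng = np.random.default_rng(2)
n = 200
complexity = rng.normal(0, 1, n)   # perceived complexity (standardized)
labor = rng.normal(0, 1, n)        # labor intensity (standardized)
# Invented "true" effects: complexity lowers, labor intensity raises adoption
logit = 0.3 - 1.0 * complexity + 0.8 * labor
adopt = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([np.ones(n), complexity, labor])
beta = np.zeros(3)
for _ in range(2000):              # plain gradient ascent, average gradient
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (adopt - p) / n
print("coefficients (intercept, complexity, labor):", np.round(beta, 2))
```

With survey responses in place of the simulated arrays (and standard errors from the information matrix), the fitted coefficients quantify each driver's weight on the probability of adoption.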

Keywords: adoption, awareness, complexity, precision agriculture

Procedia PDF Downloads 138
562 The Residual Efficacy of Etofenprox WP on Different Surfaces for Malaria Control in the Brazilian Legal Amazon

Authors: Ana Paula S. A. Correa, Allan K. R. Galardo, Luana A. Lima, Talita F. Sobral, Josiane N. Muller, Jessica F. S. Barroso, Nercy V. R. Furtado, Ednaldo C. Rêgo, Jose B. P. Lima

Abstract:

Malaria is a public health problem in the Brazilian Legal Amazon. Among the integrated approaches to anopheline control, Indoor Residual Spraying (IRS) remains one of the main tools in the basic strategy applied in the Amazonian states, where the National Malaria Control Program currently uses an insecticide of the pyrethroid class, Etofenprox WP. Understanding the residual efficacy of insecticides on different surfaces is essential for determining spray cycles, maintaining rational use and avoiding product waste. The aim of this study was to evaluate the residual efficacy of Etofenprox (VECTRON® 20 WP) on surfaces of unplastered cement (UC) and unpainted wood (UW) on panels, in the field, and in a semi-field evaluation in Brazil's Amapa State. The evaluation criterion was the cone bioassay test, following the method recommended by the World Health Organization (WHO), using plastic cones and female Anopheles sp. mosquitoes. The tests were carried out on laboratory panels, in a semi-field evaluation in a 'test house' built in the municipality of Macapa, and in the field in 20 houses, ten per surface type (UC and UW), in a malaria-endemic area of the municipality of Mazagão. Residual efficacy was measured from March to September 2017, starting one day after spraying and repeated monthly for six months. The UW surface presented higher residual efficacy than the UC surface. In fact, UW retained residual efficacy throughout the study period, with mortality rates above 80% on the panels (95%), in the 'test house' (86%) and in the field houses (87%). On the UC surface a decrease in mortality was observed in all tests, with mortality rates of 45%, 47% and 29% on panels, in semi-field and in field conditions, respectively; residual efficacy ≥ 80% occurred only in the first evaluation, in the bioassay 24 hours after spraying in the 'test house'.
Thus, only the UW surface meets the specifications of the WHO Pesticide Evaluation Scheme (WHOPES) regarding the duration of effective action (three to six months). In sum, the residual efficacy of the insecticide varied across the surfaces on which it was sprayed. Although IRS with Etofenprox WP was efficient on UW surfaces and can be used in spraying cycles at four-month intervals, it is important to consider the diversity of houses in the Brazilian Legal Amazon and to implement alternatives for vector control, including the evaluation of new products or different insecticide formulations.
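The bioassay endpoint behind the 80% criterion can be sketched as below. The counts are illustrative, not the study's data; the correction for control mortality follows Abbott's formula, which WHO cone-test guidance applies when control mortality is between 5% and 20%.

```python
# Percent mortality from a cone bioassay, with Abbott's correction for
# control mortality, checked against the WHO 80% efficacy threshold.
def abbott_corrected_mortality(dead, exposed, control_dead, control_exposed):
    treated = 100.0 * dead / exposed
    control = 100.0 * control_dead / control_exposed
    if control < 5:   # common practice: no correction below 5% control mortality
        return treated
    return 100.0 * (treated - control) / (100.0 - control)

# Hypothetical month-3 UW panel: 43 of 50 dead, 4 of 50 dead in the control
m = abbott_corrected_mortality(dead=43, exposed=50, control_dead=4, control_exposed=50)
print(f"corrected mortality = {m:.1f}% -> {'pass' if m >= 80 else 'fail'} (80% criterion)")
```

Repeating this calculation monthly per surface yields the efficacy curves from which the spray-cycle interval is read off.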

Keywords: Anopheles, vector control, insecticide, bioassay

Procedia PDF Downloads 165
561 The Effects of Computer Game-Based Pedagogy on Graduate Students' Statistics Performance

Authors: Clement Yeboah, Eva Laryea

Abstract:

A pretest-posttest within-subjects experimental design was employed to examine the effects of a computerized basic statistics learning game on the achievement and statistics-related anxiety of students enrolled in an introductory graduate statistics course. Participants (N = 34) were graduate students in a variety of programs at a state-funded research university in the southeastern United States. We analyzed pretest-posttest differences using paired-samples t-tests for achievement and for statistics anxiety. The t-test for statistics knowledge was statistically significant, indicating significant mean gains in statistical knowledge as a function of the game-based intervention. Likewise, the t-test for statistics-related anxiety was statistically significant, indicating a decrease in anxiety from pretest to posttest. The implications of the present study are significant for both teachers and students. For teachers, using computer games such as the one developed by the researchers can help create a more dynamic and engaging classroom environment and improve student learning outcomes. For students, playing such educational games can help develop important skills such as problem solving, critical thinking and collaboration. Students can develop an interest in the subject matter and spend quality time learning the course material as they play, without even realizing that they are learning a course they expected to be hard. The future directions of the present study are promising as technology continues to advance and become more widely available. Potential future developments include the integration of virtual and augmented reality into educational games, the use of machine learning and artificial intelligence to create personalized learning experiences, and the development of new and innovative game-based assessment tools.
It is also important to consider the ethical implications of computer game-based pedagogy, such as the potential for games to perpetuate harmful stereotypes and biases. As the field continues to evolve, it will be crucial to address these issues and work towards creating inclusive and equitable learning experiences for all students. This study has the potential to revolutionize the way basic statistics graduate students learn and offers exciting opportunities for future development and research. It is an important area of inquiry for educators, researchers, and policymakers and will continue to be a dynamic and rapidly evolving field for years to come.
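The paired-samples analysis for this pretest-posttest design can be sketched as follows; the scores are simulated for illustration, since the study's raw data are not reproduced in the abstract.

```python
# Paired-samples t-test for a pretest-posttest within-subjects design (N = 34).
# Scores are simulated: pretest knowledge in %, plus a positive gain after play.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 34
pretest = rng.normal(60, 10, n)              # simulated statistics knowledge, %
posttest = pretest + rng.normal(8, 5, n)     # simulated gain after the game
t_stat, p_value = stats.ttest_rel(posttest, pretest)
print(f"t({n - 1}) = {t_stat:.2f}, p = {p_value:.4f}")
```

The anxiety analysis has the same shape with the sign of the expected change reversed (a decrease from pretest to posttest).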

Keywords: pretest-posttest within subjects, computer game-based learning, statistics achievement, statistics anxiety

Procedia PDF Downloads 77
560 Informed Urban Design: Minimizing Urban Heat Island Intensity via Stochastic Optimization

Authors: Luis Guilherme Resende Santos, Ido Nevat, Leslie Norford

Abstract:

The Urban Heat Island (UHI) effect is characterized by increased air temperatures in urban areas compared to the undeveloped rural surroundings. With urbanization and densification, UHI intensity increases, bringing negative impacts on livability, health and the economy. To reduce these effects, design factors must be taken into consideration when planning future developments. Given design constraints such as population size and the area available for development, non-trivial decisions are required regarding buildings' dimensions and their spatial distribution. We develop a framework for optimizing urban design so as to jointly minimize UHI intensity and buildings' energy consumption. First, the design constraints are defined according to spatial and population limits, establishing realistic boundaries applicable to real-life decisions. Second, the Urban Weather Generator (UWG) and EnergyPlus tools are used to generate outputs of UHI intensity and total building energy consumption, respectively. These outputs vary with a set of inputs describing urban morphology, such as building height, urban canyon width and population density. Lastly, an optimization problem is cast in which a utility function quantifies the performance of each design candidate (e.g., minimizing a linear combination of UHI intensity and energy consumption), subject to a set of constraints. Solving this problem is difficult, since there is no simple analytic form representing the UWG and EnergyPlus models. We therefore cannot use direct optimization techniques and instead develop an indirect 'black box' optimization algorithm, based on a stochastic optimization method known as the Cross-Entropy Method (CEM).
The CEM translates the deterministic optimization problem into an associated stochastic optimization problem that is simple to solve analytically. We illustrate the model on a typical residential area in Singapore. Given the country's fast growth in population and built area, and the land made available by reclamation, urban planning decisions are of the utmost importance; moreover, the hot and humid climate heightens concern about the impact of UHI. The problem presented is highly relevant to early urban design stages, and the objective of the framework is to guide decision makers and assist them in including and evaluating urban microclimate and energy aspects in the urban planning process.
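The CEM loop can be sketched on a toy problem. Here the true UWG + EnergyPlus pipeline is replaced by a simple quadratic surrogate in two assumed design variables (building height and canyon width); all numbers, including the surrogate's optimum, are illustrative.

```python
# Cross-Entropy Method on a 2-D "black box" standing in for the simulation
# pipeline. The objective mimics a weighted sum of UHI intensity and energy use.
import numpy as np

def objective(x):
    # Hypothetical surrogate with a best design at height 30 m, width 12 m
    height, width = x
    return (height - 30.0) ** 2 / 100 + (width - 12.0) ** 2 / 10

rng = np.random.default_rng(4)
mu = np.array([50.0, 25.0])      # initial guess: height [m], canyon width [m]
sigma = np.array([20.0, 10.0])   # initial sampling spread
for _ in range(30):
    samples = rng.normal(mu, sigma, size=(100, 2))   # candidate designs
    scores = np.array([objective(s) for s in samples])
    elites = samples[np.argsort(scores)[:10]]        # keep the best 10%
    mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6  # refit sampler
print("optimized design (height, width):", np.round(mu, 1))
```

Each `objective` call in the real framework would be one UWG/EnergyPlus run, which is exactly why a sample-efficient black-box method is needed; constraints can be handled by penalizing or rejecting infeasible samples.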

Keywords: building energy consumption, stochastic optimization, urban design, urban heat island, urban weather generator

Procedia PDF Downloads 131
559 Leptospira Lipl32-Specific Antibodies: Therapeutic Property, Epitopes Characterization and Molecular Mechanisms of Neutralization

Authors: Santi Maneewatchararangsri, Wanpen Chaicumpa, Patcharin Saengjaruk, Urai Chaisri

Abstract:

Leptospirosis is a globally neglected disease that continues to be a significant public health and veterinary burden, with millions of cases reported each year. Early and accurate differential diagnosis of leptospirosis from other febrile illnesses, and the development of a broad-spectrum leptospirosis vaccine, are needed. The LipL32 outer membrane lipoprotein is a member of the Leptospira adhesive matrices and has been found to exert hemolytic activity against erythrocytes in vitro. LipL32 is therefore regarded as a potential target for diagnosis, for broad-spectrum leptospirosis vaccines, and for passive immunotherapy. In this study, we established LipL32-specific mouse monoclonal antibodies, mAbLPF1 and mAbLPF2, and their respective engineered mouse and humanized single-chain variable fragments (ScFv). The antibodies' neutralizing activity against Leptospira-mediated hemolysis in vitro, and the therapeutic efficacy of the mAbs in hamsters infected with heterologous Leptospira, were demonstrated. The epitope peptide of mAbLPF1 was mapped to a non-contiguous carboxy-terminal β-turn and amphipathic α-helix of the LipL32 structure, which contribute to phospholipid/host-cell adhesion and membrane insertion. We found that the mAbLPF2 epitope is located on the interacting loop of the peptide-binding groove of the LipL32 molecule responsible for interactions with host constituents. The epitope sequences are highly conserved among Leptospira spp. and absent from the LipL32 superfamily of other microorganisms. Both epitopes are surface-exposed, readily accessible to mAbs, and immunogenic; however, they are less dominant when probed with LipL32-specific immunoglobulins from leptospirosis-patient sera and with rabbit hyperimmune serum raised against whole Leptospira. Our study also demonstrated mAb-mediated inhibition of LipL32 adhesion to host membrane components and cells, as well as the anti-hemolytic activity of the respective antibodies.
The therapeutic antibodies, particularly the humanized ScFv, have potential for further development as non-drug therapeutic agents for human leptospirosis, especially in subjects allergic to antibiotics. The epitope peptides recognized by the two therapeutic mAbs have potential use as tools for structure-function studies. Finally, the protective peptides may serve as targets for epitope-based vaccines for the control of leptospirosis.

Keywords: leptospira lipl32-specific antibodies, therapeutic epitopes, epitopes characterization, immunotherapy

Procedia PDF Downloads 297
558 Non-Perturbative Vacuum Polarization Effects in One- and Two-Dimensional Supercritical Dirac-Coulomb System

Authors: Andrey Davydov, Konstantin Sveshnikov, Yulia Voronina

Abstract:

There is now considerable interest in the non-perturbative QED effects caused by the diving of discrete levels into the negative continuum in supercritical static, or adiabatically slowly varying, Coulomb fields created by localized extended sources with Z > Z_cr. Such effects have attracted a considerable amount of theoretical and experimental activity, since in 3+1 D QED, for Z > Z_cr,1 ≈ 170, a non-perturbative reconstruction of the vacuum state is predicted, which should be accompanied by a number of nontrivial effects, including vacuum positron emission. Similar effects should be expected in both 2+1 D (planar graphene-based heterostructures) and 1+1 D (the one-dimensional 'hydrogen ion'). This report is devoted to the study of such essentially non-perturbative vacuum effects for supercritical Dirac-Coulomb systems in 1+1 D and 2+1 D, with the main attention given to the vacuum polarization energy. Although most works consider the vacuum charge density as the main polarization observable, the vacuum energy turns out to be no less informative and in many respects complementary to the vacuum density. Moreover, the main non-perturbative effects, which appear in the vacuum polarization for supercritical fields due to levels diving into the lower continuum, show up even more clearly in the behavior of the vacuum energy, explicitly demonstrating their possible role in the supercritical region. In both 1+1 D and 2+1 D, we first explore the renormalized vacuum density in the supercritical region using the Wichmann-Kroll method. Then, taking into account the results for the vacuum density, we formulate the renormalization procedure for the vacuum energy. To evaluate the latter explicitly, an original technique is applied, based on a special combination of analytical methods, computer algebra tools and numerical calculations.
It is shown that, over a wide range of the external source parameters (the charge Z and size R), the renormalized vacuum energy in the supercritical region can deviate significantly from the perturbative quadratic growth, up to a pronouncedly decreasing behavior with jumps of (-2 x mc^2), which occur each time the next discrete level dives into the negative continuum. In the considered range of Z and R, the vacuum energy behaves as ~ -Z^2/R in 1+1 D and ~ -Z^3/R in 2+1 D, reaching deeply negative values. Such behavior confirms the assumption of the transmutation of the neutral vacuum into a charged one, and thereby of the spontaneous positron emission accompanying the emergence of each next vacuum shell, as required by total charge conservation. Finally, we note that the methods developed for evaluating the vacuum energy in 2+1 D could, with minimal modifications, be carried over to the three-dimensional case, where the vacuum energy is expected to behave as ~ -Z^4/R and so could be competitive with the classical electrostatic energy of the Coulomb source.
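The reported behavior can be summarized schematically as follows (the notation for the renormalized vacuum polarization energy is assumed here, since the abstract does not fix symbols):

```latex
% Each diving of a discrete level into the lower continuum lowers the
% renormalized vacuum energy by two electron rest masses:
\Delta \mathcal{E}_{\mathrm{VP}}^{\mathrm{ren}} = -2\,mc^{2}
\quad \text{per level diving,}
% while the large-Z behavior found at fixed source size R is
\mathcal{E}_{\mathrm{VP}}^{\mathrm{ren}}(Z,R) \sim -\frac{Z^{2}}{R}\ (1{+}1\,\mathrm{D}),
\qquad
\mathcal{E}_{\mathrm{VP}}^{\mathrm{ren}}(Z,R) \sim -\frac{Z^{3}}{R}\ (2{+}1\,\mathrm{D}),
\qquad
\mathcal{E}_{\mathrm{VP}}^{\mathrm{ren}}(Z,R) \sim -\frac{Z^{4}}{R}\ (3{+}1\,\mathrm{D},\ \text{expected}).
```

The negative, steepening scaling with dimension is what makes the vacuum energy competitive with the classical electrostatic energy of the source in 3+1 D.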

Keywords: non-perturbative QED-effects, one- and two-dimensional Dirac-Coulomb systems, supercritical fields, vacuum polarization

Procedia PDF Downloads 202