Search results for: and teachers' interaction approaches

888 Promoting 'One Health' Surveillance and Response Approach Implementation Capabilities against Emerging Threats and Epidemics Crisis Impact in African Countries

Authors: Ernest Tambo, Ghislaine Madjou, Jeanne Y. Ngogang, Shenglan Tang, Zhou XiaoNong

Abstract:

Implementing a national to community-based 'One Health' surveillance approach for mitigating human, animal and environmental consequences offers great opportunities and added value for sustainable development and wellbeing. Global partnerships, policy commitment and financial investment in the 'One Health' surveillance approach are much needed to address the mitigation of evolving threats and epidemic crises in African countries. The paper provides insights into how China-Africa health development cooperation can promote the 'One Health' surveillance approach in response advocacy and mitigation. China-Africa health development initiatives provide new prospects for guiding and moving forward appropriate, evidence-based advocacy and mitigation management approaches and strategies in attaining Universal Health Coverage (UHC) and the Sustainable Development Goals (SDGs). Early, continuous, quality and timely surveillance data collection and coordinated information-sharing practices in malaria and other diseases are demonstrated in Comoros, Zanzibar, Ghana and Cameroon. Improvements in access to a variety of contextual sources and networks of data-sharing platforms are needed to guide evidence-based and tailored detection of, and response to, unusual hazardous events. Moreover, understanding threat and disease trends and delivering frontline or point-of-care responses is crucial to promoting the integrated, sustainable and targeted implementation of local and national 'One Health' surveillance and response. Importantly, operational guidelines are vital for increasing coherent financing and national workforce capacity development mechanisms. Strengthening participatory partnerships, collaboration and monitoring strategies is essential to achieving global health agenda effectiveness in Africa. At the same time, enhancing the reporting and dissemination of surveillance data streams improves their usefulness in informing policy decisions, health systems programming, financial mobilization and prioritized allocation, as well as in assessing the strengths and weaknesses of programs before, during and after threat and epidemic crises. Thus, capitalizing on 'One Health' surveillance and response advocacy and mitigation implementation is timely for consolidating the African Union Agenda 2063 and Africa's renaissance capabilities and expectations.

Keywords: Africa, one health approach, surveillance, response

Procedia PDF Downloads 421
887 Miniaturization of Germanium Photo-Detectors by Using Micro-Disk Resonator

Authors: Haifeng Zhou, Tsungyang Liow, Xiaoguang Tu, Eujin Lim, Chao Li, Junfeng Song, Xianshu Luo, Ying Huang, Lianxi Jia, Lianwee Luo, Kim Dowon, Qing Fang, Mingbin Yu, Guoqiang Lo

Abstract:

Several germanium photodetectors (PDs) built on silicon micro-disks are fabricated on standard Si photonics multiple project wafers (MPW) and demonstrated to exhibit very low dark current, satisfactory operation bandwidth and moderate responsivity. Among them, a vertical p-i-n Ge PD based on a 2.0 µm-radius micro-disk has a dark current as low as 35 nA, compared with a dark current of 1 µA for a conventional PD with an area of 100 µm². The operation bandwidth is around 15 GHz at a reverse bias of 1 V. The responsivity is about 0.6 A/W. The microdisk is a striking planar structure in integrated optics for enhancing light-matter interaction and constructing various photonic devices. The disk geometry strongly confines circulating light into an ultra-small volume in the form of whispering gallery modes. A laser may benefit from a microdisk in which a single mode overlaps the gain material both spatially and spectrally. Compared to microrings, the microdisk removes the inner boundary to enable even better compactness, which also makes it very suitable for scenarios where electrical connections are needed. For example, an ultra-low power (≈ fJ) athermal Si modulator has been demonstrated with a bit rate of 25 Gbit/s by confining both photons and electrically-driven carriers into a microscale volume. In this work, we study Si-based PDs with Ge selectively grown on a microdisk with a radius of a few microns. A unique feature of using a microdisk for a Ge photodetector is that mode selection is not important. In laser applications or other passive optical components, the microdisk must be designed very carefully to excite only the fundamental mode, since a microdisk essentially supports many higher-order modes in the radial direction. For detector applications, however, this is not an issue because the local light absorption is mode-insensitive. Light power carried by all modes is expected to be converted into photocurrent. Another benefit of using a microdisk is that the power circulation inside it avoids the need to introduce any reflector. A complete simulation model taking all involved materials into account is established to study the promise of microdisk structures for photodetectors using the finite-difference time-domain (FDTD) method. Based on the current preliminary data, directions for further improving the device performance are also discussed.
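As a rough illustration of how the reported responsivity of 0.6 A/W relates to external quantum efficiency, the sketch below uses the standard relation η = R·h·c/(q·λ). The operating wavelength of 1550 nm is an assumption typical for silicon photonics and is not stated in the abstract.

```python
# Sketch: relate photodetector responsivity to external quantum efficiency.
# The 1550 nm wavelength is an assumption (not given in the abstract);
# the 0.6 A/W responsivity is the value reported above.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
Q = 1.602e-19   # elementary charge, C

def quantum_efficiency(responsivity_a_per_w, wavelength_m):
    """External quantum efficiency eta = R * h * c / (q * lambda)."""
    return responsivity_a_per_w * H * C / (Q * wavelength_m)

r = 0.6           # A/W, as reported
lam = 1550e-9     # m, assumed operating wavelength
print(f"External quantum efficiency ~ {quantum_efficiency(r, lam):.2f}")
# ~0.48, i.e. roughly half of the incident photons yield collected carriers.
```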

Keywords: integrated optical devices, silicon photonics, micro-resonator, photodetectors

Procedia PDF Downloads 407
886 Assessment of Community Perceptions of Mangrove Ecosystem Services and Their Link to SDGs in Vanga, Kenya

Authors: Samson Obiene, Khamati Shilabukha, Geoffrey Muga, James Kairo

Abstract:

Mangroves play a vital role in the achievement of multiple Sustainable Development Goals (SDGs), particularly SDG 14 (life below water). Their management, however, is faced with several shortcomings arising from inadequate knowledge of the perceptions of their ecosystem services, hence the need to map mangrove goods and services onto the SDGs while interrogating the disaggregated perceptions. This study therefore aimed at exploring the parities and disparities in attitudes and perceptions of mangrove ecosystem services among community members of Vanga and the link of the ecosystem services (ESs) to specific SDG targets. The study was based in the Kenya-Tanzania transboundary area in Vanga, where a carbon-offset project on mangroves is being upscaled. A mixed-methods approach employing surveys, focus group discussions (FGDs) and reviews of secondary data was used in the study. A two-stage cluster sampling was used to select the study population and the sample size. FGDs were conducted by purposively selecting active participants in mangrove-related activities with distinct socio-demographic characteristics. Sampled respondents comprised males and females of different occupations and age groups. Secondary data review was used to select specific SDG targets against which mangrove ecosystem services, identified through a value chain analysis, were mapped. In Vanga, 20 ecosystem services were identified and categorized under supporting, cultural and aesthetic, provisioning and regulating services. According to the findings of this study, 63.9% (95% CI 56.6-69.3) perceived the ESs as very important for economic development, 10.3% (95% CI 0-21.3) viewed them as important for environmental and ecological development, while 25.8% (95% CI 2.2-32.8) were not sure of any role they play in development. In the socio-economic disaggregation, ecosystem service values were found to vary with the level of interaction with the ecosystem, which depended on gender and other socio-economic classes within the study area. The youths, low-income earners, women and those with low education levels were also identified as the primary beneficiaries of mangrove ecosystem services. The study also found that, of the 17 SDGs, mangroves have the potential to influence the achievement of 12, including SDGs 1, 2, 3, 4, 6, 8, 10, 12, 13, 14, 15 and 17, either directly or indirectly. Generally, therefore, the local community is aware of the critical importance of mangroves for enhanced livelihoods and ecological services, but challenges to sustainability still occur as a result of the diverse values attached to the services and the conflicting interests of the different actors around the ecosystem. It is therefore important to consider parities in values and perceptions to avoid a 'tragedy of the commons' while striving to enhance the sustainability of the mangrove ecosystem.
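For readers unfamiliar with how confidence intervals of the kind quoted above (e.g. 63.9%, 95% CI 56.6-69.3) arise from survey proportions, a minimal sketch is given below. The sample size n = 300 is a hypothetical placeholder; the abstract does not state the survey n, and cluster sampling would in practice widen the interval.

```python
# Sketch: Wilson score 95% confidence interval for a survey proportion.
# n = 300 is a hypothetical placeholder, not the study's sample size.
import math

def wilson_ci(p_hat, n, z=1.96):
    """Wilson score interval for a proportion p_hat observed in n responses."""
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(0.639, 300)
print(f"63.9% with n=300 -> 95% CI ({lo:.1%}, {hi:.1%})")
```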

Keywords: sustainable development, community values, socio-demographics, Vanga, mangrove ecosystem services

Procedia PDF Downloads 151
885 Water Crisis or Crisis of Water Management: Assessing Water Governance in Iran

Authors: Sedigheh Kalantari

Abstract:

Like many countries in the arid and semi-arid belt, Iran experiences a natural limitation in the availability of water resources. However, rapid socioeconomic development has created a serious water crisis in a nation that was once one of the world’s pioneers in sustainable water management, owing to the Persians’ contribution to hydraulic engineering inventions – the Qanat – throughout history. Exogenous issues such as the changing climate, frequent droughts, and international sanctions are only crisis catalyzers, not the main cause of the water crisis; a resilient water management system is expected to be capable of coping with these periodic external pressures. The current dramatic water security issues in Iran are rooted in managerial, political, and institutional challenges rather than engineering and technical issues, and the country is suffering from challenges in water governance. Instead of rigorous water conservation efforts, the country is still focused on a supply-driven approach, technology and centralized methods, and structural solutions that aim to increase water supply, while the effectiveness of water governance and management has often been neglected. To solve these issues, it is necessary to assess the present situation and its evolution over time. In this respect, establishing water governance assessment mechanisms is a significant aspect of this paper. The research framework is a conceptual framework for assessing Iran’s water governance performance in order to critically diagnose problematic issues and areas, as well as to proffer empirically based solutions and determine the best possible steps towards transformational processes. This concept aims to measure the adequacy of current solutions and strategies designed to ameliorate these problems and then develop and prescribe adequate futuristic solutions. Thus, the analytical framework developed in this paper seeks to provide insights into the key factors influencing water governance in Iranian cities, institutional frameworks to manage water across scales and authorities, and multi-level management gaps and policy responses, through an evidence-based approach and good practices to drive reform toward sustainability and water resource conservation. The findings of this paper show that the current structure of the water governance system in Iran, coupled with the lack of a comprehensive understanding of the root causes of the problem, leaves minimal hope for developing sustainable solutions to Iran’s increasing water crisis. In order to follow sustainable development approaches, Iran needs to replace symptom management with problem prevention.

Keywords: governance, Iran, sustainable development, water management, water resources

Procedia PDF Downloads 26
884 Comparison between Photogrammetric and Structure from Motion Techniques in Processing Unmanned Aerial Vehicles Imageries

Authors: Ahmed Elaksher

Abstract:

Over the last few years, significant progress has been made and new approaches have been proposed for the efficient collection of 3D spatial data from unmanned aerial vehicles (UAVs) at reduced cost compared to imagery from satellites or manned aircraft. In these systems, a low-cost GPS unit provides the position and velocity of the vehicle, a low-quality inertial measurement unit (IMU) determines its orientation, and off-the-shelf cameras capture the images. Structure from Motion (SfM) and photogrammetry are the main tools for 3D surface reconstruction from images collected by these systems. Unlike traditional techniques, SfM allows the computation of calibration parameters using point correspondences across images without performing a rigorous laboratory or field calibration process, and it is more flexible in that it does not require consistent image overlap or the same rotation angles between successive photos. These benefits make SfM ideal for UAV aerial mapping. In this paper, a direct comparison between SfM digital elevation models (DEMs) and those generated through traditional photogrammetric techniques was performed. Data were collected by a 3DR IRIS+ quadcopter with a Canon PowerShot S100 digital camera. Twenty ground control points were randomly distributed on the ground and surveyed with a total station in a local coordinate system. Images were collected from an altitude of 30 meters with a ground resolution of nine mm/pixel. Data were processed with PhotoScan, VisualSFM, Imagine Photogrammetry, and a photogrammetric algorithm developed by the author. The algorithm starts with a laboratory camera calibration; the acquired imagery then undergoes an orientation procedure to determine the cameras’ positions and orientations. After the orientation is attained, correlation-based image matching is conducted to automatically generate three-dimensional surface models, followed by a refining step using sub-pixel image information for high matching accuracy. Tests with different numbers and configurations of the control points were conducted. Camera calibration parameters estimated from commercial software and those obtained with laboratory procedures were comparable. Exposure station positions agreed to within a few centimeters, and only insignificant differences, within less than three arc-seconds, were found among the orientation angles. DEM differencing was performed between the generated DEMs, and vertical shifts of a few centimeters were found.
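The DEM differencing step described above can be illustrated with a short sketch. The arrays below are synthetic placeholders standing in for co-registered SfM-derived and photogrammetric DEMs; they are not the study's data, and only the differencing statistics are shown.

```python
# Sketch: DEM differencing between two co-registered elevation rasters.
# The input arrays are synthetic; real DEMs would be loaded from GeoTIFFs.
import numpy as np

def dem_difference_stats(dem_a, dem_b):
    """Mean shift, RMSE and max absolute vertical difference (DEM units)."""
    diff = dem_a - dem_b
    return {
        "mean_shift": float(np.nanmean(diff)),
        "rmse": float(np.sqrt(np.nanmean(diff ** 2))),
        "max_abs": float(np.nanmax(np.abs(diff))),
    }

rng = np.random.default_rng(0)
dem_sfm = rng.normal(100.0, 0.5, size=(200, 200))                 # metres
dem_photo = dem_sfm + rng.normal(0.03, 0.02, size=(200, 200))     # ~cm-level offset
print(dem_difference_stats(dem_sfm, dem_photo))
```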

Keywords: UAV, photogrammetry, SfM, DEM

Procedia PDF Downloads 295
883 Application of Micro-Tunneling Technique to Rectify Tilted Structures Constructed on Cohesive Soil

Authors: Yasser R. Tawfic, Mohamed A. Eid

Abstract:

Foundation differential settlement and tilting of the supported structure is an occasionally encountered engineering problem. It may be caused by overloading, changes in ground soil properties or unsupported nearby excavations. Engineering thinking points directly toward the logical solution to such a problem: uplifting the settled side. This can be achieved with deep foundation elements such as micro-piles and macro-piles™, jacked piers and helical piers, jet-grouted soil-crete columns, compaction grout columns, cement or chemical grouting, or traditional pit underpinning with concrete and mortar. Although some of these techniques offer economical, fast and low-noise solutions, many of them are quite the contrary. For tilted structures with limited inclination, it may be much easier to cause a balancing settlement on the less-settled side, which must be done carefully and at a proper rate. This principle was applied in the stabilization of the Leaning Tower of Pisa by soil extraction from the ground surface. In this research, the authors attempt to introduce a new solution from a different point of view: the micro-tunneling technique is presented here as an intentional cause of ground deformation. In general, micro-tunneling is expected to induce limited ground deformations. Thus, the researchers propose to apply the technique to form small, unsupported holes in the ground to produce the target deformations. This shall be done in four phases: • Application of one or more micro-tunnels, depending on the existing differential settlement value, under the raised side of the tilted structure. • For each individual tunnel, the lining shall be pulled out from both sides (from the jacking and receiving shafts) at a slow rate. • If required, according to calculations and site records, an additional surface load can be applied on the raised foundation side. • Finally, strengthening soil grouting shall be applied for stabilization after adjustment. A finite-element-based numerical model is presented to simulate the proposed construction phases for different tunneling positions and tunnel groups. For each case, the surface settlements are calculated and the induced plasticity points are checked. These results show the impact of the suggested procedure on the tilted structure and its feasibility. Comparison of the results also shows the importance of position selection and the gradual effect of tunnel groups. Thus, a new engineering solution is presented to one of the structural and geotechnical engineering challenges.
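For orientation only, the classical Gaussian (Peck) trough is often used as a first estimate of the tunnelling-induced surface settlement that the abstract exploits deliberately. The sketch below is not the authors' finite-element model, and the maximum settlement and trough width values are purely illustrative.

```python
# Sketch: Gaussian (Peck) settlement trough for tunnelling-induced settlement.
# All parameter values are illustrative; this is not the study's FE model.
import math

def peck_settlement(x, s_max, i):
    """Surface settlement at transverse offset x (m) from the tunnel axis.
    s_max: maximum settlement above the axis (m); i: trough width parameter (m)."""
    return s_max * math.exp(-x ** 2 / (2 * i ** 2))

s_max = 0.010   # m, assumed maximum settlement
i = 3.0         # m, assumed trough width
for x in (0.0, 2.0, 4.0, 6.0):
    print(f"x = {x:4.1f} m -> settlement ~ {peck_settlement(x, s_max, i)*1000:.2f} mm")
```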

Keywords: differential settlement, micro-tunneling, soil-structure interaction, tilted structures

Procedia PDF Downloads 208
882 From Battles to Balance and Back: Document Analysis of EU Copyright in the Digital Era

Authors: Anette Alén

Abstract:

Intellectual property (IP) regimes have traditionally been designed to integrate various conflicting elements stemming from private entitlement and the public good. In IP laws and regulations, this design takes the form of specific uses of protected subject-matter without the right-holder’s consent, or exhaustion of exclusive rights upon market release, and the like. More recently, the pursuit of ‘balance’ has gained ground in the conceptualization of these conflicting elements, both in terms of IP law and related policy. This can be seen, for example, in the European Union (EU) copyright regime, where ‘balance’ has become a key element in argumentation, backed up by fundamental rights reasoning. This development also entails an ever-expanding dialogue between the IP regime and the constitutional safeguards for property, free speech, and privacy, among others. This study analyses the concept of ‘balance’ in EU copyright law: the research task is to examine the contents of the concept of ‘balance’ and the way it is operationalized and pursued, thereby producing new knowledge on the role and manifestations of ‘balance’ in recent copyright case law and regulatory instruments in the EU. The study discusses two particular pieces of legislation, the EU Digital Single Market (DSM) Copyright Directive (EU) 2019/790 and the finalized EU Artificial Intelligence (AI) Act, including some of the key preparatory materials, as well as EU Court of Justice (CJEU) case law pertaining to copyright in the digital era. The material is examined by means of document analysis, mapping the ways ‘balance’ is approached and conceptualized in the documents. Similarly, the interaction of fundamental rights as part of the balancing act is also analyzed. Doctrinal study of law is also employed in the analysis of legal sources. This study suggests that the pursuit of balance is, for its part, conducive to new battles, largely due to the advancement of digitalization and more recent developments in artificial intelligence. Indeed, the ‘balancing act’ rather presents itself as a way to bypass or even solidify some of the conflicting interests in a complex global digital economy. Such a conceptualization, especially when accompanied by non-critical or strategically driven fundamental rights argumentation, runs counter to the genuine acknowledgment of new types of conflicting interests in the copyright regime. Therefore, a more radical approach, including critical analysis of the normative basis and fundamental rights implications of the concept of ‘balance’, is required to readjust copyright law and regulations for the digital era. Notwithstanding the focus on the EU copyright regime, the results bear wider significance for the digital economy, especially due to the platform liability regime in the DSM Directive and the AI Act’s objective of a ‘level playing field’ whereby compliance with EU copyright rules seems to be expected among system providers.

Keywords: balance, copyright, fundamental rights, platform liability, artificial intelligence

Procedia PDF Downloads 31
881 The Challenge of Assessing Social AI Threats

Authors: Kitty Kioskli, Theofanis Fotis, Nineta Polemi

Abstract:

Article 9 of the European Union (EU) Artificial Intelligence (AI) Act requires that the risk management of AI systems include both technical and human oversight, while the NIST AI RMF (Appendix C) and the ENISA AI Framework recommendations state that further research is needed to understand the current limitations of social threats and human-AI interaction. AI threats within social contexts significantly affect the security and trustworthiness of AI systems; they are interrelated and trigger technical threats as well. For example, lack of explainability (e.g. the complexity of models can be challenging for stakeholders to grasp) leads to misunderstandings, biases, and erroneous decisions, which in turn impact the privacy, security and accountability of the AI systems. Based on the four fundamental NIST criteria for explainability, explainability threats can be classified into four sub-categories: a) Lack of supporting evidence: AI systems must provide supporting evidence or reasons for all their outputs. b) Lack of understandability: explanations offered by systems should be comprehensible to individual users. c) Lack of accuracy: the provided explanation should accurately represent the system's process of generating outputs. d) Out of scope: the system should only function within its designated conditions or when it possesses sufficient confidence in its outputs. Biases may also stem from historical data reflecting undesired behaviors. When present in the data, biases can permeate the models trained on them, thereby influencing the security and trustworthiness of the AI systems. Socially related AI threats are recognized by various initiatives (e.g., the EU Ethics Guidelines for Trustworthy AI), standards (e.g. ISO/IEC TR 24368:2022 on AI ethical concerns, ISO/IEC AWI 42105 on guidance for human oversight of AI systems) and EU legislation (e.g. the General Data Protection Regulation 2016/679, the NIS 2 Directive 2022/2555, the Directive on the Resilience of Critical Entities 2022/2557, the EU AI Act, the Cyber Resilience Act). Measuring social threats, estimating the risks to AI systems associated with these threats, and mitigating them is a research challenge. This paper presents the efforts of two European Commission projects (FAITH and THEMIS) from the Horizon Europe programme that analyse social threats by building cyber-social exercises in order to study human behaviour, traits, cognitive ability, personality, attitudes, interests, and other socio-technical profile characteristics. The research in these projects also includes the development of measurements and scales (psychometrics) for human-related vulnerabilities that can be used to estimate vulnerability severity more realistically, enhancing the CVSS 4.0 measurement.

Keywords: social threats, artificial intelligence, mitigation, social experiment

Procedia PDF Downloads 65
880 Investigation of Rehabilitation Effects on Fire Damaged High Strength Concrete Beams

Authors: Eun Mi Ryu, Ah Young An, Ji Yeon Kang, Yeong Soo Shin, Hee Sun Kim

Abstract:

As the number of fire incidents has increased, such incidents significantly damage the economy and human lives. In particular, when high-strength reinforced concrete is exposed to high temperature due to a fire, deterioration occurs, such as loss of strength and elastic modulus, cracking, and spalling of the concrete. Therefore, it is important to understand the risk to structural safety in building structures by studying the structural behavior and rehabilitation of fire-damaged high-strength concrete structures. This paper aims at investigating the rehabilitation effect on fire-damaged high-strength concrete beams using experimental and analytical methods. In the experiments, flexural specimens with high-strength concrete are exposed to high temperatures according to the ISO 834 standard time-temperature curve. After heating, the fire-damaged reinforced concrete (RC) beams, having different cover thicknesses and fire exposure time periods, are rehabilitated by removing the damaged part of the cover thickness and filling polymeric mortar into the removed part. Four-point loading tests show that the maximum loads of the rehabilitated RC beams are 1.8~20.9% higher than those of the non-fire-damaged RC beam. On the other hand, the ductility ratios of the rehabilitated RC beams are lower than that of the non-fire-damaged RC beam. In addition, structural analyses are performed using ABAQUS 6.10-3 under the same conditions as the experiments to provide accurate predictions of the structural and mechanical behaviors of rehabilitated RC beams. For the rehabilitated RC beam models, integrated temperature-structural analyses are performed in advance to obtain the geometries of the fire-damaged RC beams. After the spalled and damaged parts are removed, the rehabilitated part is added to the damaged model with the material properties of polymeric mortar. Three-dimensional continuum brick elements are used for both the temperature and structural analyses. The same loading and boundary conditions as in the experiments are applied to the rehabilitated beam models, and nonlinear geometrical analyses are performed. The structural analysis results show good rehabilitation effects when the predictions from the rehabilitated models are compared to the structural behaviors of the non-damaged RC beams. In this study, fire-damaged high-strength concrete beams are rehabilitated using polymeric mortar. From the four-point loading tests, it is found that such rehabilitation is able to make the structural performance of fire-damaged beams similar to that of non-damaged RC beams. The predictions from the finite element models show good agreement with the experimental results, and the modeling approaches can be used to investigate the applicability of various rehabilitation methods in further studies.

Keywords: fire, high strength concrete, rehabilitation, reinforced concrete beam

Procedia PDF Downloads 445
879 Understanding the Dynamics of Human-Snake Negative Interactions: A Study of Indigenous Perceptions in Tamil Nadu, Southern India

Authors: Ramesh Chinnasamy, Srishti Semalty, Vishnu S. Nair, Thirumurugan Vedagiri, Mahesh Ganeshan, Gautam Talukdar, Karthy Sivapushanam, Abhijit Das

Abstract:

Snakes form an integral component of ecological systems. The human population explosion and the associated acceleration of habitat destruction and degradation have led to a rapid increase in human-snake encounters. The study aims at understanding the level of awareness, knowledge, and attitude of the people towards human-snake negative interactions and the role of awareness programmes in the Moyar river valley, Tamil Nadu. The study area is part of the Mudumalai and the Sathyamangalam Tiger Reserves, which are significant wildlife corridors between the Western Ghats and the Eastern Ghats in the Nilgiri Biosphere Reserve. The data were collected using a questionnaire covering 644 respondents spread across 18 villages between 2018 and 2019. The study revealed that 86.5% of respondents had strong negative perceptions towards snakes, which were propelled by fear, superstitions, and the threat of snakebite; this was common and did not vary among different villages (F = 4.48; p < 0.05) or age groups (X² = 1.946; p = 0.962). The cobra, 27.8% (n = 294), and the rat snake, 21.3% (n = 225), were the most sighted species, and most snake encounters occurred during the monsoon season, i.e., July 35.6% (n = 218), June 19.1% (n = 117) and August 18.4% (n = 113). At least 1 out of 5 respondents was reportedly bitten by snakes during their lifetime. The most common species responsible for snakebite was the saw-scaled viper (32.6%, n = 42), followed by the cobra (17.1%, n = 22). About 21.3% (n = 137) of people reported livestock loss due to pythons and other snakes. Most people preferred medical treatment for snakebite (87.3%), whereas 12.7% still believed in traditional methods. The majority (82.3%) used precautionary measures, keeping traditional items such as garlic, kerosene, and snake plant to avoid snakes. About 30% of the respondents expressed the need for technical and monetary support from the forest department that could aid in reducing human-snake conflict. It is concluded that the general perception in the study area is driven by fear and a negative attitude towards snakes. Though snakes such as the cobra are widely worshipped in the region, there are still widespread myths and misconceptions that have led to the irrational killing of snakes. Awareness and innovative education programs rooted in the local context and language should be integrated at the village level to minimize risk and the associated threat of snakebite among the people. Results from this study will help policymakers devise appropriate conservation measures to reduce human-snake conflicts in India.
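To make the reported chi-square comparison (perception versus age group, X² = 1.946, p = 0.962) concrete, a minimal sketch is shown below. The contingency counts are hypothetical placeholders, not the study's data; only the test procedure is illustrated.

```python
# Sketch: chi-square test of independence between age group and perception.
# The counts below are hypothetical placeholders, not the survey data.
from scipy.stats import chi2_contingency

# rows: age groups, columns: negative / other perception of snakes
table = [
    [120, 20],
    [150, 25],
    [140, 24],
    [130, 35],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
```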

Keywords: envenomation, health education, human-wildlife conflict, neglected tropical disease, snakebite mitigation, traditional practitioners

Procedia PDF Downloads 227
878 High Performance Computing Enhancement of Agent-Based Economic Models

Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna

Abstract:

This research presents the details of the implementation of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to studying the economy as a dynamic system of interacting heterogeneous agents and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, such as major disasters, changes in policies, or exogenous shocks, on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the economic macro parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the message passing interface (MPI). A balanced distribution of the computational load among MPI processes (i.e. CPU cores) of computer clusters, while taking all the interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g. credit networks), whereas others are dense with random links (e.g. consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions, such as the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process, are adopted. Efficient communication among MPI processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e. about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro zone (i.e. 322 million agents).
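The overlap of communication with computation mentioned above can be illustrated with a minimal mpi4py sketch. This is not the authors' code: the agent arrays, halo size and ring-exchange pattern are placeholders chosen only to show non-blocking sends and receives progressing while local work is done.

```python
# Sketch: overlapping MPI communication with computation using mpi4py.
# Illustrative only; agent data and exchange pattern are placeholders.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local_agents = np.random.rand(100_000)   # this rank's block of agent state
halo = np.empty(1000)                    # buffer for agents owned by a neighbour

# Post non-blocking exchange of boundary agents with neighbouring ranks.
nxt, prv = (rank + 1) % size, (rank - 1) % size
reqs = [comm.Isend(local_agents[:1000], dest=prv),
        comm.Irecv(halo, source=nxt)]

# Local computation proceeds while the messages are in flight.
local_result = np.sum(local_agents ** 2)

MPI.Request.Waitall(reqs)                # communication done; halo is usable
local_result += np.sum(halo)
total = comm.allreduce(local_result, op=MPI.SUM)
if rank == 0:
    print("global aggregate:", total)
```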

Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process

Procedia PDF Downloads 128
877 Technology, Ethics and Experience: Understanding Interactions as Ethical Practice

Authors: Joan Casas-Roma

Abstract:

Technology has become one of the main channels through which people engage in most of their everyday activities; from working to learning, or even when socializing, technology often acts as both an enabler and a mediator of such activities. Moreover, the affordances and interactions created by those technological tools determine the way in which the users interact with one another, as well as how they relate to the relevant environment, thus favoring certain kinds of actions and behaviors while discouraging others. In this regard, virtue ethics theories place a strong focus on a person's daily practice (understood as their decisions, actions, and behaviors) as the means to develop and enhance their habits and ethical competences, such as their awareness and sensitivity towards certain ethically desirable principles. Under this understanding of ethics, this set of technologically enabled affordances and interactions can be seen as the possibility space where the daily practice of their users takes place in a wide plethora of contexts and situations. At this point, the following question comes to mind: could these affordances and interactions be shaped in a way that would promote behaviors and habits based on ethically desirable principles in their users? In the field of game design, the MDA framework (which stands for Mechanics, Dynamics, Aesthetics) explores how the interactions enabled within the possibility space of a game can lead to creating certain experiences and provoking specific reactions in the players. In this sense, these interactions can be shaped in ways that create experiences to raise the players' awareness and sensitivity towards certain topics or principles. This research brings together the notion of technological affordances, the notions of practice and practical wisdom from virtue ethics, and the MDA framework from game design in order to explore how the possibility space created by technological interactions can be shaped in ways that enable and promote actions and behaviors supporting certain ethically desirable principles. When shaped accordingly, interactions supporting certain ethically desirable principles could allow their users to carry out the kind of practice that, according to virtue ethics theories, provides the grounds to develop and enhance their awareness, sensitivity, and ethical reasoning capabilities. Moreover, because ethical practice can happen collaterally in almost every context, decision, and action, this additional layer could potentially be applied in a wide variety of technological tools, contexts, and functionalities. This work explores the theoretical background, as well as the initial considerations and steps that would be needed in order to harness the potential ethically desirable benefits that technology can bring once it is understood as the space where most of its users' daily practice takes place.

Keywords: ethics, design methodology, human-computer interaction, philosophy of technology

Procedia PDF Downloads 158
876 Temperature Dependence of Photoluminescence Intensity of Europium Dinuclear Complex

Authors: Kwedi L. M. Nsah, Hisao Uchiki

Abstract:

Quantum computation is a new and exciting field making use of quantum mechanical phenomena. In classical computers, information is represented as bits with values of either 0 or 1, but a quantum computer uses quantum bits in an arbitrary superposition of 0 and 1, enabling it to reach beyond the limits predicted by classical information theory. A lanthanide-ion quantum computer is based on an organic crystal containing a lanthanide ion. Europium is a favored lanthanide, since it exhibits long nuclear spin coherence times, and Eu(III) is photo-stable and has two stable isotopes. In a europium organic crystal, the key factor is the mutual dipole-dipole interaction between two europium atoms. Crystals of the complex were formed by a 2:1 reaction of Eu(fod)3 and bpm. The transparent white crystals formed showed brilliant red luminescence under a 405 nm laser. Photoluminescence spectroscopy was performed at both room and cryogenic temperatures (300-14 K). The luminescence spectrum of [Eu(fod)3(μ-bpm)Eu(fod)3] showed the characteristic Eu(III) emission transitions in the range 570-630 nm, due to the deactivation of the 5D0 emissive state to the 7Fj levels. For the application of the dinuclear Eu3+ complex to a qubit device, attention was focused on the 5D0-7F0 transition, around 580 nm. The presence of the 5D0-7F0 transition at room temperature revealed that at least one europium site had no inversion center. Since the line is not split by the crystal field effect, any multiplicity observed was due to a multiplicity of Eu3+ sites. For a qubit element, a narrower line width of the 5D0 → 7F0 PL band of the Eu3+ ion is preferable. Cryogenic temperatures (300 K - 14 K) were applied to reduce inhomogeneous broadening and distinguish between ions. A CCD image sensor was used for the low-temperature photoluminescence measurement, and a far better resolved luminescence spectrum was obtained by cooling the complex to 14 K. A red shift of 15 cm-1 in the 5D0-7F0 peak position was observed upon cooling; the line shifted towards lower wavenumber. An emission spectrum in the 5D0-7F0 transition region was obtained to verify the line width. At this temperature, a peak with a magnitude three times that at room temperature was observed. The temperature dependence of the 5D0 state of Eu(fod)3(μ-bpm)Eu(fod)3 was strong in the vicinity of 60 K to 100 K. Thermal quenching was observed at temperatures above 100 K, where the intensity began to decrease slowly with increasing temperature. The temperature quenching of the Eu3+ emission with increasing temperature was caused by energy migration. 100 K was the appropriate temperature for the observation of the 5D0-7F0 emission peak. The europium dinuclear complex bridged by bpm was successfully prepared and monitored at cryogenic temperatures. At 100 K, the Eu3+-doped complex has good thermal stability, and this temperature is appropriate for the observation of the 5D0-7F0 emission peak. Sintering the sample above 600 °C could also be a method to consider, but the Eu3+ ion can be reduced to Eu2+, which is why cryogenic temperature measurement is preferred over other methods.
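For readers less used to spectroscopic units, the reported 15 cm⁻¹ red shift of the 5D0-7F0 line near 580 nm can be converted to a wavelength shift with the relation ν̃ = 10⁷ / λ(nm). The short sketch below performs this conversion; it uses only values stated in the abstract.

```python
# Sketch: convert the reported 15 cm^-1 red shift near 580 nm into nanometres.
def nm_to_wavenumber(lambda_nm):
    return 1e7 / lambda_nm          # cm^-1

def wavenumber_to_nm(nu_cm1):
    return 1e7 / nu_cm1             # nm

nu_room = nm_to_wavenumber(580.0)   # ~17241 cm^-1 at room temperature
nu_cold = nu_room - 15.0            # red shift = shift to lower wavenumber
print(f"580 nm -> {nu_room:.1f} cm^-1; after -15 cm^-1 -> "
      f"{wavenumber_to_nm(nu_cold):.2f} nm (shift of roughly +0.5 nm)")
```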

Keywords: Eu(fod)₃, europium dinuclear complex, europium ion, quantum bit, quantum computer, 2,2'-bipyrimidine

Procedia PDF Downloads 180
875 Bioincision of Gmelina Arborea Roxb. Heartwood with Inonotus Dryophilus (Berk.) Murr. for Improved Chemical Uptake and Penetration

Authors: A. O. Adenaiya, S. F. Curling, O. Y. Ogunsanwo, G. A. Ormondroyd

Abstract:

Treatment of wood with chemicals in order to prolong its service life may prove difficult in some refractory wood species. This impermeability in wood is usually due to biochemical changes which occur during heartwood formation. Bioincision, a short-term, controlled microbial decomposition of wood, is one of the promising approaches capable of improving the amenability of refractory wood to chemical treatments. Gmelina arborea, a mainstay timber species in Nigeria, has impermeable heartwood due to the excessive tyloses which occlude its vessels. Therefore, the chemical uptake and penetration in Gmelina arborea heartwood bioincised with the Inonotus dryophilus fungus were investigated. Five mature Gmelina arborea trees were harvested from the departmental plantation in Ajibode, Ibadan, Nigeria, and a bolt of 300 cm was obtained from the basal portion of each tree. The heartwood portion of the bolts was extracted and converted into dimensions of 20 mm x 20 mm x 60 mm and subsequently conditioned (20 °C at 65% relative humidity). Twenty wood samples each were bioincised with the white-rot fungus Inonotus dryophilus (ID, 999) for 3, 5, 7 and 9 weeks using a standard procedure, while a set of sterile control samples was prepared. Ten of each of the bioincised and control samples were pressure-treated with 5% Tanalith preservative, while the other ten of each were pressure-treated with a liquid dye for easy traceability of the chemical in the wood, both using a full-cell treatment process. The bioincised and control samples were evaluated for their weight loss before chemical treatment (WL, %), preservative absorption (PA, kg/m3), preservative retention (PR, kg/m3), axial absorption (AA, kg/m3), lateral absorption (LA, kg/m3), axial penetration depth (APD, mm), radial penetration depth (RPD, mm), and tangential penetration depth (TPD, mm). The data obtained were analyzed using ANOVA at α = 0.05. Results show that the weight loss was lowest in the samples bioincised for three weeks (0.09%) and highest after 7 weeks of bioincision (0.48%). The samples bioincised for 3 weeks had the lowest PA (106.72 kg/m3) and PR (5.87 kg/m3), while the highest PA (134.9 kg/m3) and PR (7.42 kg/m3) were observed after 7 weeks of bioincision. The AA ranged from 27.28 kg/m3 (3 weeks) to 67.05 kg/m3 (5 weeks), while the LA was lowest after 5 weeks of incubation (28.1 kg/m3) and highest after 9 weeks (71.74 kg/m3). A significantly lower APD was observed in the control samples (6.97 mm) than in the samples bioincised for 9 weeks (19.22 mm). The RPD increased from 0.08 mm (control samples) to 3.48 mm (5 weeks), while the TPD ranged from 0.38 mm (control samples) to 0.63 mm (9 weeks), implying that liquid flow in the wood was predominantly through the axial pathway. Bioincising G. arborea heartwood with the I. dryophilus fungus for 9 weeks is capable of enhancing chemical uptake and deeper penetration of chemicals in the wood through the degradation of the occluding vessel tyloses, accompanied by only minimal degradation of the polymeric wood constituents.

Keywords: Bioincision, chemical uptake, penetration depth, refractory wood, tyloses

Procedia PDF Downloads 106
874 Talking to Ex-Islamic State Fighters inside Iraqi Prisons: An Arab Woman’s Perspective on Radicalization and Deradicalization

Authors: Suha Hassen

Abstract:

This research aims to untangle the complexity of conducting face-to-face interviews with 80 ex-Islamic State fighters, encompassing three groups: local Iraqis, Arabs from the Middle East, and international fighters from around the globe. Each interview lasted approximately two hours and was conducted in both Arabic and English, focusing on the motivations behind joining the Islamic State and the pathways and mechanisms facilitating their involvement. The phenomenon of individuals joining violent Islamist extremist and jihadist organizations is multifaceted, drawing substantial attention within terrorism and security studies. Organizations such as the Islamic State, Hezbollah, Hamas, and Al-Qaeda pose formidable threats to international peace and stability, employing various terrorist tactics for radicalization and recruitment. However, significant gaps remain in current studies, including a lack of firsthand accounts, an inadequate understanding of original narratives (religious and linguistic) due to abstraction and misinterpretation of motivations, and a lack of Arab women's perspectives from the region. This study addresses these gaps by exploring the cultural, religious, and historical complexities that shape the narratives of ex-ISIS fighters. The paper will showcase three distinct cases: one French prisoner, one Moroccan fighter, and a local Iraqi, illustrating the diverse motivations and experiences that contribute to joining and leaving extremist groups. The findings provide valuable insights into the nuanced dynamics of radicalization, emphasizing the need for gender-sensitive approaches in counter-terrorism strategies and deradicalization programs. Importantly, this research has practical implications for counter-narrative policies and early-stage prevention of radicalization. By understanding the narratives used by ex-fighters, policymakers can develop targeted counter-narratives that disrupt recruitment efforts. Additionally, insights into the mechanisms of radicalization can inform early intervention programs, helping to identify and support at-risk individuals before they become entrenched in extremist ideologies. Ultimately, this research enhances our understanding of the individual experiences of ex-ISIS fighters and calls for a reevaluation of the narratives surrounding women’s roles in extremism and recovery.

Keywords: Arab women in extremism, counter-narrative policy, ex-ISIS fighters in Iraq, radicalization

Procedia PDF Downloads 21
873 Development of Microsatellite Markers for Dalmatian Pyrethrum Using Next-Generation Sequencing

Authors: Ante Turudic, Filip Varga, Zlatko Liber, Jernej Jakse, Zlatko Satovic, Ivan Radosavljevic, Martina Grdisa

Abstract:

Microsatellites (SSRs) are highly informative repetitive sequences of 2-6 base pairs and are among the most widely used molecular markers in assessing the genetic diversity of plant species. Dalmatian pyrethrum (Tanacetum cinerariifolium /Trevir./ Sch. Bip.) is an outcrossing diploid (2n = 18) endemic to the eastern Adriatic coast and the source of the natural insecticide pyrethrin. Due to the high repetitiveness and large size of the genome (haploid genome size of 9.58 pg), previous attempts to develop microsatellite markers using standard methods were unsuccessful. A next-generation sequencing (NGS) approach was applied to genomic DNA extracted from fresh leaves of Dalmatian pyrethrum. The sequencing was conducted using a NovaSeq 6000 Illumina sequencer, after which almost 400 million high-quality paired-end reads were obtained, with a read length of 150 base pairs. Short reads were assembled by combining two approaches: (1) de novo assembly and (2) joining of overlapping paired-end reads. In total, 6,909,675 contigs were obtained, with an average contig length of 249 base pairs. Of the resulting contigs, 31,380 contained one or multiple microsatellite sequences; in total, 35,556 microsatellite loci were identified. Of the detected microsatellites, dinucleotide repeats were the most frequent, accounting for more than half of all microsatellites identified (21,212; 59.7%), followed by trinucleotide repeats (9,204; 25.9%). Tetra-, penta- and hexanucleotides had similar frequencies of 1,822 (5.1%), 1,472 (4.1%), and 1,846 (5.2%), respectively. Contigs containing microsatellites were further filtered by SSR pattern type, transposon occurrences, assembly characteristics, GC content, and the number of occurrences against the previously published draft genome of T. cinerariifolium. After the selection process, 50 microsatellite loci were used for primer design. The designed primers were tested on samples from five distinct populations, and 25 of them showed a high degree of polymorphism. The selected loci were then genotyped on 20 samples belonging to one population, resulting in 17 microsatellite markers. The availability of codominant SSR markers will significantly improve knowledge of the population genetic diversity and structure, as well as of the complex genetics and biochemistry of this species. Acknowledgment: This work has been fully supported by the Croatian Science Foundation under the project ‘Genetic background of Dalmatian pyrethrum (Tanacetum cinerariifolium /Trevir/ Sch. Bip.) insecticidal potential’ (PyrDiv) (IP-06-2016-9034).
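To illustrate the kind of SSR screening described above, the sketch below scans a sequence for perfect microsatellite motifs of 1-6 bp with a simple regular expression. The minimum-repeat thresholds are illustrative and are not the criteria used in the study's pipeline.

```python
# Sketch: minimal regex-based microsatellite (SSR) finder for assembled contigs.
# Thresholds are illustrative, not those of the pipeline described above.
import re

MIN_REPEATS = {1: 10, 2: 6, 3: 5, 4: 5, 5: 5, 6: 5}  # motif length -> min copies

def find_ssrs(seq):
    """Return (start, motif, copies) for perfect SSRs with motifs of 1-6 bp."""
    hits = []
    for k, min_rep in MIN_REPEATS.items():
        pattern = re.compile(r"(([ACGT]{%d})\2{%d,})" % (k, min_rep - 1))
        for m in pattern.finditer(seq):
            hits.append((m.start(), m.group(2), len(m.group(1)) // k))
    return hits

contig = "ATGC" + "AG" * 8 + "TTACG" + "CTT" * 6 + "GATC"
for start, motif, copies in find_ssrs(contig):
    print(f"pos {start}: ({motif}){copies}")
```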

Keywords: genome assembly, NGS, SSR, Tanacetum cinerariifolium

Procedia PDF Downloads 131
872 Cross-Comparison between Land Surface Temperature from Polar and Geostationary Satellite over Heterogeneous Landscape: A Case Study in Hong Kong

Authors: Ibrahim A. Adeniran, Rui F. Zhu, Man S. Wong

Abstract:

Owing to the insufficient spatial representativeness and continuity of in situ temperature measurements from weather stations (WS), the use of WS temperature measurements for large-range diurnal analysis in heterogeneous landscapes has been limited. This has made the accurate estimation of land surface temperature (LST) from remotely sensed data more crucial. Moreover, the study of the dynamic interaction between the atmosphere and the physical surface of the Earth could be enhanced at both annual and diurnal scales by using optimal LST data derived from satellite sensors. The trade-off between the spatial and temporal resolution of LSTs from satellite thermal infrared sensors (TIRS) has, however, been a major challenge, especially when high spatiotemporal LST data are required. It is well known from the existing literature that polar satellites have the advantage of high spatial resolution, while geostationary satellites have high temporal resolution. Hence, this study is aimed at designing a framework for the cross-comparison of LST data from polar and geostationary satellites in a heterogeneous landscape. This could help in understanding the relationship between the LST estimates from the two satellites and, consequently, their integration in diurnal LST analysis. Landsat-8 satellite data will be used as the representative of the polar satellite due to the availability of its long-term series, while the Himawari-8 satellite will be used as the data source for the geostationary satellite because of its improved TIRS. The Hong Kong Special Administrative Region (HK SAR) will be selected as the study area due to the heterogeneity of its landscape. LST data will be retrieved from both satellites using the split-window algorithm (SWA), and the resulting data will be validated by comparing the satellite-derived LST data with temperature data from automatic WS in HK SAR. The LST data will then be separated based on the land use classification in HK SAR using the Global Land Cover by National Mapping Organization version 3 (GLCNMO 2013) data. The relationship between the LST data from Landsat-8 and Himawari-8 will then be investigated based on land-use class and over the different seasons of the year in order to account for seasonal variation in their relationship. The resulting relationship will be spatially and statistically analyzed and graphically visualized for detailed interpretation. Findings from this study will reveal the relationship between the two satellite datasets based on the land use classification within the study area and the seasons of the year. While the information provided by this study will help in the optimal combination of LST data from polar (Landsat-8) and geostationary (Himawari-8) satellites, it will also serve as a roadmap for annual and diurnal urban heat island (UHI) analysis in Hong Kong SAR.
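A generic split-window formulation of the kind referred to above is sketched below. The coefficients c0 to c6 and the inputs are placeholders; operational values must be calibrated for the specific sensor (e.g. Landsat-8 TIRS or Himawari-8 AHI) and atmospheric conditions, and this is not the exact algorithm applied in the study.

```python
# Sketch: generic split-window land surface temperature (LST) estimate from
# two thermal-band brightness temperatures. Coefficients are placeholders.
def split_window_lst(t_i, t_j, emis_mean, emis_diff, water_vapour, c):
    """LST (K) from brightness temperatures t_i, t_j (K) of two TIR bands.

    emis_mean    : mean surface emissivity of the two bands
    emis_diff    : emissivity difference between the bands
    water_vapour : total column water vapour (g/cm^2)
    c            : sequence of coefficients c0..c6
    """
    dt = t_i - t_j
    return (t_i + c[1] * dt + c[2] * dt ** 2 + c[0]
            + (c[3] + c[4] * water_vapour) * (1.0 - emis_mean)
            + (c[5] + c[6] * water_vapour) * emis_diff)

# Illustrative call with placeholder coefficients and inputs.
c = [-0.268, 1.378, 0.183, 54.30, -2.238, -129.20, 16.40]
print(f"LST ~ {split_window_lst(300.1, 298.7, 0.975, 0.006, 2.0, c):.2f} K")
```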

Keywords: automatic weather station, Himawari-8, Landsat-8, land surface temperature, land use classification, split window algorithm, urban heat island

Procedia PDF Downloads 73
871 Controlled Drug Delivery System for Delivery of Poor Water Soluble Drugs

Authors: Raj Kumar, Prem Felix Siril

Abstract:

The poor aqueous solubility of many pharmaceutical drugs and potential drug candidates is a big challenge in drug development. Nanoformulation of such candidates is one of the major solutions for the delivery of such drugs. We initially developed the evaporation-assisted solvent-antisolvent interaction (EASAI) method, which is useful for preparing nanoparticles of poorly water-soluble drugs with spherical morphology and particle sizes below 100 nm. However, to further improve the formulation, reduce the number of doses and limit side effects, it is important to control the delivery of drugs. Many drug delivery systems are available. Among the many nano-drug carrier systems, solid lipid nanoparticles (SLNs) have many advantages over the others, such as high biocompatibility, stability, non-toxicity and the ability to achieve controlled release of drugs and drug targeting. SLNs can be administered through all existing routes due to the high biocompatibility of lipids. SLNs are usually composed of lipid, surfactant and drug, with the drug encapsulated in the lipid matrix. A number of non-steroidal anti-inflammatory drugs (NSAIDs) have poor bioavailability resulting from their poor aqueous solubility. In the present work, SLNs loaded with NSAIDs such as nabumetone (NBT), ketoprofen (KP) and ibuprofen (IBP) were successfully prepared using different lipids and surfactants. We studied and optimized the experimental parameters using a number of lipids, surfactants and NSAIDs. The effect of different experimental parameters, such as the lipid-to-surfactant ratio, volume of water, temperature, drug concentration and sonication time, on the particle size of SLNs prepared by hot-melt sonication was studied. It was found that particle size was directly proportional to drug concentration and inversely proportional to surfactant concentration, volume of water added and temperature of the water. SLNs prepared under optimized conditions were characterized thoroughly using different techniques such as dynamic light scattering (DLS), field emission scanning electron microscopy (FESEM), transmission electron microscopy (TEM), atomic force microscopy (AFM), X-ray diffraction (XRD), differential scanning calorimetry (DSC) and Fourier transform infrared spectroscopy (FTIR). We successfully prepared SLNs below 220 nm using different lipid and surfactant combinations. The drugs KP, NBT and IBP showed entrapment efficiencies of 74%, 69% and 53%, with drug loadings of 2%, 7% and 6%, respectively, in SLNs of Campul GMS 50K and Gelucire 50/13. The in vitro drug release profile of the drug-loaded SLNs showed that nearly 100% of the drug was released in 6 h.
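The entrapment efficiency and drug loading figures quoted above are typically computed as shown in the sketch below. The masses used are hypothetical placeholders, not the study's measured values; they are chosen only so that the output is of the same order as the reported 74% entrapment efficiency and 2% drug loading.

```python
# Sketch: entrapment efficiency (EE%) and drug loading (DL%) calculations.
# All masses are hypothetical placeholders, not the study's data.
def entrapment_efficiency(total_drug_mg, free_drug_mg):
    """EE% = (entrapped drug / total drug) * 100."""
    return (total_drug_mg - free_drug_mg) / total_drug_mg * 100.0

def drug_loading(entrapped_drug_mg, lipid_mg):
    """DL% = entrapped drug relative to total solids (drug + lipid)."""
    return entrapped_drug_mg / (entrapped_drug_mg + lipid_mg) * 100.0

total, free, lipid = 10.0, 2.6, 340.0   # mg, illustrative values
entrapped = total - free
print(f"EE = {entrapment_efficiency(total, free):.0f}%  "
      f"DL = {drug_loading(entrapped, lipid):.1f}%")
```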

Keywords: nanoparticles, delivery, solid lipid nanoparticles, hot-melt sonication, poor water soluble drugs, solubility, bioavailability

Procedia PDF Downloads 312
870 Creating Renewable Energy Investment Portfolio in Turkey between 2018-2023: An Approach on Multi-Objective Linear Programming Method

Authors: Berker Bayazit, Gulgun Kayakutlu

Abstract:

The World Energy Outlook shows that energy markets will change substantially within the next few decades. First, action plans determined according to COP21 and the aim of CO₂ emission reduction already have an impact on countries' policies. Second, rapidly changing technological developments in the field of renewable energy will influence the medium- and long-term energy generation and consumption behaviors of countries. Furthermore, the share of electricity in global energy consumption is expected to be as high as 40 percent in 2040. Electric vehicles, heat pumps, new electronic devices and digital improvements will be the outstanding technologies, and such innovations will be the testimony of the market modifications. In order to meet the sharply increasing electricity demand caused by these technologies, countries have to make new investments in the field of electricity production, transmission and distribution. Specifically, the electricity generation mix becomes vital for both the prevention of CO₂ emissions and the reduction of power prices. The majority of research and development investments are made in the field of electricity generation. Hence, primary source diversity and source planning of electricity generation are crucial for improving citizens' welfare. Approaches considering CO₂ emissions and the total cost of generation are necessary but not sufficient to evaluate and construct the product mix. On the other hand, employment and positive contributions to macroeconomic values are important factors that have to be taken into consideration. This study aims to constitute new investments in renewable energies (solar, wind, geothermal, biogas and hydropower) between 2018 and 2023 under four different goals. Therefore, a multi-objective programming model is proposed to optimize the goals of minimizing the CO₂ emissions, investment amount and electricity sales price while maximizing total employment and the positive contribution to the current deficit. In order to avoid user preference among the goals, Dinkelbach's algorithm and Guzel's approach have been combined. The achievements are discussed in comparison with current policies. Our study shows that new policies, such as large capacity allotments, might be debatable, although the obligation for local production is positive. Improvements in grid infrastructure and redesigned support for biogas and geothermal can be recommended.
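To give a feel for multi-objective capacity planning of this kind, the sketch below solves a tiny weighted-sum linear program over the five technologies named in the abstract. This is not the Dinkelbach/Guzel formulation used in the study; all cost, emission and employment coefficients, the demand target and the weights are made up for illustration.

```python
# Sketch: weighted-sum scalarization of a small renewable-capacity LP.
# All coefficients, weights and targets are made-up illustrative values.
from scipy.optimize import linprog
import numpy as np

techs = ["solar", "wind", "geothermal", "biogas", "hydro"]
invest = np.array([0.9, 1.1, 2.8, 2.4, 1.6])    # investment per MW (M$), assumed
co2 = np.array([45.0, 12.0, 38.0, 50.0, 24.0])  # lifecycle gCO2-eq/kWh, assumed
jobs = np.array([6.0, 4.0, 3.0, 7.0, 2.0])      # jobs per MW, assumed

w_invest, w_co2, w_jobs = 0.4, 0.4, 0.2          # preference weights
# Minimize weighted objectives; employment enters with a minus sign (maximized).
c = w_invest * invest + w_co2 * co2 / 100 - w_jobs * jobs

# Constraint: total new capacity meets an assumed 10,000 MW target,
# with an assumed per-technology cap of 4,000 MW.
res = linprog(c,
              A_ub=[[-1, -1, -1, -1, -1]], b_ub=[-10_000],
              bounds=[(0, 4_000)] * 5, method="highs")
for name, mw in zip(techs, res.x):
    print(f"{name:10s} {mw:8.0f} MW")
```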

Keywords: energy generation policies, multi-objective linear programming, portfolio planning, renewable energy

Procedia PDF Downloads 244
869 Patient Agitation and Violence in Medical-Surgical Settings at BronxCare Hospital, Before and During COVID-19 Pandemic; A Retrospective Chart Review

Authors: Soroush Pakniyat-Jahromi, Jessica Bucciarelli, Souparno Mitra, Neda Motamedi, Ralph Amazan, Samuel Rothman, Jose Tiburcio, Douglas Reich, Vicente Liz

Abstract:

Violence is defined as an act of physical force that is intended to cause harm and may lead to physical and/or psychological damage. Violence toward healthcare workers (HCWs) is more common in psychiatric settings, emergency departments, and nursing homes; however, healthcare workers in medical settings are not spared from such events. Workplace violence places a huge burden on the healthcare industry and has a major impact on the physical and mental wellbeing of staff. The purpose of this study is to compare the prevalence of patient agitation and violence in medical-surgical settings at BronxCare Hospital (BCH), Bronx, New York, one year before and during the COVID-19 pandemic. Data collection occurred between June 2021 and August 2021, while the sampling period was from 2019 to 2021. The data were separated into two time categories: pre-COVID-19 (03/2019-03/2020) and COVID-19 (03/2020-03/2021). We created frequency tables for 19 variables and used chi-square tests to determine each variable's statistical significance. We tested all variables against “restraint type”, determining whether a patient was violent or became violent enough to require restraint. The restraint types were “chemical”, “physical”, or both. This analysis was also used to determine whether there was a statistical difference between the pre-COVID-19 and COVID-19 timeframes. Our data show that there was an increase in incidents of violence in the COVID-19 era (03/2020-03/2021), with a total of 194 (62.8%) reported events, compared to the pre-COVID-19 era (03/2019-03/2020) with 115 (37.2%) events (p = 0.01). Our final analysis, completed using a chi-square test, assessed the difference in patient violence between the pre-COVID-19 and COVID-19 eras; we then tested the violence marker against restraint type, and the result was statistically significant (p = 0.01). This is the first paper to systematically review the prevalence of violence in medical-surgical units in a hospital in New York before and during the COVID-19 era. Our data are in line with the global trend of increased prevalence of patient agitation and violence in medical settings during the COVID-19 pandemic. Violence and its management are a challenge in healthcare settings, and the COVID-19 pandemic has brought to bear a complexity of circumstances that may have increased its incidence. It is important to identify and teach healthcare workers the best preventive approaches for dealing with patient agitation, to decrease the number of restraints used in medical settings, and to create a less restrictive environment for delivering care.
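
As a hedged illustration of the kind of chi-square contingency analysis described above (era versus restraint type), the short Python sketch below uses scipy.stats.chi2_contingency. Only the era totals (115 and 194 events) come from the abstract; the split across restraint types is hypothetical.

```python
from scipy.stats import chi2_contingency

# Rows: era (pre-COVID-19, COVID-19); columns: restraint type used
# (chemical, physical, both). Cell counts are illustrative only;
# each row sums to the event totals reported in the abstract.
observed = [
    [60, 40, 15],   # pre-COVID-19 era, sums to 115
    [90, 70, 34],   # COVID-19 era, sums to 194
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```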

Keywords: COVID-19 pandemic, patient agitation, restraints, violence

Procedia PDF Downloads 143
868 Raman Tweezers Spectroscopy Study of Size Dependent Silver Nanoparticles Toxicity on Erythrocytes

Authors: Surekha Barkur, Aseefhali Bankapur, Santhosh Chidangil

Abstract:

The Raman Tweezers technique has become prevalent in single-cell studies. It combines Raman spectroscopy, which gives information about molecular vibrations, with optical tweezers, which use a tightly focused laser beam to trap single cells. Raman Tweezers thus enables researchers to analyze single cells and explore different applications, including studying blood cells, monitoring blood-related disorders, and probing silver nanoparticle-induced stress. Interest in the toxic effects of nanoparticles has grown alongside the increase in their applications, and the interaction of these nanoparticles with cells may vary with their size. We have studied the effect of silver nanoparticles of sizes 10 nm, 40 nm, and 100 nm on erythrocytes using the Raman Tweezers technique. Our aim was to investigate the size dependence of the nanoparticle effect on RBCs. We used a 785 nm laser (Starbright Diode Laser, Torsana Laser Tech, Denmark) for both trapping and Raman spectroscopic studies. A 100x oil immersion objective with a high numerical aperture (NA 1.3) was used to focus the laser beam into the sample cell. The back-scattered light was collected using the same microscope objective and focused into the spectrometer (Horiba Jobin Yvon iHR320 with a 1200 grooves/mm grating blazed at 750 nm). A liquid-nitrogen-cooled CCD (Symphony CCD-1024x256-OPEN-1LS) was used for signal detection. Blood was drawn from healthy volunteers into vacutainer tubes and centrifuged to separate the blood components. 1.5 ml of silver nanoparticle suspension was washed twice with distilled water, leaving 0.1 ml of silver nanoparticles at the bottom of the vial. The concentration of the suspension was 0.02 mg/ml, so 0.03 mg of nanoparticles was present in the 0.1 ml obtained. 25 µl of RBCs was diluted in 2 ml of PBS solution, treated with 50 µl (0.015 mg) of nanoparticles, and incubated in a CO2 incubator. Raman spectroscopic measurements were performed after 24 hours and 48 hours of incubation. All spectra were recorded with 10 mW laser power (785 nm diode laser), 60 s of accumulation time and 2 accumulations. Major changes were observed in the peaks at 565 cm-1, 1211 cm-1, 1224 cm-1, 1371 cm-1 and 1638 cm-1. A decrease in the intensity at 565 cm-1, an increase at 1211 cm-1 with a reduction at 1224 cm-1, an increase in intensity at 1371 cm-1, and the disappearance of the peak at 1635 cm-1 indicate deoxygenation of hemoglobin. Nanoparticles of larger size produced the greatest spectral changes, with smaller changes observed in the spectra of erythrocytes treated with 10 nm nanoparticles.
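
A hedged sketch of how such band-intensity changes can be quantified is given below: it compares the mean intensity inside a small window around each band between a control and a treated spectrum. The spectra here are synthetic stand-ins, not the study's data, and the window width is an assumption.

```python
import numpy as np


def band_intensity(wavenumbers, intensities, center, half_width=5.0):
    """Mean intensity within +/- half_width (cm-1) of a Raman band center."""
    mask = np.abs(wavenumbers - center) <= half_width
    return float(intensities[mask].mean())


# Synthetic, baseline-corrected spectra standing in for a control RBC and one
# treated with 100 nm AgNPs (illustrative arrays only, not measured data).
wn = np.linspace(400, 1800, 1400)
control = np.exp(-((wn - 565) ** 2) / 50) + 0.5 * np.exp(-((wn - 1371) ** 2) / 50)
treated = 0.6 * np.exp(-((wn - 565) ** 2) / 50) + 0.9 * np.exp(-((wn - 1371) ** 2) / 50)

for band in (565, 1371):
    ratio = band_intensity(wn, treated, band) / band_intensity(wn, control, band)
    print(f"{band} cm-1: treated/control intensity ratio = {ratio:.2f}")
```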

Keywords: erythrocytes, nanoparticle-induced toxicity, Raman tweezers, silver nanoparticles

Procedia PDF Downloads 293
867 Digitization and Economic Growth in Africa: The Role of Financial Sector Development

Authors: Abdul Ganiyu Iddrisu, Bei Chen

Abstract:

Digitization is the process of transforming analog material into digital form, especially for storage and use in a computer. The significant development of information and communication technology (ICT) over the past years has encouraged many researchers to investigate its contribution to promoting economic growth and reducing poverty. Yet empirical evidence on the effects of digitization on economic growth remains weak, particularly in Africa, because extant studies that explicitly evaluate the digitization-economic growth nexus are mostly reports and desk reviews. This points to an empirical knowledge gap in the literature. Hypothetically, digitization influences financial sector development, which in turn influences economic growth. Digitization has changed the financial sector and its operating environment. Obstacles to accessing finance, for instance, physical distance, minimum balance requirements, and low income flows, among others, can be circumvented. Savings have increased, micro-savers have opened bank accounts, and banks are now able to price short-term loans. This has the potential to develop the financial sector. However, empirical evidence on the digitization-financial development nexus is scarce. On the other hand, a number of studies maintain that financial sector development greatly influences the growth of economies. We therefore argue that financial sector development is one of the transmission mechanisms through which digitization affects economic growth. Employing macro country-level data from African countries and using fixed effects, random effects and Hausman-Taylor estimation approaches, this paper contributes to the literature by analysing economic growth in Africa, focusing on the role of digitization and financial sector development. First, we assess how digitization influences financial sector development in Africa. From an economic policy perspective, it is important to identify the digitization determinants of financial sector development so that action can be taken to reduce the economic shocks associated with financial sector distortions; this nexus is rarely examined empirically in the literature. Secondly, we examine the effect of financial sector development, proxied by domestic credit to the private sector and stock market capitalization as a percentage of GDP, on economic growth. Digitization is represented by the volume of digital/ICT equipment imported, and GDP growth is used to proxy economic growth. Finally, we examine the effect of digitization on economic growth in the light of financial sector development. The following key results were found. First, digitization propels financial sector development in Africa. Second, financial sector development enhances economic growth. Finally, contrary to our expectation, the results also indicate that digitization conditioned on financial sector development tends to reduce economic growth in Africa; however, the net effects suggest that digitization, overall, improves economic growth in Africa. We therefore conclude that digitization in Africa not only develops the financial sector but also unconditionally contributes to the growth of the continent's economies.
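
As a hedged sketch of one of the estimators named above, the Python fragment below runs a least-squares dummy-variable (LSDV) fixed-effects regression with country and year dummies via statsmodels. The panel, variable names and values are hypothetical stand-ins for the proxies described; the random-effects and Hausman-Taylor estimators are not reproduced here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical country-year panel; variable names are illustrative stand-ins
# for the proxies in the abstract (GDP growth, private credit/GDP, ICT imports).
df = pd.DataFrame({
    "country":     ["GHA", "GHA", "GHA", "KEN", "KEN", "KEN", "NGA", "NGA", "NGA"],
    "year":        [2015, 2016, 2017] * 3,
    "gdp_growth":  [3.9, 3.4, 8.1, 5.7, 5.9, 4.8, 2.7, -1.6, 0.8],
    "credit_gdp":  [19.2, 19.6, 15.8, 32.9, 32.1, 29.5, 14.2, 15.7, 14.2],
    "ict_imports": [1.1, 1.3, 1.4, 0.9, 1.0, 1.2, 0.7, 0.6, 0.8],
})

# Country and year dummies absorb unobserved heterogeneity, so the slope
# coefficients mirror a within (fixed-effects) estimator.
fe = smf.ols("gdp_growth ~ credit_gdp + ict_imports + C(country) + C(year)",
             data=df).fit()
print(fe.params)
```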

Keywords: digitalization, financial sector development, Africa, economic growth

Procedia PDF Downloads 140
866 Immersive Environment as an Occupant-Centric Tool for Architecture Criticism and Architectural Education

Authors: Golnoush Rostami, Farzam Kharvari

Abstract:

In recent years, developments in the field of architectural education have resulted in a shift from conventional teaching methods to alternative, state-of-the-art approaches and strategies. Criticism in architecture has been a key player in both the profession and education, but it has mostly been offered by renowned individuals. Hence, not only students and other professionals but also critics themselves may not have the option to experience buildings in person and must rely on available 2D materials, such as images and plans, which may not result in a holistic understanding and evaluation of buildings. On the other hand, immersive environments provide students and professionals the opportunity to experience buildings virtually and to base their evaluations on experience rather than judging from 2D materials. Therefore, the aim of this study is to compare the effect of experiencing buildings in immersive environments versus through 2D materials, including images and plans, on architecture criticism and architectural education. Accordingly, three buildings with parametric brick facades were studied through 2D materials and in Unreal Engine v. 24 as an immersive environment among 22 architecture students, who were selected using convenience sampling and divided into two equal groups using simple random sampling. This study used mixed methods, combining quantitative and qualitative components: the quantitative section was carried out with a questionnaire, and in-depth interviews were used for the qualitative section. A questionnaire was developed to measure three constructs: privacy regulation based on Altman's theory, the sufficiency of illuminance levels in the building, and the visual status of the view (how visually appealing the views were, given obstructions that may have been caused by the facades). Furthermore, participants had the opportunity to express their understanding and evaluation of the buildings in individual interviews. The collected questionnaire data were analyzed using independent t-tests and descriptive analyses in IBM SPSS Statistics v. 26, and the interviews were analyzed using the content analysis method. The results of the interviews showed that the participants who experienced the buildings in the immersive environment were able to evaluate the buildings more thoroughly and precisely than those who studied them through 2D materials. Moreover, the analyses of the respondents' questionnaires revealed statistically significant differences in the measured constructs between the two groups. The outcome of this study suggests that integrating immersive environments into the profession and architectural education as an effective and efficient tool for architecture criticism is vital, since these environments allow users to form a holistic evaluation of buildings for rigorous and sound criticism.
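
For the quantitative comparison described above, an independent-samples t-test on each construct's scores can be sketched as follows. The two groups of 11 scores on the 4-point scale are hypothetical illustrations, not the study's data (the original analysis was run in SPSS).

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical 4-point questionnaire scores for one construct (e.g., privacy
# regulation) from the two groups of 11 students each; values are illustrative.
immersive_group = np.array([3, 4, 3, 4, 4, 3, 4, 3, 4, 4, 3])
drawings_group  = np.array([2, 3, 2, 3, 2, 3, 3, 2, 2, 3, 3])

# Welch's t-test (no equal-variance assumption).
t, p = ttest_ind(immersive_group, drawings_group, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```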

Keywords: immersive environments, architecture criticism, architectural education, occupant-centric evaluation, pre-occupancy evaluation

Procedia PDF Downloads 134
865 A Study of Bilingual Development of a Mandarin and English Bilingual Preschool Child from China to Australia

Authors: Qiang Guo, Ruying Qi

Abstract:

This project aims to trace the developmental patterns of a child's Mandarin and English from China to Australia, from age 3;03 until 5;06. In childhood bilingual studies, there is an assumption that age 3 is the dividing line between simultaneous bilinguals and sequential bilinguals. Determining the similarities and differences between Bilingual First Language Acquisition, Early Second Language Acquisition, and Second Language Acquisition is of great theoretical significance. Studies on Bilingual First Language Acquisition (hereafter BFLA) in the past three decades have shown that the grammatical development of bilingual children progresses through the same developmental trajectories as that of their monolingual counterparts. Cross-linguistic interaction does not change basic grammatical knowledge, even in the weaker language. While BFLA studies show consistent results under the conditions of adequate input and a meaningful interactional context, research findings on Early Second Language Acquisition (ESLA) have demonstrated that this cohort develops their early English differently from both BFLA and SLA. The different development could be attributed to the age of migration, input pattern, and the Environmental Language (Lε). In the meantime, the dynamic relationship between the two languages is an issue that invites further attention. The present study attempts to fill this gap. The child in this case study started acquiring L1 Mandarin from birth in China, where the environmental language (Lε) coincided with L1 Mandarin. When she migrated to Australia at 3;06, where the environmental language (Lε) was L2 English, her Mandarin exposure was reduced. On the other hand, she received limited English input starting from 1;02 in China, where the environmental language (Lε) was L1 Mandarin, a non-English environment. When she relocated to Australia at 3;06, where the environmental language (Lε) coincided with L2 English, her English exposure significantly increased. The child's linguistic profile provides an opportunity to explore: (1) What does the child's English developmental route look like? (2) What does the L1 Mandarin developmental pattern look like under different environmental languages? (3) How do input and environmental language interact in shaping the bilingual child's linguistic repertoire? In order to answer these questions, two linguistic areas were selected as the focus of the investigation, namely subject realization and wh-questions. The chosen areas are contrastive in structure but perform the same semantic functions in the two linguistically distant languages, and they can serve as an ideal testing ground for exploring the developmental path in the two languages. The longitudinal case study adopts a combined approach of qualitative and quantitative analysis. Two years of Mandarin and English data are examined, and comparisons are made with age-matched monolinguals in each language in CHILDES. To the authors' best knowledge, this study is the first of its kind to examine a Mandarin-English bilingual child's development at a critical age, under different input patterns and different environmental languages (Lε). It also expands the scope of the theory of Lε, adding empirical evidence on the relationship between input and Lε in bilingual acquisition.

Keywords: bilingual development, age, input, environmental language (Le)

Procedia PDF Downloads 150
864 Flipped Classroom in a European Public Health Program: The Need for Students' Self-Directness

Authors: Nynke de Jong, Inge G. P. Duimel-Peeters

Abstract:

The flipped classroom, an instructional strategy and a type of blended learning that reverses the traditional learning environment by delivering instructional content, offline and online, inside and outside the classroom, has been implemented in a 4-week module focusing on ageing in Europe at Maastricht University. The main aim regarding the organization of this module was to implement flipped-classroom principles in order to create meaningful learning opportunities, while using educational technologies to deliver content outside the classroom. The technologies used in this module were an online interactive real-time lecture from England, two interactive face-to-face lectures with visual supports, one group session including role plays, and team-based learning meetings. The 2015-2016 cohort, which used these educational technologies, was compared with the 2014-2015 cohort, which studied the same content but followed the problem-based learning strategy that forms the educational basis of Maastricht University, on module evaluation criteria such as the organization and instructiveness of the module. The 2015-2016 cohort, with its specific organization, was also evaluated in more depth on outcomes such as (1) the duration of the lectures as experienced by students, (2) the experienced content of the lectures, (3) the experienced extent of interaction, and (4) the format of lecturing. It was important to know how students reflected on duration and content, taking into account their background knowledge so far, in order to distinguish between material that was sufficiently challenging given prior knowledge and material that did not fit the course. For the evaluation, a structured online questionnaire was used, in which the topics mentioned above were scored on a 4-point Likert scale. At the end, there was room for narrative feedback so that respondents could express in more detail, if they wished, what they experienced as good or not regarding the content of the module and its organization. In the end, the response rate of the evaluation was lower than expected (54%); however, on the basis of the written feedback and exam scores, we dare to state that it gives a good and reliable overview that encourages further work. The response rate may be explained by the fact that resit students were included as well, and that there may be too much evaluation at some time points in the program. Overall, students were enthusiastic about the organization and content of the module, but the level of self-directed behavior necessary for this kind of educational strategy was too low. Students need to be trained more in self-directness; therefore, the module will be simplified in 2016-2017, with fewer and clearer topics and extra guidance (a step-by-step procedure). More specific information regarding the technologies used, as well as the outcomes (minimum and maximum rankings, means and standard deviations), will be presented at the congress.

Keywords: blended learning, flipped classroom, public health, self-directness

Procedia PDF Downloads 219
863 Promoting 21st Century Skills through Telecollaborative Learning

Authors: Saliha Ozcan

Abstract:

Technology has become an integral part of our lives, aiding individuals in accessing higher-order competencies, such as global awareness, creativity, collaborative problem solving, and self-directed learning. Students need to acquire these competencies, often referred to as 21st century skills, in order to adapt to a fast-changing world. Today, an ever-increasing number of schools are exploring how engagement through telecollaboration can support language learning and promote 21st century skill development in classrooms. However, little is known regarding how telecollaboration may influence the way students acquire 21st century skills. In this paper, we aim to shed light on the potential implications of telecollaborative practices for the acquisition of 21st century skills. In our context, telecollaboration, which might be carried out in a variety of settings either synchronously or asynchronously, is considered the process of communicating and working together with other people or groups from different locations, through online digital tools or offline activities, to co-produce a desired work output. The study presented here describes and analyses the implementation of a telecollaborative project between two high school classes, one in Spain and the other in Sweden. The students in these classes were asked to carry out joint activities, including creating an online platform, aimed at raising awareness of the situation of Syrian refugees. We conducted a qualitative study in order to explore how language, culture, communication, and technology merge into the co-construction of knowledge, as well as support the attainment of the 21st century skills needed for network-mediated communication. To this end, we collected a significant amount of audio-visual data, including video recordings of classroom interaction and external Skype meetings. By analysing these data, we verify whether the initial pedagogical design and intended objectives of the telecollaborative project coincide with what emerges from the actual implementation of the tasks. Our findings indicate that, as well as planned activities, unplanned classroom interactions may lead to the acquisition of certain 21st century skills, such as collaborative problem solving and self-directed learning. This work is part of a wider project (KONECT, EDU2013-43932-P; Spanish Ministry of Economy and Finance), which aims to explore innovative, cross-competency-based teaching that can address the current gaps between today's educational practices and the needs of informed citizens in tomorrow's interconnected, globalised world.

Keywords: 21st century skills, telecollaboration, language learning, network mediated communication

Procedia PDF Downloads 125
862 Obesity and Cancer: Current Scientific Evidence and Policy Implications

Authors: Martin Wiseman, Rachel Thompson, Panagiota Mitrou, Kate Allen

Abstract:

Since 1997, World Cancer Research Fund (WCRF) International and the American Institute for Cancer Research (AICR) have been at the forefront of synthesising and interpreting the accumulated scientific literature on the link between diet, nutrition, physical activity and cancer, and of deriving evidence-based Cancer Prevention Recommendations. The 2007 WCRF/AICR 2nd Expert Report was a landmark in the analysis of evidence linking diet, body weight and physical activity to cancer and led to the establishment of the Continuous Update Project (CUP). In 2018, as part of the CUP, WCRF/AICR will publish a new synthesis of the current evidence and update the Cancer Prevention Recommendations. This will ensure that everyone - from policymakers and health professionals to members of the public - has access to the most up-to-date information on how to reduce the risk of developing cancer. Overweight and obesity play a significant role in cancer risk, and rates of both are increasing in many parts of the world. This session will give an overview of new evidence relating obesity to cancer since the 2007 Report; for example, the number of cancers for which obesity is judged to be a contributory cause has increased from seven to eleven. The session will also shed light on the well-established mechanisms underpinning the links between obesity and cancer. Additionally, the session will provide an overview of diet- and physical activity-related factors that promote positive energy imbalance, leading to overweight and obesity. Finally, the session will highlight how policy can be used to address overweight and obesity at a population level, using WCRF International's NOURISHING Framework. NOURISHING formalises a comprehensive package of policies to promote healthy diets and reduce obesity and non-communicable diseases; it is a tool for policymakers to identify where action is needed and to assess whether an approach is sufficiently comprehensive. The framework brings together ten policy areas across three domains: food environment, food system, and behaviour change communication. It is accompanied by a regularly updated database providing an extensive overview of implemented government policy actions from around the world. In conclusion, the session will provide an overview of obesity and cancer, highlighting the links seen in the epidemiology and exploring the mechanisms underpinning these, as well as the influences that help determine overweight and obesity, and it will illustrate policy approaches that can be taken to reduce overweight and obesity worldwide.

Keywords: overweight, obesity, nutrition, cancer, mechanisms, policy

Procedia PDF Downloads 157
861 Advanced Palliative Aquatics Care Multi-Device AuBento for Symptom and Pain Management by Sensorial Integration and Electromagnetic Fields: A Preliminary Design Study

Authors: J. F. Pollo Gaspary, F. Peron Gaspary, E. M. Simão, R. Concatto Beltrame, G. Orengo de Oliveira, M. S. Ristow Ferreira, J.C. Mairesse Siluk, I. F. Minello, F. dos Santos de Oliveira

Abstract:

Background: Although palliative care policies and services have been developed, research in this area continues to lag. An integrated model of palliative care has been suggested, one that includes complementary and alternative services aimed at improving the well-being of patients and their families. The palliative aquatics care multi-device (AuBento) uses several electromagnetic techniques to decrease pain and promote well-being through relaxation and interaction among patients, specialists, and family members. Aim: The scope of this paper is to present a preliminary design study of a device capable of exploring the various existing theories on the biomedical application of magnetic fields. This will be achieved by standardizing clinical data collection with sensory integration and adding new therapeutic options, so as to develop advanced palliative aquatics care and innovate in symptom and pain management. Methods: The research methodology was based on the Work Package methodology for project development, separating the activities into seven different Work Packages. The theoretical basis was established through an integrative literature review according to the specific objectives of each Work Package; this provided a broad analysis which, together with the multiplicity of proposals and the interdisciplinarity of the research team involved, generated consistent and understandable concepts for the biomedical application of magnetic fields in palliative care. Results: The AuBento ambience was conceived with restricted electromagnetic exposure (avoiding data collection bias) and sensory integration (allowing relaxation associated with hydrotherapy, music therapy, and chromotherapy, or a floating-tank-like experience). The device has a multipurpose configuration enabling classic or exploratory options for the biomedical application of magnetic fields at the researcher's discretion. Conclusions: Several patients in diverse therapeutic contexts may benefit from the use of magnetic fields or fluids, which supports the impetus for clinical research in this area. A device usable in controlled and multipurpose environments may contribute to standardizing research and exploring new theories. Future research may demonstrate the possible benefits of the aquatics care multi-device AuBento in improving well-being and symptom control for palliative care patients and their families.

Keywords: advanced palliative aquatics care, magnetic field therapy, medical device, research design

Procedia PDF Downloads 128
860 Gait Analysis in Total Knee Arthroplasty

Authors: Neeraj Vij, Christian Leber, Kenneth Schmidt

Abstract:

Introduction: Total knee arthroplasty is a common procedure, and it is well known that the biomechanics of the knee do not fully return to their normal state afterwards. Motion analysis has been used to study the biomechanics of the knee after total knee arthroplasty. The purpose of this scoping review is to summarize the current use of gait analysis in total knee arthroplasty and to identify the preoperative motion analysis parameters for which a systematic review aimed at determining reliability and validity may be warranted. Materials and Methods: This IRB-exempt scoping review strictly followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist. Five search engines were searched for a total of 279 articles. Articles underwent title and abstract screening followed by full-text screening. Included articles were placed in the following sections: gait analysis as a research tool for operative decisions, other research applications for motion analysis in total knee arthroplasty, gait analysis as a tool for predicting radiologic outcomes, and gait analysis as a tool for predicting clinical outcomes. Results: Eleven articles studied gait analysis as a research tool for operative decisions; motion analysis is currently used to study surgical approaches, surgical techniques, and implant choice. Five articles studied other research applications for motion analysis in total knee arthroplasty, including the role of unicompartmental knee arthroplasty and novel physical therapy protocols aimed at optimizing post-operative care. Two articles studied motion analysis as a tool for predicting radiographic outcomes; preoperative gait analysis has identified parameters that can predict postoperative tibial component migration. Fifteen articles studied motion analysis in conjunction with clinical scores. Conclusions: There is a broad range of applications within the research domain of total knee arthroplasty, and the potential range of application is likely larger. However, the current literature is limited by vague definitions of ‘gait analysis’ or ‘motion analysis’ and by a limited number of articles with both preoperative and postoperative functional and clinical measures. Knee adduction moment, knee adduction impulse, total knee range of motion, varus angle, cadence, stride length, and velocity have the potential for integration into composite clinical scores. A systematic review aimed at determining the validity, reliability, sensitivities, and specificities of these variables is warranted.

Keywords: motion analysis, joint replacement, patient-reported outcomes, knee surgery

Procedia PDF Downloads 94
859 The Idea of Building a Reservoir Under the Ground in the Mekong Delta in Vietnam

Authors: Huu Hue Van

Abstract:

The Mekong Delta is the region in southwestern Vietnam where the Mekong River approaches and flows into the sea through a network of distributaries. The Climate Change Research Institute at the University of Can Tho, in studying the possible consequences of climate change, has predicted that many provinces in the Mekong Delta will be flooded by the year 2030. The Mekong Delta lacks fresh water in the dry season. Water for daily life, industry and agriculture in the dry season is mainly taken from water-bearing soil layers under the ground (aquifers); this water has been depleted, and the water level in the aquifers has decreased. The Mekong Delta may therefore face two bad scenarios in the future. 1) The Mekong Delta will be submerged into the sea again, due to: ground subsidence (from over-exploitation of groundwater); subsidence of structures caused by the low groundwater level (ten years ago, some structures in the Mekong Delta were built on foundations of Melaleuca poles; these poles must remain fully within the saturated soil layer, otherwise they decay easily, and because the tops of the poles are now above the groundwater level, they will decay and cause subsidence); erosion of the river banks (hydroelectric dams upstream on the Mekong River block the flow and reduce the concentration of suspended sediment, which causes bank erosion); and flooding of the delta because of sea level rise (climate change). 2) The Mekong Delta will be deserted: people will migrate elsewhere to make a living because cultivation becomes impossible due to capillary rise of alum (in the Mekong Delta there is a layer of alum soil underground; when the groundwater level falls below this layer, alum rises by capillary action into the arable soil layer) and because there is no fresh water for cultivation and daily life (owing to saline intrusion and groundwater depletion in the aquifers below). The Mekong Delta currently has about seven aquifers below it, with a total depth of about 500 m. Water has mainly been exploited from the middle-upper Pleistocene aquifer (qp2-3). The major cause of both bad scenarios is over-exploitation of water in the aquifers. Therefore, studying and building water reservoirs within the seven aquifers would solve many pressing problems: preventing subsidence; providing water for the whole delta, especially the coastal provinces; working with rather than against nature; saving land (a surface reservoir on the delta would require a great deal of land); and limiting pollution (hydraulic structures built to prevent saline intrusion and to store water in a surface lake tend to pollute the lake). It is therefore necessary to build a reservoir under the ground, within the aquifers, in the Mekong Delta. Such a super-sized reservoir would contribute to the existence and development of the Mekong Delta.

Keywords: aquifers, aquifers storage, groundwater, land subsidence, underground reservoir

Procedia PDF Downloads 85