Search results for: visual media and computer network etc
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11050

250 The Legal and Regulatory Gaps of Blockchain-Enabled Energy Prosumerism

Authors: Karisma Karisma, Pardis Moslemzadeh Tehrani

Abstract:

This study aims to conduct a high-level strategic dialogue on the lack of consensus, consistency, and legal certainty regarding blockchain-based energy prosumerism so that appropriate institutional and governance structures can be put in place to address the inadequacies and gaps in the legal and regulatory framework. National and global decarbonization targets under the Paris Agreement are a driving force behind climate goals and policies. In recent years, efforts to ‘demonopolize’ and ‘decentralize’ energy generation and distribution have driven the energy transition toward decentralized systems, invoking concepts such as ownership, sovereignty, and autonomy of renewable energy (RE) sources. The emergence of individual and collective forms of prosumerism and the rapid diffusion of blockchain is expected to play a critical role in the decarbonization and democratization of energy systems. However, there is a ‘regulatory void’ relating to individual and collective forms of prosumerism that could prevent the rapid deployment of blockchain systems and potentially stagnate the operationalization of blockchain-enabled energy sharing and trading activities. The application of broad and facile regulatory fixes may be insufficient to address the major regulatory gaps. First, to the authors’ best knowledge, the concepts and elements circumjacent to individual and collective forms of prosumerism have not been adequately described in the legal frameworks of many countries. Second, there is a lack of legal certainty regarding the creation and adaptation of business models in a highly regulated and centralized energy system, which inhibits the emergence of prosumer-driven niche markets. There are also current and prospective challenges relating to the legal status of blockchain-based platforms for facilitating energy transactions, anticipated with the diffusion of blockchain technology. With the rise of prosumerism in the energy sector, the areas of (a) network charges, (b) energy market access, (c) incentive schemes, (d) taxes and levies, and (e) licensing requirements are still uncharted territories in many countries. The uncertainties emanating from these areas pose a significant hurdle to the widespread adoption of blockchain technology, a complementary technology that offers added value and competitive advantages for energy systems. The authors undertake a conceptual and theoretical investigation to elucidate the lack of consensus, consistency, and legal certainty in the study of blockchain-based prosumerism. In addition, the authors set an exploratory tone to the discussion by taking an analytically eclectic approach that builds on multiple sources and theories to delve deeper into this topic. As an interdisciplinary study, this research accounts for the convergence of regulation, technology, and the energy sector. The study primarily adopts desk research, which examines regulatory frameworks and conceptual models for crucial policies at the international level to foster an all-inclusive discussion. With their reflections and insights into the interaction of blockchain and prosumerism in the energy sector, the authors do not aim to develop definitive regulatory models or instrument designs, but to contribute to the theoretical dialogue to navigate seminal issues and explore different nuances and pathways. Given the emergence of blockchain-based energy prosumerism, identifying the challenges, gaps, and fragmentation of governance regimes is key to facilitating global regulatory transitions.

Keywords: blockchain technology, energy sector, prosumer, legal and regulatory

Procedia PDF Downloads 181
249 Design and Construction of a Home-Based, Patient-Led, Therapeutic, Post-Stroke Recovery System Using Iterative Learning Control

Authors: Marco Frieslaar, Bing Chu, Eric Rogers

Abstract:

Stroke is a devastating illness that is the second biggest cause of death in the world (after heart disease). Where it does not kill, it leaves survivors with debilitating sensory and physical impairments that not only seriously harm their quality of life, but also cause a high incidence of severe depression. It is widely accepted that early intervention is essential for recovery, but current rehabilitation techniques largely favor hospital-based therapies, which have restricted access, require expensive and specialist equipment, and tend to side-step the emotional challenges. In addition, there is insufficient funding available to provide the long-term assistance that is required. As a consequence, recovery rates are poor. The relatively unexplored solution is to develop therapies that can be harnessed in the home and are formulated from technologies that already exist in everyday life. This would empower individuals to take control of their own improvement and provide choice in terms of when and where they feel best able to undertake their own healing. This research seeks to identify how effective post-stroke rehabilitation therapy can be applied to upper limb mobility within the physical context of a home rather than a hospital. This is being achieved through the design and construction of an automation scheme, based on iterative learning control and the Riener muscle model, that has the ability to adapt to the user, react to their level of fatigue, and provide tangible physical recovery. It utilizes a SMART phone and laptop to construct an iterative learning control (ILC) system that monitors upper arm movement in three dimensions as a series of exercises are undertaken. The equipment generates functional electrical stimulation to assist in muscle activation and thus improve directional accuracy. In addition, it monitors speed, accuracy, areas of motion weakness, and similar parameters to create a performance index that can be compared over time and extrapolated to establish an independent and objective assessment scheme, plus an approximate estimate of the predicted final outcome. To further extend its assessment capabilities, nerve conduction velocity readings are taken by the software between the shoulder and hand muscles. This is utilized to measure the speed of neural signal transfer along the arm, and over time an online indication of regeneration levels can be obtained. This will show whether or not sufficient training intensity is being achieved even before perceivable movement dexterity is observed. The device also provides the option to connect to other users, via the internet, so that the patient can avoid feelings of isolation and can undertake movement exercises together with others in a similar position. This should not only encourage participation in rehabilitation but also create the potential for an emotional support network. It is intended that this approach will extend the availability of stroke recovery options, enable ease of access at a low cost, reduce susceptibility to depression and, through these endeavors, enhance the overall recovery success rate.
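
For illustration, a minimal sketch of the P-type iterative learning control update underlying such a scheme is given below, assuming a toy first-order plant in place of the Riener muscle model and FES hardware, and an arbitrarily chosen learning gain; it is not the authors' implementation.

```python
import numpy as np

# P-type iterative learning control (ILC) sketch.
# A toy first-order linear plant stands in for the arm/muscle dynamics,
# and the learning gain L is an illustrative assumption.

T = 100                                # samples per repetition of the exercise
t = np.arange(T)
reference = np.sin(np.pi * t / T)      # desired arm trajectory (illustrative)

a, b = 0.9, 0.1                        # toy plant: y[k+1] = a*y[k] + b*u[k]
L = 2.0                                # ILC learning gain (assumed)

def run_trial(u):
    """Simulate one repetition of the exercise for input profile u."""
    y = np.zeros(T)
    for k in range(T - 1):
        y[k + 1] = a * y[k] + b * u[k]
    return y

u = np.zeros(T)                        # start with no assistance
for trial in range(10):
    y = run_trial(u)
    e = reference - y                  # tracking error for this repetition
    u = u + L * np.append(e[1:], 0.0)  # P-type update using the next-step error
    print(f"trial {trial}: RMS error = {np.sqrt(np.mean(e**2)):.4f}")
```

Run repeatedly, the error shrinks from one repetition of the exercise to the next, which is the learning behaviour the system exploits across therapy sessions.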

Keywords: home-based therapy, iterative learning control, Riener muscle model, SMART phone, stroke rehabilitation

Procedia PDF Downloads 264
248 A Framework of Virtualized Software Controller for Smart Manufacturing

Authors: Pin Xiu Chen, Shang Liang Chen

Abstract:

A virtualized software controller is developed in this research to replace traditional hardware control units. This virtualized software controller transfers motion interpolation calculations from the motion control units of end devices to edge computing platforms, thereby reducing the end devices' computational load and hardware requirements and making maintenance and updates easier. The study also applies the concept of microservices, dividing the control system into several small functional modules that are then deployed to a cloud data server. This reduces the interdependency among modules and enhances the overall system's flexibility and scalability. Finally, with containerization technology, the system can be deployed and started in a matter of seconds, which is more efficient than traditional virtual machine deployment methods. Furthermore, this virtualized software controller communicates with end control devices via wireless networks, making the placement of production equipment or the redesign of processes more flexible and no longer limited by physical wiring. To handle the large data flow and maintain low-latency transmission, this study integrates 5G technology, fully utilizing its high speed, wide bandwidth, and low latency to achieve rapid and stable remote machine control. An experimental setup is designed to verify the feasibility and test the performance of this framework. This study designs a smart manufacturing site with a 5G communication architecture, serving as a field for experimental data collection and performance testing. The smart manufacturing site includes one robotic arm, three Computer Numerical Control machine tools, several Input/Output ports, and an edge computing architecture. All machinery information is uploaded to edge computing servers and cloud servers via 5G communication and the Internet of Things framework. After analysis and computation, this information is converted into motion control commands, which are transmitted back to the relevant machinery for motion control through 5G communication. The communication time intervals at each stage are calculated using the C++ chrono library to measure the time difference for each command transmission. The relevant test results will be organized and displayed in the full text.
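
The stage-by-stage latency measurement described above is essentially a timestamp difference around each command exchange; the study performs it with the C++ chrono library, and the following Python sketch shows the same idea against an assumed echo-style edge endpoint (the host, port, and message format are placeholders, not part of the study).

```python
import socket
import time

# Round-trip latency measurement sketch for a motion-control command.
# Timestamp before sending, timestamp after the acknowledgement arrives,
# and the difference is the communication interval for that command.

EDGE_HOST, EDGE_PORT = "192.0.2.10", 5000   # placeholder edge-server address

def measure_round_trip(command: bytes, n: int = 100) -> float:
    """Return the mean round-trip time in milliseconds over n exchanges."""
    latencies = []
    with socket.create_connection((EDGE_HOST, EDGE_PORT), timeout=1.0) as s:
        for _ in range(n):
            t0 = time.perf_counter()            # timestamp before sending
            s.sendall(command)
            s.recv(1024)                        # wait for the echoed/acknowledged command
            latencies.append((time.perf_counter() - t0) * 1e3)
    return sum(latencies) / len(latencies)

# Example (requires a reachable echo endpoint at EDGE_HOST:EDGE_PORT):
# print(f"mean round trip: {measure_round_trip(b'G01 X10 Y20'):.2f} ms")
```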

Keywords: 5G, MEC, microservices, virtualized software controller, smart manufacturing

Procedia PDF Downloads 82
247 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(VI) Solutions: On the Lookout for a More Sustainable Process in Radioanalytical Chemistry through Titration-on-a-Chip

Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas

Abstract:

A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the quantification of acidity in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV) or Pu(IV) without taking into account the acidity contribution from the hydrolysis of such metal ions. It is, in fact, an operation with an essential role in the control of the nuclear fuel recycling process. The main objective behind the technical optimization of the current ‘beaker’ method was to reduce the amount of radioactive substance to be handled by the laboratory personnel, to ease instrumentation adjustability within a glove-box environment, and to allow high-throughput analysis for conducting more cost-effective operations. The measurement technique is based on the concept of Taylor-Aris dispersion in order to create, inside a 200 μm × 5 cm circular cylindrical micro-channel, a linear concentration gradient in less than a second. The proposed analytical methodology relies on actinide complexation using a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; the neutralization boundary can be visualized in a detection range of 500 nm to 600 nm thanks to the addition of a pH-sensitive fluorophore. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro-channel. This feature simplifies the fabrication and ease of use of the micro-device, as it does not need a complex micro-channel network or passive mixers to generate the chemical gradient. Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, it can be generated in well under a second, making the process more time-efficient than other source-sink passive diffusion devices. The resulting linear gradient generator device was therefore adapted to perform, for the first time, a volumetric titration on a chip, where the amount of reagents used is fixed by the total volume of the micro-channel, avoiding the substantial waste generation of other flow-based titration techniques. The associated analytical method is automated, and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5 M of actinide ion and nitric acid in a concentration range of 0.5 M to 3 M. In addition to automation, the developed analytical methodology and technique greatly improve the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousand-fold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. The developed device therefore represents a great step towards an easy-to-handle nuclear-related application, which in the short term could be used to improve laboratory safety as much as to reduce the environmental impact of the radioanalytical chain.
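
For reference, the textbook Taylor-Aris result that underlies this gradient generation, for laminar (Poiseuille) flow in a circular capillary of radius a, is given below; it is standard theory rather than a result from the abstract.

```latex
% Effective axial dispersion coefficient for Taylor-Aris dispersion in a
% circular capillary of radius a, molecular diffusivity D_m and mean
% velocity \bar{u}:
K_{\mathrm{eff}} \;=\; D_m \;+\; \frac{a^{2}\,\bar{u}^{2}}{48\,D_m}
\;=\; D_m\!\left(1 + \frac{\mathrm{Pe}^{2}}{48}\right),
\qquad \mathrm{Pe} \;=\; \frac{a\,\bar{u}}{D_m}
```

The Pe²/48 term dominates at typical microfluidic flow rates, which is what lets the pressure-driven flow stretch the reagent interface into a near-linear axial concentration profile within the 200 μm channel.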

Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration

Procedia PDF Downloads 387
246 Comprehensive Geriatric Assessments: An Audit into Assessing and Improving Uptake on Geriatric Wards at King’s College Hospital, London

Authors: Michael Adebayo, Saheed Lawal

Abstract:

The Comprehensive Geriatric Assessment (CGA) is the multidimensional tool used to assess elderly, frail patients either on admission to hospital care or at a community level in primary care. It is a tool designed with the aim of using a holistic approach to managing patients. A 2011 Cochrane review of CGA use found that it increases the likelihood of patients being alive and living in their own home post-discharge by 30%. RCTs have also found 10–15% reductions in readmission rates, as well as reductions in institutionalization, resource use, and costs. Past audit cycles at King’s College Hospital, Denmark Hill, had shown inconsistent evidence of CGA completion in patient discharge summaries (less than 50%). Junior doctors on the Health and Ageing (HAU) wards have struggled to sustain the efforts of past audit cycles due to the quick turnover in staff (four-month placements for trainees). This 7th cycle created a multi-faceted approach to solving this problem amongst staff and creating lasting change. Methods: 1. We adopted multidisciplinary team involvement to support doctors. MDT staff, e.g. nurses, physiotherapists, occupational therapists and dieticians, were actively encouraged to fill in the CGA document. 2. We added a CGA document pro-forma to “Sunrise EPR” (the Trust computer system). These CGAs were to be automatically included in the discharge summary. 3. Prior to assessing uptake, we used a spot audit questionnaire to assess staff awareness/knowledge of what a CGA was. 4. We designed and placed posters highlighting the domains of the CGA and the MDT roles suited to each domain on the geriatric “Health and Ageing Wards” (HAU) in the hospital. 5. We performed an audit of the percentage of discharge summaries that included a CGA and MDT role input. 6. We nominated ward champions on each ward from each multidisciplinary specialty to monitor and encourage colleagues to actively complete CGAs. 7. We initiated further education of ward staff on the CGA's importance by discussion at board rounds and weekly multidisciplinary meetings. Outcomes: 1. The majority of respondents to our spot audit were aware of what a CGA was, but fewer had used the EPR document to complete one. 2. We found that CGAs were not being commenced for nearly 50% of patients discharged from HAU wards and the Frailty Assessment Unit.

Keywords: comprehensive geriatric assessment, CGA, multidisciplinary team, quality of life, mortality

Procedia PDF Downloads 84
245 The Underground Ecosystem of Credit Card Frauds

Authors: Abhinav Singh

Abstract:

Point of Sale (POS) malware has been stealing the limelight this year. It has been the elemental factor in some of the biggest breaches uncovered in the past couple of years. Some of them include: • Target: the retail giant reported close to 40 million credit card records stolen. • Home Depot: the home product retailer reported a breach of close to 50 million credit records. • Kmart: the US retailer recently announced a breach of 800 thousand credit card details. In 2014 alone, there have been reports of over 15 major breaches of payment systems around the globe. Memory-scraping malware infecting point of sale devices has been the lethal weapon used in these attacks. This malware is capable of reading payment information from the payment device memory before it is encrypted, and then sending the stolen details to its parent server. It can record all the critical payment information, such as the card number, security number, owner, etc., and all of this information is delivered in raw format. This talk will cover what happens after these details have been sent to the malware authors. The entire ecosystem of credit card fraud can be broadly classified into three steps: • Purchase of raw details and dumps • Converting them to plastic cash/cards • Shop! Shop! Shop! The focus of this talk will be on the above-mentioned points and how they form an organized network of cyber-crime. The first step involves the buying and selling of the stolen details. The key points to emphasize are: • How this raw information is sold in the underground market • The buyer and seller anatomy • Building your shopping cart and preferences • The importance of reputation and vouches • Customer support and replacements/refunds. These are some of the key points that will be discussed. But the story does not end here. At this stage the buyer only has the raw card information; how will it be converted to plastic cash? Here the second part of this underground economy comes into the picture, wherein these raw details are converted into actual cards. There are well-organized services running underground that can help convert these details into plastic cards, and we will discuss this technique in detail. At last, the final step involves shopping with the stolen cards. The cards generated from the stolen details can easily be used to swipe-and-pay for goods at different retail shops. Usually these purchases are of expensive items that have good resale value. Apart from using the cards at stores, there are underground services that let you deliver online orders to their dummy addresses; once the package is received, it is forwarded to the original buyer. These services charge based on the value of the item being delivered. The overall underground ecosystem of credit card fraud works in a bulletproof way and involves people working in close groups and making heavy profits. This is a brief summary of what I plan to present at the talk. I have done extensive research and have collected a good deal of material to present as samples, including: • A list of underground forums • Credit card dumps • IRC chats among these groups • Personal chats with big card sellers • An inside view of these forum owners. The talk will be concluded by throwing light on how these breaches are tracked during investigation: how credit card breaches are tracked down and what steps financial institutions can take to build an incident response around them.

Keywords: POS malware, credit card frauds, enterprise security, underground ecosystem

Procedia PDF Downloads 439
244 Rangeland Monitoring by Computerized Technologies

Authors: H. Arzani, Z. Arzani

Abstract:

Every piece of rangeland has a different set of physical and biological characteristics. This requires the manager to synthesize various information through regular monitoring to define trends of change and make the right decisions for sustainable management. Range managers therefore need to use computerized technologies to monitor rangeland and select the best management practices. There are four examples of computerized technologies that can benefit sustainable management: (1) Photographic method for cover measurement: The method was tested in different vegetation communities in semi-humid and arid regions. Interpretation of pictures of quadrats was done using ArcView software. Data analysis was done in SPSS using a paired t-test. Based on the results, the photographic method can generally be used to measure ground cover in most vegetation communities. (2) GPS application for matching ground samples and satellite pixels: In the two provinces of Tehran and Markazi, six reference points were selected and, at each point, eight GPS models were tested. A significant relation among GPS model, time, and location with the accuracy of the estimated coordinates was found. After selection of a suitable method, the coordinates of plots along four transects in each of six rangeland sites in Markazi province were recorded. The best time for GPS application was in the morning hours, and the Etrex Vista had less error than the other models. (3) Application of satellite data for rangeland monitoring: Focusing on the long-term variation of vegetation parameters such as vegetation cover and production is essential. Our study in grass and shrub lands showed significant correlations between quantitative vegetation characteristics and satellite data, so it is possible to monitor rangeland vegetation using digital data for sustainable utilization. (4) Rangeland suitability classification with GIS: Range suitability assessment can facilitate sustainable management planning. Three sub-models of sensitivity to erosion, water suitability, and forage production outputs were entered into the final range suitability classification model. GIS facilitated the classification of range suitability and produced suitability maps for sheep grazing. Generally, digital computers assist range managers to interpret, modify, calibrate, or integrate information for correct management.
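
As an illustration of the paired comparison described in (1), the following sketch reproduces the workflow with SciPy rather than SPSS; the cover values are fabricated placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Paired comparison sketch for the photographic cover method.
# Each quadrat is measured twice (field method vs. photo interpretation),
# so the two samples are paired and a paired t-test is appropriate.

ground_cover = np.array([42.0, 55.5, 38.2, 61.0, 47.3, 52.8, 35.9, 59.4])  # % cover, field method
photo_cover  = np.array([40.5, 56.2, 37.0, 62.3, 46.1, 51.0, 36.5, 60.8])  # % cover, photo method

t_stat, p_value = stats.ttest_rel(ground_cover, photo_cover)
print(f"paired t = {t_stat:.3f}, p = {p_value:.3f}")
# A non-significant result (p > 0.05) supports using the photographic
# method interchangeably with the field measurement in these communities.
```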

Keywords: computer, GPS, GIS, remote sensing, photographic method, monitoring, rangeland ecosystem, management, suitability, sheep grazing

Procedia PDF Downloads 367
243 Future Research on the Resilience of Tehran’s Urban Areas Against Pandemic Crises Horizon 2050

Authors: Farzaneh Sasanpour, Saeed Amini Varaki

Abstract:

Resilience is an important goal for cities, as urban areas face an increasing range of challenges in the 21st century; therefore, given the characteristics of such risks, cities need to adopt an approach that responds to sensitive conditions in the risk management process, namely urban resilience. Most resilience assessments, however, have dealt with natural hazards, and less attention has been paid to pandemics. In the COVID-19 pandemic, Iran, and especially the metropolis of Tehran, was not immune from the crisis caused by its effects and consequences and faced many challenges. One of the methods that can increase the resilience of the Tehran metropolis against possible future crises is futures studies. This research is applied in type; its general pattern is descriptive-analytical, and since it seeks to relate the components and indicators of urban resilience to pandemic crises and to explain scenarios, its futures-studies method is exploratory. To extract and determine the key factors and driving forces affecting the resilience of Tehran's urban areas against pandemic crises (COVID-19), structural analysis of mutual effects and the MICMAC software were used. The primary factors and variables affecting the resilience of Tehran's urban areas were organized into five main factors, including physical-infrastructural (transportation, spatial and physical organization, streets and roads, multi-purpose development), with 39 variables, based on the mutual-effects analysis. Finally, the key factors and variables were categorized into five main areas: managerial-institutional with five variables; technology (smartness) with three variables; economic with two variables; socio-cultural with three variables; and physical-infrastructural with seven variables. These factors and variables were used as the key factors and effective driving forces on the resilience of Tehran's urban areas against pandemic crises (COVID-19) in explaining and developing the scenarios. To develop the scenarios, intuitive logic, scenario planning as a futures-research method, and the Global Business Network (GBN) model were used. Finally, four scenarios were drawn and selected using a creative method based on the metaphor of weather conditions, indicative of the general outline of the conditions of the Tehran metropolis in each situation. The scenarios of the Tehran metropolis are therefore: 1) the solar scenario (optimal governance and management, leading in smart technology); 2) the cloud scenario (optimal governance and management, following in smart technology); 3) the dark scenario (unfavorable governance and management, leading in smart technology); and 4) the storm scenario (unfavorable governance and management, following in smart technology). The solar scenario shows the best situation and the storm scenario the worst situation for the Tehran metropolis.
According to the findings of this research, in order to achieve a better tomorrow for the metropolis of Tehran, city managers can use futures-research methods to form a coherent picture, with the long-term horizon of 2050, of all the factors and components of urban resilience against pandemic crises, chart the path of urban resilience, and provide platforms for upgrading and increasing the capacity to deal with crises. This would create the necessary conditions for the realization, development, and evolution of Tehran's urban areas in a way that guarantees long-term balance and stability in all dimensions and at all levels.
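
The structural-analysis step can be illustrated with a small sketch in the spirit of MICMAC: row sums of an influence matrix give driving power and column sums give dependence. The matrix values, scale, and variable names below are illustrative assumptions, not the study's expert scores.

```python
import numpy as np

# Cross-impact (structural analysis) sketch.
# influence[i, j] = how strongly variable i influences variable j (0-3 scale, assumed)

variables = ["governance", "smart technology", "economy",
             "socio-cultural", "physical infrastructure"]

influence = np.array([
    [0, 3, 2, 2, 3],
    [2, 0, 2, 1, 3],
    [1, 2, 0, 1, 2],
    [1, 1, 1, 0, 1],
    [1, 2, 2, 1, 0],
])

driving_power = influence.sum(axis=1)   # row sums: how much a variable drives the others
dependence    = influence.sum(axis=0)   # column sums: how much it is driven by the others

for name, drv, dep in zip(variables, driving_power, dependence):
    print(f"{name:24s} driving={drv:2d} dependence={dep:2d}")

# Variables with high driving power and low dependence are the candidate
# key drivers used to frame the scenario axes (e.g. governance quality
# versus leadership in smart technology).
```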

Keywords: future research, resilience, crisis, pandemic, covid-19, Tehran

Procedia PDF Downloads 67
242 Innovative Grafting of Polyvinylpyrrolidone onto Polybenzimidazole Proton Exchange Membranes for Enhanced High-Temperature Fuel Cell Performance

Authors: Zeyu Zhou, Ziyu Zhao, Xiaochen Yang, Ling AI, Heng Zhai, Stuart Holmes

Abstract:

As a promising sustainable alternative to traditional fossil fuels, fuel cell technology is highly favoured due to its enhanced working efficiency and reduced emissions. In the context of high-temperature fuel cells (operating above 100 °C), the most commonly used proton exchange membrane (PEM) is the phosphoric acid (PA)-doped polybenzimidazole (PBI) membrane. Grafting is a promising strategy to advance PA-doped PBI PEM technology. Existing grafting modifications of PBI PEMs mainly focus on grafting phosphate-containing or alkaline groups onto the PBI molecular chains. However, quaternary ammonium-based grafting approaches face a common challenge: the deacidifying agents needed to initiate the N-alkylation reaction, such as NaH, NaOH, KOH, K2CO3, etc., can lead to ionic crosslinking between the quaternary ammonium group and PBI. Polyvinylpyrrolidone (PVP) is another widely used polymer; the N-heterocycle groups within PVP endow it with a significant ability to absorb PA. Recently, PVP has attracted substantial attention in the field of fuel cells due to its reduced environmental impact and impressive fuel cell performance. However, due to the poor compatibility of PVP with PBI, few studies apply PVP in PA-doped PBI PEMs. This work introduces an innovative strategy to graft PVP onto PBI to form a network-like polymer. Due to the absence of quaternary ammonium groups, PVP does not pose issues related to crosslinking with PBI. Moreover, the nitrogen-containing functional groups on PVP provide PBI with a robust phosphoric acid retention ability. The proton nuclear magnetic resonance (NMR) spectrum analysis indicates the successful completion of the grafting reaction, in which N-alkylation reactions happen on both sides of the grafting agent 1,4-bis(chloromethyl)benzene. On one side, the reaction takes place with the hydrogen atoms on the imidazole groups of PBI, while on the other side, it reacts with the terminal amino group of PVP. The XPS results provide additional elemental evidence: on the synthesized PBI-g-PVP surfaces, chlorine is absent (the chlorine in the grafting agent 1,4-bis(chloromethyl)benzene is substituted) while sulfur is present (the sulfur in the amino-terminated PVP appears in PBI), which demonstrates that the grafting reaction occurred and PVP was successfully grafted onto PBI. These modified membranes were then prepared into MEAs. It was found that during fuel cell operation, all the grafted membranes showed substantial improvement in maximum current density and peak power density compared to the unmodified one. For PBI-g-PVP 30, with a grafting degree of 22.4%, the peak power density reaches 1312 mW cm⁻², marking a 59.6% enhancement compared to the pristine PBI membrane. The improvement is caused by the improved PA binding ability of the membrane after grafting. The AST results show that the grafted membranes have better long-term durability and performance than the unmodified membranes, attributed to the added PA binding sites, which can effectively prevent the PA leaching caused by proton migration. In conclusion, the test results indicate that grafting PVP onto PBI is a promising strategy that can effectively improve fuel cell performance.

Keywords: fuel cell, grafting modification, PA doping ability, PVP

Procedia PDF Downloads 79
241 Will My Home Remain My Castle? Tenants’ Interview Topics regarding an Eco-Friendly Refurbishment Strategy in a Neighborhood in Germany

Authors: Karin Schakib-Ekbatan, Annette Roser

Abstract:

According to the Federal Government’s plans, the German building stock should be virtually climate neutral by 2050. Thus, the “EnEff.Gebäude.2050” funding initiative was launched, complementing the projects of the Energy Transition Construction research initiative. Beyond the construction and renovation of individual buildings, solutions must be found at the neighborhood level. The subject of the presented pilot project is a building ensemble from the Wilhelminian period in Munich, which is planned to be refurbished based on a socially compatible, energy-saving, innovative-technical modernization concept. The building ensemble, with about 200 apartments, is part of a building cooperative. To create an optimized network and possible synergies between researchers and projects of the funding initiative, a Scientific Accompanying Research programme was established for cross-project analyses of findings and results in order to identify further research needs and trends. Thus, the project is characterized by an interdisciplinary approach that combines constructional, technical, and socio-scientific expertise based on a participatory understanding of research, involving the tenants at an early stage. The research focus is on gaining insights into the tenants’ comfort requirements, attitudes, and energy-related behaviour. Both qualitative and quantitative methods are applied, based on the Technology Acceptance Model (TAM). The core of the refurbishment strategy is a wall heating system intended to replace conventional radiators. Wall heating provides comfortable and consistent radiant heat instead of convection heat, which often causes drafts and dust turbulence. Besides comfort and health, the advantage of wall heating systems is their energy-saving operation. All apartments would be supplied by a uniform basic temperature control system (a perceived room temperature of around 18 °C / 64.4 °F), which could be adapted to individual preferences via individual heating options (e.g., infrared heating). The new heating system would affect the furnishing of the walls, as the wall surface could not be covered too extensively with cupboards or pictures. Measurements and simulations of the energy consumption of an installed wall heating system are currently being carried out in a show apartment in this neighborhood to investigate energy-related and economic aspects as well as thermal comfort. In March, interviews were conducted with a total of 12 people in 10 households. The interviews were analyzed with MAXQDA. The main issue raised in the interviews was the fear of reduced self-efficacy within one's own walls (not having sufficient individual control over the room temperature or being very limited in furnishing). Other issues concerned the impact that the construction works might have on daily life, such as noise or dirt. Despite their basically positive attitude towards a climate-friendly refurbishment concept, tenants were very concerned about the further development of the project, and they expressed a great need for information events. The results of the interviews will be used for project-internal discussions on the technical and psychological aspects of the refurbishment strategy, in order to design accompanying workshops with the tenants as well as to prepare a written survey involving all households of the neighborhood.

Keywords: energy efficiency, interviews, participation, refurbishment, residential buildings

Procedia PDF Downloads 126
240 Identification of a Lead Compound for Selective Inhibition of Nav1.7 to Treat Chronic Pain

Authors: Sharat Chandra, Zilong Wang, Ru-Rong Ji, Andrey Bortsov

Abstract:

Chronic pain (CP) therapeutic approaches have limited efficacy. As a result, doctors are prescribing opioids for chronic pain, leading to an epidemic of opioid overuse, abuse, and addiction. Therefore, the development of effective and safe CP drugs remains an unmet medical need. Voltage-gated sodium (Nav) channels are molecular targets for cardiovascular and neurological disorders. Selective Nav channel inhibitors are hard to design because there are nine closely related isoforms (Nav1.1-1.9) that share protein sequence segments. We are targeting Nav1.7, which is found in the peripheral nervous system and engaged in the perception of pain. The objective of this project was to screen a 1.5 million compound library to identify inhibitors of Nav1.7 with an analgesic effect. In this study, we designed a protocol for the identification of isoform-selective inhibitors of Nav1.7 by utilizing prior information on isoform-selective antagonists. First, a similarity search was performed; then the identified hits were docked into a binding site on the fourth voltage-sensor domain (VSD4) of Nav1.7. We used the FTrees tool for similarity searching and library generation; the generated library was docked into the VSD4 domain binding site using FlexX, and compounds were shortlisted using the FlexX score and SeeSAR HYDE scoring. Finally, the top 25 compounds were tested with molecular dynamics simulation (MDS). We reduced our list to 9 compounds based on the MDS root mean square deviation plot and obtained them from a vendor for in vitro and in vivo validation. Whole-cell patch-clamp recordings in HEK-293 cells and dorsal root ganglion neurons were conducted. We used patch pipettes to record transient Na⁺ currents. One of the compounds reduced the peak sodium currents in the Nav1.7-HEK-293 stable cell line in a dose-dependent manner, with an IC50 value of 0.74 µM. In summary, our computer-aided analgesic discovery approach allowed us to develop a pre-clinical analgesic candidate with a significant reduction in time and cost.
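
A brief sketch of the concentration-response relationship implied by the reported IC50 is given below; the Hill slope of 1 is an assumption made only for illustration, not a value from the study.

```python
import numpy as np

# Concentration-response sketch for the patch-clamp result.
# Remaining fraction of peak Na+ current at inhibitor concentration C:
#   I/I0 = IC50^h / (IC50^h + C^h)

IC50 = 0.74          # uM, reported in the abstract
hill = 1.0           # assumed Hill slope

def remaining_current(conc_um):
    """Fraction of control peak current left at a given concentration (uM)."""
    c = np.asarray(conc_um, dtype=float)
    return IC50**hill / (IC50**hill + c**hill)

for c in (0.1, 0.3, 0.74, 3.0, 10.0):
    print(f"{c:5.2f} uM -> {remaining_current(c) * 100:5.1f}% of control current")
# At the IC50 (0.74 uM) the model returns ~50% of control, matching the
# definition of half-maximal inhibition.
```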

Keywords: chronic pain, voltage-gated sodium channel, isoform-selective antagonist, similarity search, virtual screening, analgesics development

Procedia PDF Downloads 123
239 The Processing of Implicit Stereotypes in Contexts of Reading, Using Eye-Tracking and Self-Paced Reading Tasks

Authors: Magali Mari, Misha Muller

Abstract:

The present study’s objectives were to determine how diverse implicit stereotypes affect the processing of written information and linguistic inferential processes, such as presupposition accommodation. When reading a text, one constructs a representation of the described situation, which is then updated according to new outputs and based on stereotypes inscribed within society. If the new output contradicts stereotypical expectations, the representation must be corrected, resulting in longer reading times. A similar process occurs in cases of linguistic inferential processes like presupposition accommodation. Presupposition accommodation is traditionally regarded as fast, automatic processing of background information (e.g., ‘Mary stopped eating meat’ is quickly processed as Mary used to eat meat). However, very few accounts have investigated whether this process is likely to be influenced by domains of social cognition, such as implicit stereotypes. To study the effects of implicit stereotypes on presupposition accommodation, adults were recorded while they read sentences in French, combining two methods: an eye-tracking task and a classic self-paced reading task (where participants read sentence segments at their own pace by pressing a computer key). In one condition, presuppositions were activated with the French definite articles ‘le/la/les,’ whereas in the other condition, the French indefinite articles ‘un/une/des’ were used, triggering no presupposition. Using a definite article presupposes that the object has already been mentioned and is thus part of background information, whereas using an indefinite article is understood as the introduction of new information. Two types of stereotypes were examined in order to enlarge the scope of stereotypes traditionally analyzed. Study 1 investigated gender stereotypes linked to professional occupations to replicate previous findings. Study 2 focused on nationality-related stereotypes (e.g., ‘the French are seducers’ versus ‘the Japanese are seducers’) to determine if the effects of implicit stereotypes on reading are generalizable to other types of implicit stereotypes. The results show that reading is influenced by the two types of implicit stereotypes; in both studies, the reading pace slowed down when a counter-stereotype was presented. However, presupposition accommodation did not affect participants’ processing of information. Altogether these results show that (a) implicit stereotypes affect the processing of written information, regardless of the type of stereotype presented, and (b) implicit stereotypes prevail over the superficial linguistic treatment of presuppositions, which suggests faster processing of social information compared to linguistic information.

Keywords: eye-tracking, implicit stereotypes, reading, social cognition

Procedia PDF Downloads 198
238 Deep-Learning Coupled with Pragmatic Categorization Method to Classify the Urban Environment of the Developing World

Authors: Qianwei Cheng, A. K. M. Mahbubur Rahman, Anis Sarker, Abu Bakar Siddik Nayem, Ovi Paul, Amin Ahsan Ali, M. Ashraful Amin, Ryosuke Shibasaki, Moinul Zaber

Abstract:

Thomas Friedman, in his famous book, argued that the world in this 21st century is flat and will continue to become flatter. This is attributed to rapid globalization and the interdependence of humanity, which have engendered a tremendous inflow of human migration towards urban spaces. In order to keep the urban environment sustainable, policy makers need to plan based on extensive analysis of the urban environment. With the advent of high-definition satellite images, high-resolution data, computational methods such as deep neural network analysis, and hardware capable of high-speed analysis, urban planning is seeing a paradigm shift. Legacy data on urban environments are now being complemented with high-volume, high-frequency data. However, the first step of understanding urban space lies in a useful categorization of the space that is usable for data collection, analysis, and visualization. In this paper, we propose a pragmatic categorization method that is readily usable for machine analysis and show the applicability of the methodology in a developing world setting. Categorization to plan sustainable urban spaces should encompass the buildings and their surroundings. However, the state of the art is mostly dominated by classification of building structures, building types, etc., and largely represents the developed world. Hence, these methods and models are not sufficient for developing countries such as Bangladesh, where the surrounding environment is crucial for the categorization. Moreover, these categorizations propose small-scale classifications, which give limited information, have poor scalability, and are slow to compute in real time. Our proposed method is divided into two steps: categorization and automation. We categorize the urban area in terms of informal and formal spaces and take the surrounding environment into account. A 50 km × 50 km Google Earth image of Dhaka, Bangladesh, was visually annotated and categorized by an expert, and consequently a map was drawn. The categorization is based broadly on two dimensions: the state of urbanization and the architectural form of the urban environment. Consequently, the urban space is divided into four categories: 1) highly informal area; 2) moderately informal area; 3) moderately formal area; and 4) highly formal area. In total, sixteen sub-categories were identified. For semantic segmentation and automatic categorization, Google’s DeepLabV3+ model was used. The model uses atrous convolution to analyze different layers of texture and shape, which allows us to enlarge the field of view of the filters to incorporate larger context. Imagery encompassing 70% of the urban space was used to train the model, and the remaining 30% was used for testing and validation. The model is able to segment with 75% accuracy and 60% mean Intersection over Union (mIoU). In this paper, we propose a pragmatic categorization method that is readily applicable for automatic use in both developing and developed world contexts. The method can be augmented for real-time socio-economic comparative analysis among cities. It can be an essential tool for policy makers to plan future sustainable urban spaces.
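
The reported metrics can be reproduced from predicted and annotated label maps as sketched below; the tiny random maps and four-class setup are placeholders for the actual DeepLabV3+ outputs and the sixteen sub-categories.

```python
import numpy as np

# Pixel accuracy and mean Intersection-over-Union (mIoU) sketch.
# Rows of the confusion matrix are ground-truth classes, columns are predictions.

NUM_CLASSES = 4      # e.g. highly/moderately informal, moderately/highly formal

rng = np.random.default_rng(0)
y_true = rng.integers(0, NUM_CLASSES, size=(64, 64))     # annotated label map (placeholder)
y_pred = rng.integers(0, NUM_CLASSES, size=(64, 64))     # model prediction (placeholder)

conf = np.zeros((NUM_CLASSES, NUM_CLASSES), dtype=np.int64)
np.add.at(conf, (y_true.ravel(), y_pred.ravel()), 1)     # accumulate the confusion matrix

pixel_accuracy = np.trace(conf) / conf.sum()
intersection = np.diag(conf)
union = conf.sum(axis=0) + conf.sum(axis=1) - intersection
iou = intersection / np.maximum(union, 1)                # per-class IoU
print(f"pixel accuracy = {pixel_accuracy:.3f}, mIoU = {iou.mean():.3f}")
```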

Keywords: semantic segmentation, urban environment, deep learning, urban building, classification

Procedia PDF Downloads 191
237 Intensive Neurophysiological Rehabilitation System: New Approach for Treatment of Children with Autism

Authors: V. I. Kozyavkin, L. F. Shestopalova, T. B. Voloshyn

Abstract:

Introduction: Rehabilitation of children with autism is a pressing issue in psychiatry and neurology, owing to the constantly increasing number of children with autism spectrum disorders (ASD). Existing rehabilitation approaches in the treatment of children with autism improve their medico-social and socio-psychological adjustment. Experience with the treatment of different kinds of autistic disorders at the International Clinic of Rehabilitation (ICR) reveals the necessity of a complex, intensive approach to treating this condition and a wider implementation of the Kozyavkin method for the treatment of children with ASD. Methods: 19 children aged from 3 to 14 years were examined. They were diagnosed with autism (F84.0) with comorbid neurological pathology (from pyramidal insufficiency to para- and tetraplegia). All patients underwent rehabilitation at ICR for two weeks, where the INRS approach was used. INRS included methods such as biomechanical correction of the spine, massage, physical therapy, joint mobilization, and wax-paraffin applications. These were supplemented by art therapy, ergotherapy, rhythmical group exercises, computer game therapy, team Olympic games, and other methods for improving the motivation and social integration of the child. Estimation of efficacy was conducted using parents' questionnaires, administered twice: at the onset of the INRS rehabilitation course and two weeks afterward. For the efficacy assessment of the rehabilitation of autistic children at ICR, a standardized tool was used, namely the Autism Treatment Evaluation Checklist (ATEC). This scale was selected because any rehabilitation approach for a child with autism can be assessed using it. Results: Before the onset of INRS treatment, the mean ATEC score was 64.75±9.23, which reveals severe communication, speech, socialization, and behavioral impairments in the examined children. After the end of the rehabilitation course, the mean score was 56.5±6.7, which indicates positive dynamics in comparison to the onset of rehabilitation. Overall, improvement of the psychoemotional state occurred in 90% of cases. The most significant changes occurred in the domains of speech (16.5 before and 14.5 after treatment), socialization (15.1 before and 12.5 after), and behavior (20.1 before and 17.4 after). Conclusion: As a result of the INRS rehabilitation course, a reduction of autistic symptoms was noted. In particular, improvements in speech were observed (children began to pronounce new syllables and words), there was some decrease in signs of destructiveness, the quality of contact with surrounding people improved, and new self-care skills appeared. The prospect of the study is a further, deeper examination of INRS according to evidence-based medicine standards and an assessment of its usefulness in the treatment of autism and ASD.

Keywords: intensive neurophysiological rehabilitation system (INRS), International Clinic of Rehabilitation, ASD, rehabilitation

Procedia PDF Downloads 169
236 Comparison of Spiking Neuron Models in Terms of Biological Neuron Behaviours

Authors: Fikret Yalcinkaya, Hamza Unsal

Abstract:

To understand how neurons work, it is necessary to combine experimental studies in neural science with numerical simulations of neuron models in a computer environment. In this regard, the simplicity and applicability of spiking neuron modeling functions have been of great interest in computational and numerical neuroscience in recent years. Spiking neuron models can be classified by the various neuronal behaviours they exhibit, such as spiking and bursting. These classifications are important for researchers working in theoretical neuroscience. In this paper, three different spiking neuron models based on first-order differential equations are discussed and compared: the Izhikevich model, the Adaptive Exponential Integrate-and-Fire (AEIF) model, and the Hindmarsh-Rose (HR) model. First, the physical meanings, derivatives, and differential equations of each model are provided and simulated in the Matlab environment. Then, by selecting appropriate parameters, the models were examined visually in Matlab, with the aim of demonstrating which model can simulate well-known biological neuron behaviours such as tonic spiking, tonic bursting, mixed-mode firing, spike frequency adaptation, resonator, and integrator. As a result, the Izhikevich model has been shown to reproduce regular spiking, chattering (continuous bursting), intrinsically bursting, thalamo-cortical, low-threshold spiking, and resonator behaviours. The Adaptive Exponential Integrate-and-Fire model has been able to produce firing patterns such as regular (tonic) spiking, adaptation, initial bursting, regular bursting, delayed spiking, delayed regular bursting, transient spiking, and irregular spiking. The Hindmarsh-Rose model showed three different dynamic neuron behaviours: spiking, bursting, and chaotic. From these results, the Izhikevich cell model may be preferred due to its ability to reflect the true behaviour of the nerve cell, its ability to produce different types of spikes, and its suitability for use in larger-scale brain models. The most important reason for choosing the Adaptive Exponential Integrate-and-Fire model is that it can create rich firing patterns with fewer parameters. The chaotic behaviours of the Hindmarsh-Rose neuron model, like those of some other chaotic systems, are thought to be applicable in many scientific and engineering fields such as physics, secure communication, and signal processing.
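
As a concrete reference, a minimal forward-Euler sketch of the Izhikevich (2003) model with its regular-spiking parameters is shown below; the injected current, step size, and duration are illustrative choices rather than the paper's Matlab settings.

```python
import numpy as np

# Izhikevich (2003) spiking neuron model, forward-Euler sketch.
# Changing (a, b, c, d) reproduces the other firing classes compared in the
# paper (bursting, chattering, low-threshold spiking, resonator, ...).

a, b, c, d = 0.02, 0.2, -65.0, 8.0      # regular-spiking cortical cell parameters
dt, T, I = 0.25, 1000.0, 10.0           # time step (ms), duration (ms), input current

v, u = -65.0, b * -65.0                 # membrane potential and recovery variable
spike_times = []

for step in range(int(T / dt)):
    t = step * dt
    # Model equations:
    #   v' = 0.04 v^2 + 5 v + 140 - u + I
    #   u' = a (b v - u)
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * (a * (b * v - u))
    if v >= 30.0:                       # spike detected: reset v, bump u
        spike_times.append(t)
        v = c
        u += d

rate = 1000.0 * len(spike_times) / T
print(f"{len(spike_times)} spikes in {T:.0f} ms (mean rate {rate:.1f} Hz)")
```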

Keywords: Izhikevich, adaptive exponential integrate fire, Hindmarsh Rose, biological neuron behaviours, spiking neuron models

Procedia PDF Downloads 180
235 Improving Student Retention: Enhancing the First Year Experience through Group Work, Research and Presentation Workshops

Authors: Eric Bates

Abstract:

Higher education is recognised as being of critical importance in Ireland and has been linked as a vital factor to national well-being. Statistics show that Ireland has one of the highest rates of higher education participation in Europe. However, student retention and progression, especially in Institutes of Technology, is becoming an issue as rates of non-completion rise. Both within Ireland and across Europe, student retention is seen as a key performance indicator for higher education, and with these increasing rates the Irish higher education system needs to be flexible and adapt to the situation it now faces. The author is a Programme Chair on a Level 6 full-time undergraduate programme, and experience to date has shown that first-year undergraduate students take some time to identify themselves as a group within the setting of a higher education institute. Despite being part of a distinct class on a specific programme, some individuals can feel isolated as they take the first step into higher education. Such feelings can contribute to students eventually dropping out. This paper reports on an ongoing initiative that aims to accelerate the bonding experience of a distinct group of first-year undergraduates on a programme which has a high rate of non-completion. This research sought to engage the students in dynamic interactions with their peers to quickly evolve a group sense of coherence. Two separate modules – a Research Module and a Communications Module – delivered by the researcher were linked across two semesters. Students were allocated into random groups, and each group was given a topic to be researched. There were six topics – essentially the six sub-headings of the DIT Graduate Attribute Statement. The research took place in a computer lab, and students also used the library. The output from this was a document that formed part of the submission for the Research Module. In the second semester, the groups then had to make a presentation of their findings in which each student spoke for a minimum amount of time. Presentation workshops formed part of that module, and students were given the opportunity to practice their presentation skills. These presentations were video recorded to enable feedback to be given. Although this was a small-scale study, preliminary results found a strong sense of coherence among this particular cohort, and feedback from the students was very positive. Other findings indicate that spreading the initiative across two semesters may have been an inhibitor. Future challenges include spreading such initiatives college-wide and indeed sector-wide.

Keywords: first year experience, student retention, group work, presentation workshops

Procedia PDF Downloads 228
234 Examining Employee Social Intrapreneurial Behaviour (ESIB) in Kuwait: Pilot Study

Authors: Ardita Malaj, Ahmad R. Alsaber, Bedour Alboloushi, Anwaar Alkandari

Abstract:

Organizations worldwide, particularly in Kuwait, are concerned with implementing a progressive workplace culture and fostering social innovation behaviours. The main aim of this research is to examine and establish a thorough comprehension of the relationship between an inventive organizational culture, employee intrapreneurial behaviour, authentic leadership, employee job satisfaction, and employee job commitment in the manufacturing sector of Kuwait, which is a developed economy. The literature review analyses the core concepts and their related areas by scrutinizing their definitions, dimensions, and importance to uncover any deficiencies in existing research, and the examination of relevant research uncovered major gaps in understanding. This study examines the reliability and validity of a newly developed questionnaire designed to identify the appropriate applications for a large-scale investigation. A preliminary investigation was carried out with a sample of 36 respondents selected randomly from a pool of 223. SPSS was utilized to calculate the percentages of the participants' demographic characteristics, assess the credibility of the measurements, evaluate internal consistency, validate agreement, and determine Pearson's correlations. The results indicated that the majority of participants were male (66.7%), aged between 35 and 44 (38.9%), and possessed a bachelor's degree (58.3%). Approximately 94.4% of the participants were employed full-time. 72.2% of the participants were employed in the electrical, computer, and ICT sector, whilst 8.3% worked in the metal industry. Out of all the departments, the human resource department had the highest level of engagement, making up 13.9% of the total. Most participants (36.1%) possessed intermediate or advanced levels of experience, whilst 21% were classified as entry-level. Furthermore, 8.3% of individuals were categorized as first-level management, 22.2% as middle management, and 16.7% as executive or senior management. Around 19.4% of the participants had over a decade of professional experience. The Pearson correlation coefficients for all five components ranged from 0.4009 to 0.7183. The results indicate that all elements of the questionnaire were effectively verified, with Cronbach's alpha predominantly exceeding 0.6, the criterion commonly accepted by researchers. Therefore, the work on the larger-scale testing and analysis could proceed.
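
For illustration, the internal-consistency check can be reproduced outside SPSS as sketched below; the Likert-style responses are fabricated solely to show the Cronbach's alpha calculation, and the 0.6 threshold is the criterion cited in the abstract.

```python
import numpy as np

# Cronbach's alpha sketch: alpha = k/(k-1) * (1 - sum(item variances)/var(total score))

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
base = rng.integers(2, 5, size=(36, 1))                       # shared attitude signal
responses = np.clip(base + rng.integers(-1, 2, size=(36, 5)), 1, 5)  # 5-item Likert scale

alpha = cronbach_alpha(responses)
print(f"Cronbach's alpha = {alpha:.3f} ({'acceptable' if alpha > 0.6 else 'below 0.6'})")
```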

Keywords: pilot study, ESIB, innovative organizational culture, Kuwait, validation

Procedia PDF Downloads 32
233 GIS-Based Flash Flood Runoff Simulation Model of Upper Teesta River Basin - Using ASTER DEM and Meteorological Data

Authors: Abhisek Chakrabarty, Subhraprakash Mandal

Abstract:

Flash floods are among the most catastrophic natural hazards in the mountainous regions of India. The recent flood on the Mandakini River at Kedarnath (14-17 June 2013) is a classic example of a flash flood that devastated Uttarakhand, killing thousands of people. The disaster was an integrated effect of high-intensity rainfall, the sudden breach of Chorabari Lake, and very steep topography. Every year, flash floods occur in the Himalayan region due to intense rainfall over a short period of time, cloud bursts, glacial lake outbursts, and the collapse of artificial check dams, which cause high river flows. In the Sikkim-Darjeeling Himalaya, one of the probable flash flood occurrence zones is the Teesta watershed. The Teesta River is a right tributary of the Brahmaputra, draining a mountain area of approximately 8600 sq. km. It originates in the Pauhunri massif (7127 m). The total length of the mountain section of the river amounts to 182 km. The Teesta is characterized by a complex hydrological regime: the river is fed not only by precipitation but also by melting glaciers and snow as well as groundwater. The present study describes an attempt to model surface runoff in the upper Teesta basin, which is directly related to catastrophic flood events, by creating a system based on GIS technology. The main objective was to construct a direct unit hydrograph for an excess rainfall by estimating the stream flow response at the outlet of the watershed. Specifically, the methodology was based on the creation of a spatial database in a GIS environment and on data editing. Moreover, rainfall time-series data were collected from the Indian Meteorological Department and processed in order to calculate flow time and runoff volume. Apart from the meteorological data, background data such as topography, drainage network, land cover, and geological data were also collected. Clipping of the watershed from the entire area and streamline generation for the Teesta watershed were carried out, and cross-sectional profiles were plotted across the river at various locations from ASTER DEM data using ERDAS IMAGINE 9.0 and ArcGIS 10.0 software. The analysis of different hydraulic models to detect flash flood probability was carried out using HEC-RAS, Flow-2D, and HEC-HMS software, which were of great importance in achieving the final result. With an input rainfall intensity above 400 mm per day for three days, the flood runoff simulation models show outbursts of lakes and check dams, individually or in combination with runoff, causing severe damage to downstream settlements. Model output shows that 313 sq. km were found to be most vulnerable to flash floods, including Melli, Jourthang, Chungthang, and Lachung, and 655 sq. km moderately vulnerable, including Rangpo, Yathang, Dambung, Bardang, Singtam, Teesta Bazar, and the Thangu Valley. The model was validated by inserting the rainfall data of a flood event that took place in August 1968, and 78% of the actual flooded area was reflected in the output of the model. Lastly, preventive and curative measures were suggested to reduce the losses from a probable flash flood event.
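
The direct-runoff step described above amounts to convolving the excess-rainfall hyetograph with the derived unit hydrograph; the sketch below shows that step with assumed ordinates and rainfall pulses, not the study's Teesta values.

```python
import numpy as np

# Direct runoff hydrograph = discrete convolution of excess rainfall with
# the unit hydrograph ordinates (values below are illustrative assumptions).

# 1-hour unit hydrograph ordinates (m^3/s per cm of excess rainfall), assumed
uh = np.array([0.0, 12.0, 35.0, 60.0, 48.0, 30.0, 18.0, 9.0, 4.0, 1.0])

# Excess rainfall hyetograph (cm per hour) for an intense three-hour burst, assumed
excess_rain = np.array([1.5, 3.0, 2.0])

drh = np.convolve(excess_rain, uh)      # direct runoff hydrograph at the outlet

for hour, q in enumerate(drh):
    print(f"hour {hour:2d}: Q = {q:7.1f} m^3/s")
print(f"peak discharge = {drh.max():.1f} m^3/s at hour {int(drh.argmax())}")
```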

Keywords: flash flood, GIS, runoff, simulation model, Teesta river basin

Procedia PDF Downloads 317
232 The Effect of Using Universal Design for Learning to Improve the Quality of Vocational Programmes for Students with Intellectual Disabilities and the Challenges Facing This Method from the Teachers' Point of View

Authors: Ohud Adnan Saffar

Abstract:

This study aims to determine the effect of using universal design for learning (UDL) to improve the quality of vocational programmes for students with intellectual disabilities (SID) and the challenges facing this method from the teachers' point of view. The significance of the study: there are comparatively few published studies on UDL in emerging nations, so this study will encourage researchers to consider new teaching approaches, and its development will contribute significant information on the cognitively disabled community on a universal scope. In order to collect and evaluate the data and verify the results, this study used a mixed research method with a two-group comparison design. To answer the study questions, we used a questionnaire, observation lists, open questions, and pre- and post-tests. Thus, the study explored the advantages and drawbacks of the UDL method and its impact on integrating SID with students without special educational needs in the same classroom. These aims were realized by developing a workshop to explain the three principles of UDL and training (16) teachers in how to apply this method to teach (12) students without special educational needs and (12) SID in the same classroom, and then collecting their opinions using the questionnaire and open questions. Finally, this research explores the effects of UDL on the teaching of professional photography skills to SID in Saudi Arabia. To achieve this goal, the performance of SID taught using the UDL method was compared with that of female students with the same challenges taught by teachers using other strategies in control and experimental groups, using observation lists and pre- and post-tests. Initial results: it is clear from the participants' responses that most answers confirmed that the use of UDL achieves the principle of inclusion of SID with students without special educational needs (93.8%). In addition, the results show that the majority of respondents consider the most important advantage of using UDL in teaching to be the creation of an interactive environment using new and varied teaching methods (56.2%), followed by UDL being useful for integrating students into general education (31.2%). Moreover, the findings indicate improved understanding through the use of new technology and the replacement of traditional teaching methods with new ones (25%). Regarding financial obstacles, the majority consider the cost high and note that no computer maintenance is available (50%), and that there are no smart devices in schools to help in implementing the programme (43.8%).

Keywords: universal design for learning, intellectual disabilities, vocational programme, the challenges facing this method

Procedia PDF Downloads 129
231 Pushover Analysis of Masonry Infilled Reinforced Concrete Frames for Performance-Based Design for Near-Field Earthquakes

Authors: Alok Madan, Ashok Gupta, Arshad K. Hashmi

Abstract:

Non-linear dynamic time history analysis is considered the most advanced and comprehensive analytical method for evaluating the seismic response and performance of multi-degree-of-freedom building structures under earthquake ground motions. However, effective and accurate application of the method requires the implementation of advanced hysteretic constitutive models of the various structural components, including masonry infill panels. Sophisticated computational research tools that incorporate realistic hysteresis models for non-linear dynamic time-history analysis are not popular among professional engineers, as they are not only difficult to access but also complex and time-consuming to use. Moreover, commercial computer programs for structural analysis and design that are acceptable to practicing engineers do not generally integrate advanced hysteretic models that can accurately simulate the hysteresis behavior of structural elements with a realistic representation of strength degradation, stiffness deterioration, energy dissipation and ‘pinching’ under cyclic load reversals in the inelastic range of behavior. In this scenario, push-over or non-linear static analysis methods have gained significant popularity, as they can be employed to assess the seismic performance of building structures while avoiding the complexities and difficulties associated with non-linear dynamic time-history analysis, and thus offer a practical and efficient alternative for rationally evaluating seismic demands. The present paper is based on an analytical investigation of the effect of the distribution of masonry infill panels over the elevation of planar masonry infilled reinforced concrete (R/C) frames on the seismic demands, using capacity spectrum procedures that implement non-linear static (pushover) analysis in conjunction with the response spectrum concept. An important objective of the present study is to numerically evaluate the adequacy of the capacity spectrum method using pushover analysis for performance-based design of masonry infilled R/C frames for near-field earthquake ground motions.
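For orientation, capacity spectrum procedures of this kind convert the pushover (base shear versus roof displacement) curve into spectral acceleration-displacement coordinates; a commonly used ATC-40-type form of this conversion, given here for reference rather than taken from the paper, is

$$ S_a = \frac{V/W}{\alpha_1}, \qquad S_d = \frac{\Delta_{\mathrm{roof}}}{\Gamma_1\,\phi_{\mathrm{roof},1}} $$

where V is the base shear, W the seismic weight, \Delta_{\mathrm{roof}} the roof displacement, \alpha_1 the first-mode mass coefficient, \Gamma_1 the first-mode participation factor and \phi_{\mathrm{roof},1} the first-mode amplitude at the roof. The resulting capacity spectrum is overlaid on the demand (response) spectrum to locate the performance point.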

Keywords: nonlinear analysis, capacity spectrum method, response spectrum, seismic demand, near-field earthquakes

Procedia PDF Downloads 403
230 Predicting Personality and Psychological Distress Using Natural Language Processing

Authors: Jihee Jang, Seowon Yoon, Gaeun Son, Minjung Kang, Joon Yeon Choeh, Kee-Hong Choi

Abstract:

Background: Self-report multiple-choice questionnaires have been widely utilized to quantitatively measure one’s personality and psychological constructs. Despite several strengths (e.g., brevity and utility), self-report multiple-choice questionnaires have considerable inherent limitations. With the rise of machine learning (ML) and natural language processing (NLP), researchers in the field of psychology are widely adopting NLP to assess psychological constructs and predict human behaviors. However, there is a lack of connection between the work being performed in computer science and that in psychology, due to small data sets and unvalidated modeling practices. Aims: The current article introduces the study method and procedure of phase II, which uses the interview questions for the five-factor model (FFM) of personality developed in phase I. This study aims to develop semi-structured interview and open-ended questions for FFM-based personality assessment, designed with experts in the field of clinical and personality psychology (phase 1), and to collect personality-related text data using the interview questions together with self-report measures of personality and psychological distress (phase 2). The purpose of the study includes examining the relationship between the natural language data obtained from the interview questions measuring the FFM personality constructs and psychological distress, in order to demonstrate the validity of natural language-based personality prediction. Methods: The phase I (pilot) study was conducted on fifty-nine native Korean adults to acquire personality-related text data from semi-structured interview and open-ended questions based on the FFM of personality. The interview questions were revised and finalized with feedback from an external expert committee consisting of personality and clinical psychologists. Based on the established interview questions, a total of 425 Korean adults were recruited using a convenience sampling method via an online survey. The text data collected from the interviews were analyzed using natural language processing. The results of the online survey, including demographic data, depression, anxiety, and personality inventories, were analyzed together in the model to predict individuals’ FFM of personality and level of psychological distress (phase 2).
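To make the prediction step concrete, the sketch below pairs interview transcripts with a self-reported trait score and fits a generic TF-IDF plus ridge-regression baseline in scikit-learn; this pipeline, the variable names, and the toy data are illustrative assumptions rather than the authors' actual model, which would also need a Korean-aware tokenizer.

```python
# Illustrative baseline for natural-language-based trait prediction: TF-IDF text
# features feeding a ridge regression. The pipeline, toy data, and variable names
# are assumptions for illustration, not the authors' model; real Korean transcripts
# would need a morphological tokenizer plugged into TfidfVectorizer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical paired data: one interview transcript and one self-reported
# trait score (e.g., extraversion) per participant (n = 425 in the study).
transcripts = [
    "I enjoy meeting new people and organising group activities.",
    "I prefer quiet evenings and rarely start conversations.",
]
extraversion_scores = [4.2, 2.1]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), max_features=5000),
    Ridge(alpha=1.0),
)
model.fit(transcripts, extraversion_scores)

# Predict the trait score for a held-out transcript.
print(model.predict(["I like hosting parties for my friends."]))
```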

Keywords: personality prediction, psychological distress prediction, natural language processing, machine learning, the five-factor model of personality

Procedia PDF Downloads 79
229 Voices of Dissent: Case Study of a Digital Archive of Testimonies of Political Oppression

Authors: Andrea Scapolo, Zaya Rustamova, Arturo Matute Castro

Abstract:

The “Voices in Dissent” initiative aims at collecting and making available in digital format testimonies, letters, and other narratives produced by victims of political oppression from different geographical spaces across the Atlantic. By recovering silenced voices behind the official narratives, this open-access online database will provide indispensable tools for rewriting the history of authoritarian regimes from the margins as memory debates continue to provoke controversy among academic and popular transnational circles. In providing an extensive database of non-hegemonic discourses in a variety of political and social contexts, the project will complement existing European and Latin American studies and invite further interdisciplinary and transnational research. This digital resource will be available to academic communities and the general audience and will be organized geographically and chronologically. “Voices in Dissent” will offer a first comprehensive study of these personal accounts of persecution and repression against determined historical backgrounds and of their impact on collective memory formation in contemporary societies. The digitalization of these texts will make it possible to run metadata analyses and adopt comparatist approaches for a broad range of research endeavors. Most of the testimonies included in our archive are testimonies of trauma: the trauma of exile, imprisonment, torture, humiliation, censorship. Research on trauma has now reached critical mass and offers a broad spectrum of critical perspectives. By putting together testimonies from different geographical and historical contexts, our project will provide readers and scholars with an extraordinary opportunity to investigate how culture shapes individual and collective memories and provides or denies resources to make sense of and cope with trauma. For scholars dealing with the epistemological and rhetorical analysis of testimonies, an online open-access archive will prove particularly beneficial for testing theories on truth status and the formation of belief as well as for studying the articulation of discourse. Another important aspect of this project is its pedagogical application, since it will contribute to the creation of Open Educational Resources (OER) to support students and educators worldwide. Through collaborations with our library system, the archive will form part of the Digital Commons database. The texts collected in this online archive will be made available in the original languages as well as in English translation. They will be accompanied by a critical apparatus that contextualizes them historically by providing relevant background information and bibliographical references. All these materials can serve as a springboard for a broad variety of educational projects and classroom activities, and they can also be used to design specific content courses or modules. In conclusion, the desirable outcomes of the “Voices in Dissent” project are: 1. the collection and digitalization of political dissent testimonies; 2. the building of a network of scholars, educators, and learners involved in the design, development, and sustainability of the digital archive; 3. the integration of the content of the archive in both research and teaching endeavors, such as the publication of scholarly articles, the design of new upper-level courses, and the integration of the materials in existing courses.
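To illustrate how the geographical and chronological organisation and the critical apparatus described above could be expressed as a machine-readable record, here is a hypothetical metadata entry; every field name and value is an illustrative assumption, not the project's actual schema.

```python
# Hypothetical metadata record for one digitized testimony. Field names and values
# are illustrative assumptions only; they are not the project's actual schema.
testimony_record = {
    "id": "vid-0001",
    "title": "Letter from prison",
    "document_type": "letter",                  # testimony, letter, memoir, ...
    "country": "Argentina",                     # supports geographical organisation
    "date": "1978-05-14",                       # supports chronological organisation
    "original_language": "es",
    "english_translation_available": True,
    "themes": ["exile", "censorship", "imprisonment"],
    "critical_apparatus": "Short historical background note with bibliographical references.",
    "access": "open access (OER)",
}
print(testimony_record["country"], testimony_record["date"])
```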

Keywords: digital archive, dissent, open educational resources, testimonies, transatlantic studies

Procedia PDF Downloads 106
228 Posts by Influencers Promoting Water Saving: The Impact of Distance and the Perception of Effectiveness on Behavior

Authors: Sancho-Esper Franco, Rodríguez Sánchez Carla, Sánchez Carolina, Orús-Sanclemente Carlos

Abstract:

Water scarcity is a reality that affects many regions of the world and is aggravated by climate change and population growth. Saving water has become an urgent need to ensure the sustainability of the planet and the survival of many communities, and youth and social networks play a key role in promoting responsible practices and adopting habits that contribute to environmental preservation. This study analyzes the persuasive capacity of messages designed to promote pro-environmental behaviors among youth. Specifically, it studies how perceived response efficacy (personal response efficacy) and the perceived distance from the source of the message influence the water-saving behavior of the audience. To do so, two communication frameworks are combined. The first is Construal Level Theory, which is based on the concept of "psychological distance": people, objects or events can be perceived as psychologically near or far, and this subjective distance, which can be social, temporal, or spatial, determines attitudes, emotions, and actions. This research focuses on the spatial and social distance generated by cultural differences between influencers and their audience, in order to understand how cultural distance can influence the persuasiveness of a message. Research on the effects of psychological distance between influencers and followers in the pro-environmental field is very limited, yet it is relevant because people can learn specific behaviors suggested by opinion leaders such as influencers on social networks. Second, different approaches to behavioral change suggest that the perceived efficacy of a behavior can explain individual pro-environmental actions. People will be more likely to adopt a new behavior if they perceive that they are capable of performing it (efficacy belief) and that their behavior will effectively contribute to solving the problem (personal response efficacy). It is also important to study the different actors (social and individual) perceived as responsible for addressing environmental problems. Specifically, we analyze to what extent the belief that individuals' water-saving actions are effective in solving the problem can influence water-saving behavior, since this perceived individual effectiveness increases people's sense of obligation and responsibility towards the problem. However, the empirical evidence in this regard presents mixed results. Our study addresses the call for experimental studies manipulating different subtypes of response effectiveness to generate robust causal evidence. Based on the above, this research analyzes whether cultural distance (local vs. international influencer) and perceived response efficacy (individual vs. collective) affect the actual behavior and the water-conservation intention of social network users. A 2 (local vs. international influencer) x 2 (individual vs. collective response efficacy) experiment is designed and estimated. The results show that a message from a local influencer appealing to individual responsibility exerts greater influence on intention and actual water-saving behavior, given the cultural closeness between influencer and follower and because the appeal to individual responsibility increases the feeling of obligation to participate in pro-environmental actions. These results offer important implications for social marketing campaigns that seek to promote water conservation.
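As a sketch of how a 2 x 2 between-subjects design like this is typically analysed, the snippet below fits a two-way ANOVA with an interaction term in statsmodels; the column names and the randomly generated outcome values are illustrative assumptions, not the study's dataset or its exact analysis.

```python
# Sketch of a two-way ANOVA for the 2 x 2 design above (influencer distance x
# response-efficacy framing). Column names and the shuffled outcome values are
# illustrative placeholders, not the study's data or exact analysis.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "influencer": ["local", "local", "international", "international"] * 30,
    "efficacy": ["individual", "collective"] * 60,
    "intention": pd.Series(range(120)).sample(frac=1, random_state=1).values / 20.0,
})

# OLS with interaction term, then a type-II ANOVA table for main and interaction effects.
model = smf.ols("intention ~ C(influencer) * C(efficacy)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```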

Keywords: social marketing, influencer, message framing, experiment, personal response efficacy, water saving

Procedia PDF Downloads 62
227 The Impact of the Macro-Level: Organizational Communication in Undergraduate Medical Education

Authors: Julie M. Novak, Simone K. Brennan, Lacey Brim

Abstract:

Undergraduate medical education (UME) curricula notably address micro-level communication (e.g., patient-provider, intercultural, inter-professional), yet frequently under-examine the role and impact of organizational communication, a more macro level. Organizational communication, however, functions as a foundation and operates through the systemic structures of an organization, thereby serving as a hidden curriculum that influences learning experiences and outcomes. Yet little research fully examines how students experience organizational communication while in medical school. Extant literature and best practices provide insufficient guidance for UME programs in particular. The purpose of this study was to map and examine current organizational communication systems and processes in a UME program. Employing a phenomenology-grounded and participatory approach, this study sought to understand the organizational communication system from medical students' perspective. The research team consisted of a core team and 13 medical student co-investigators. The research employed multiple methods, including focus groups, individual interviews, and two surveys (one reflective of the focus group questions, the other requesting students to submit ‘examples’ of communications). To provide context for student responses, nonstudent participants (faculty, administrators, and staff) were sampled, as they too express concerns about communication. Over 400 students across all cohorts and 17 nonstudents participated. Data were iteratively analyzed and checked for triangulation. Findings reveal the complex nature of organizational communication and student-oriented communications. They reveal strengths, weaknesses, gaps, and tensions that affect the program and speak to the role of organizational communication practices in shaping both climate and culture. With regard to communications, students receive multiple, simultaneous communications from multiple sources and channels, both formal (e.g., official email) and informal (e.g., social media). Students identified organizational strengths, including the desire to improve student voice and message frequency. They also identified weaknesses related to over-reliance on emails, numerous platforms with inconsistent utilization, incorrect information, insufficient transparency, assessment/input fatigue, tacit expectations, scheduling/deadlines, responsiveness, and mental health confidentiality concerns. Moreover, they noted gaps related to lack of coordination/organization, ambiguous point-persons, student ‘voice-only’ input, open communication loops, lack of core centralization and consistency, and mental health bridges. Findings also revealed organizational identity and cultural characteristics as impactful on the medical school experience. Cultural characteristics included program size, diversity, urban setting, student organizations, community engagement, crisis framing, learning for exams, inefficient bureaucracy, and professionalism. Moreover, they identified system structures that do not always leverage cultural strengths or reduce cultural problematics. Based on the results, opportunities for productive change are identified. These include leadership visibly supporting and enacting overall organizational narratives, making greater efforts to consistently ‘close the loop’, regularly sharing how student input effects change, employing crisis communication strategies more often, strengthening communication infrastructure, ensuring structures facilitate effective operations and change efforts, and highlighting change efforts in informational communication. Organizational communication and communications are not soft skills or of secondary concern within organizations; rather, they are foundational in nature and serve to educate and inform all stakeholders. As primary stakeholders, students and their success directly affect the accomplishment of organizational goals. This study demonstrates how inquiry into how students navigate their educational experience extends research-based knowledge and provides actionable knowledge for the improvement of organizational operations in UME.

Keywords: medical education programs, organizational communication, participatory research, qualitative mixed methods

Procedia PDF Downloads 115
226 Comparative Assessment of the Thermal Tolerance of Spotted Stemborer, Chilo partellus Swinhoe (Lepidoptera: Crambidae) and Its Larval Parasitoid, Cotesia sesamiae Cameron (Hymenoptera: Braconidae)

Authors: Reyard Mutamiswa, Frank Chidawanyika, Casper Nyamukondiwa

Abstract:

Under stressful thermal environments, insects adjust their behaviour and physiology to maintain key life-history activities and improve survival. For interacting species, whether mutualistic or antagonistic, thermal stress may affect the participants in differing ways, which may then affect the outcome of the ecological relationship. In agroecosystems, this may be the fate of relationships between insect pests and their antagonistic parasitoids under acute and chronic thermal variability. Against this background, we investigated the thermal tolerance of different developmental stages of Chilo partellus Swinhoe (Lepidoptera: Crambidae) and its larval parasitoid Cotesia sesamiae Cameron (Hymenoptera: Braconidae) using both dynamic and static protocols. In laboratory experiments, we determined lethal temperatures (upper and lower lethal temperatures) using direct plunge protocols in programmable water baths (Systronix, Scientific, South Africa), the effects of ramping rate on critical thermal limits following standardized protocols using insulated double-jacketed chambers (‘organ pipes’) connected to a programmable water bath (Lauda Eco Gold, Lauda DR.R. Wobser GMBH and Co. KG, Germany), supercooling points (SCPs) following dynamic protocols using a Pico logger connected to a programmable water bath, and heat knock-down time (HKDT) and chill-coma recovery time (CCRT) following static protocols in climate chambers (HPP 260, Memmert GmbH + Co. KG, Germany) connected to a camera (HD Covert Network Camera, DS-2CD6412FWD-20, Hikvision Digital Technology Co., Ltd, China). When exposed for two hours to a static temperature, lower lethal temperatures ranged from -9 to 6ºC, -14 to -2ºC and -1 to 4ºC, while upper lethal temperatures ranged from 37 to 48ºC, 41 to 49ºC and 36 to 39ºC for C. partellus eggs, C. partellus larvae and C. sesamiae adults, respectively. Faster heating rates improved critical thermal maxima (CTmax) in C. partellus larvae and in adult C. partellus and C. sesamiae. Lower cooling rates improved critical thermal minima (CTmin) in C. partellus and C. sesamiae adults while compromising CTmin in C. partellus larvae. The mean SCPs for C. partellus larvae, pupae and adults were -11.82±1.78ºC, -10.43±1.73ºC and -15.75±2.47ºC, respectively, with adults having the lowest SCPs. Heat knock-down time and chill-coma recovery time varied significantly between C. partellus larvae and adults: larvae had a higher HKDT than adults, while the latter recovered significantly faster following chill-coma. The current results suggest developmental-stage differences in C. partellus thermal tolerance (with respect to lethal temperatures and critical thermal limits) and a compromised temperature tolerance of the parasitoid C. sesamiae relative to its host, suggesting potential asynchrony between host and parasitoid population phenology and, consequently, reduced biocontrol efficacy under global change. These results have broad implications for biological pest management and insect-natural enemy interactions under rapidly changing thermal environments.

Keywords: chill-coma recovery time, climate change, heat knock-down time, lethal temperatures, supercooling point

Procedia PDF Downloads 238
225 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning

Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher

Abstract:

Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and multiple sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve (I) mapping the magnetic field into magnetic susceptibility and (II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via the injection of prior belief. The end result of Process II depends heavily on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain via a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640, 0.4 mm³), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties and iron concentration. These tissue property values were randomly selected from probability distribution functions derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data but larger than the datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was used to train data-driven deep learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested both on synthetic data not used in training and on real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to directly learn iron concentrations in areas of interest more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the Deep QSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.
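To make the single-step mapping concrete, here is a minimal sketch of a 3D convolutional U-Net that maps multi-channel MRI volumes directly to an iron-concentration map; the channel counts, network depth, patch size and random tensors are illustrative assumptions, since the abstract does not specify the authors' exact architecture.

```python
# Minimal sketch of a 3D convolutional U-Net mapping MRI volumes to an
# iron-concentration map. Channel counts, depth, patch size and the random
# tensors are illustrative assumptions, not the authors' exact architecture.
import torch
import torch.nn as nn

def block(c_in, c_out):
    """Two 3D conv layers with batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, 3, padding=1), nn.BatchNorm3d(c_out), nn.ReLU(inplace=True),
        nn.Conv3d(c_out, c_out, 3, padding=1), nn.BatchNorm3d(c_out), nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    def __init__(self, in_ch=4, out_ch=1):      # e.g. 4 MRI channels in, 1 iron map out
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, 16), block(16, 32)
        self.bottleneck = block(32, 64)
        self.pool = nn.MaxPool3d(2)
        self.up2 = nn.ConvTranspose3d(64, 32, 2, stride=2)
        self.dec2 = block(64, 32)
        self.up1 = nn.ConvTranspose3d(32, 16, 2, stride=2)
        self.dec1 = block(32, 16)
        self.head = nn.Conv3d(16, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

# One training step on a synthetic patch pair (patches keep memory manageable).
net = UNet3D()
mri_patch = torch.randn(2, 4, 64, 64, 64)       # batch of synthetic MRI patches
iron_patch = torch.randn(2, 1, 64, 64, 64)      # corresponding iron-concentration maps
loss = nn.MSELoss()(net(mri_patch), iron_patch)
loss.backward()
```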

Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping

Procedia PDF Downloads 136
224 Social Skills as a Significant Aspect of a Successful Start of Compulsory Education

Authors: Eva Šmelová, Alena Berčíková

Abstract:

The issue of school maturity and children's readiness for a successful start to compulsory education is one of the long-term monitored areas, especially in the context of education and psychology. In the context of the curricular reform in the Czech Republic, the issue has recently gained importance. Analyses of research in this area suggest the lack of a broader overview of indicators informing about the current level of children's school maturity and school readiness; instead, various studies address partial issues. Between 2009 and 2013, a research study was performed at the Faculty of Education, Palacký University Olomouc (Czech Republic), focusing on children's maturity and readiness for compulsory education. In that study, social skills were of marginal interest; the main focus was on the mental area. The present study follows directly from this previous research; its objective is to identify the level of school maturity and school readiness in selected characteristics of social skills as part of the adaptation process after enrolment in compulsory education. In this context, the following research question has been formulated: during the process of adaptation to the school environment, which social skills are weakened? The method applied was observation, for the purposes of which the authors developed a research tool, a record sheet with 11 items representing social skills that a child should have by the end of preschool education. The items were assessed by first-grade teachers at the beginning of the school year. The degree of achievement and the intensity of the skills were assessed for each child using an assessment scale. The authors monitored a total of three independent variables (gender, postponement of school attendance, participation in inclusive education). The effect of these independent variables was monitored using 11 dependent variables, represented by the results achieved in the selected social skills. Statistical data processing was assisted by the Computer Centre of Palacký University Olomouc; statistical calculations were performed using SPSS v. 12.0 for Windows and StatSoft STATISTICA CR, Cz (a software system for data analysis). The research sample comprised 115 children. In their paper, the authors present the results of the research and point to possible areas of further investigation. They also highlight possible risks associated with weakened social skills.

Keywords: compulsory education, curricular reform, educational diagnostics, pupil, school curriculum, school maturity, school readiness, social skills

Procedia PDF Downloads 251
223 Seismic Impact and Design on Buried Pipelines

Authors: T. Schmitt, J. Rosin, C. Butenweg

Abstract:

Seismic design of buried pipeline systems for energy and water supply is important not only for plant and operational safety but in particular for the maintenance of supply infrastructure after an earthquake. Past earthquakes have shown the vulnerability of pipeline systems: after the Kobe earthquake in Japan in 1995, for instance, the water supply in some regions was interrupted for almost two months. The present paper discusses specific issues of seismic wave impact on buried pipelines, describes calculation methods, proposes approaches and gives calculation examples. Buried pipelines are exposed to different effects of seismic impacts. This paper addresses the effects of transient displacement differences and the resulting stresses within the pipeline due to the wave propagation of the earthquake. Other effects are permanent displacements due to fault rupture displacements at the surface, soil liquefaction, landslides and seismic soil compaction. The presented model can also be used to calculate fault-rupture-induced displacements. Based on a three-dimensional finite element model, parameter studies are performed to show the influence of several parameters such as incoming wave angle, wave velocity, soil depth and selected displacement time histories. In the computer model, the interaction between the pipeline and the surrounding soil is modeled with non-linear soil springs. A propagating wave is simulated acting on the pipeline at discrete points, independently in time and space. The resulting stresses are mainly caused by displacement differences between neighboring pipeline segments and by soil-structure interaction. The calculation examples focus on pipeline bends as the most critical parts. Special attention is given to the calculation of long-distance heat pipeline systems. Here, expansion bends are arranged at regular intervals to accommodate movements of the pipeline due to high temperature. Such expansion bends are usually designed with small bending radii, which in the event of an earthquake lead to high bending stresses at the cross-section of the pipeline. Therefore, Karman's elasticity factors, as well as the stress intensity factors for curved pipe sections, must be taken into account. The seismic verification of the pipeline for wave propagation in the soil can be achieved by observing normative strain criteria. Finally, an interpretation of the results and recommendations are given, taking into account the most critical parameters.
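For orientation, the normative strain criteria mentioned above are commonly checked against Newmark-type first estimates of the wave-passage effects; in a widely used form, given here for reference rather than taken from this paper, and with coefficients that depend on wave type and incidence angle,

$$ \varepsilon_{a,\max} \approx \frac{v_{\max}}{\alpha_{\varepsilon}\, c}, \qquad \kappa_{\max} \approx \frac{a_{\max}}{c^{2}} $$

where v_{\max} and a_{\max} are the peak ground velocity and acceleration, c is the apparent wave propagation velocity along the pipeline axis, and \alpha_{\varepsilon} is a wave-type coefficient (on the order of 1 for compression and Rayleigh waves and 2 for shear waves). The resulting axial strain and curvature are then compared with the allowable strains of the pipe material and joints.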

Keywords: buried pipeline, earthquake, seismic impact, transient displacement

Procedia PDF Downloads 187
222 Useful Lessons from the Success of Physics Outreach in Jamaica

Authors: M. J. Ponnambalam

Abstract:

Physics Outreach in Jamaica has nearly tripled the number of students doing Introductory Calculus-based Physics at the University of the West Indies (UWI, Mona) within 5 years, and has thus shown the importance of Physics Teaching & Learning in Informal Settings. In 1899, the first president of the American Physical Society called Physics “the science above all sciences.” Sure enough, exactly one hundred years later, Time magazine proclaimed Albert Einstein “Person of the Century.” Unfortunately, Physics seems to be losing that glow in this century. Many countries, big and small, are finding it difficult to attract bright young minds to pursue Physics. At UWI, Mona, the number of students in first-year Physics dropped to an all-time low of 81 in 2006, from more than 200 in the nineteen eighties, spelling disaster for the Physics Department! The author of this paper launched an aggressive Physics Outreach that same year, aimed at conveying to the students and the general public the following messages: i) Physics is an exciting intellectual enterprise, full of fun and delight. ii) Physics is very helpful in understanding how things like TV, CD player, car, computer, X-ray, CT scan, MRI, etc. work. iii) The critical and analytical thinking developed in the study of Physics is of inestimable value in almost any field. iv) Physics is the core subject for Science and Technology, and hence for national development. Science Literacy is a ‘must’ for any nation in the 21st century. Hence, the Physics Outreach aims at reaching out to every person, through every possible means. The Outreach work is split into the following target groups: i) Universities, ii) High Schools, iii) Middle Schools, iv) Primary Schools, v) General Public, and vi) Physics teachers in High Schools. The programmes, tools and best practices are adjusted to suit each target group, and the feedback from each group is highly positive. For example, in February 2014 the author conducted in 3 Primary Schools the interactive show ‘Science Is Fun’ to stimulate 290 students’ interest in Science, with lively and interesting demonstrations and experiments presented in a highly interactive way using dramatization, story-telling and dancing. The feedback: 47% found the Show ‘Exciting’ and 51% found it ‘Interesting’, totaling an impressive 98%. When asked to describe the Show in their own words, the leading 4 responses were ‘Fun’ (26%), ‘Interesting’ (20%), ‘Exciting’ (14%) and ‘Educational’ (10%), confirming that ‘fun’ and ‘education’ can go together. The success of Physics Outreach in Jamaica verifies the following words of Chodos, Associate Executive Officer of the American Physical Society: “If we could get members to go to K-12 schools and levitate a magnet or something, we really think these efforts would bring great rewards.”

Keywords: physics education, physics popularization, UWI, Jamaica

Procedia PDF Downloads 407
221 An Analysis of Gamification in the Post-Secondary Classroom

Authors: F. Saccucci

Abstract:

Gamification has now started to take root in the post-secondary classroom. Educators have learned much about gamification to date, but there is still a great deal to learn. One definition of gamification is the ability to engage post-secondary students with games that are fun and correlate to the classroom curriculum. There is no shortage of literature illustrating the advantages of gamification in the classroom. This study extends that line of thought as well as a previous study in which in-class testing, using a paired t-test, showed that gamification significantly improved students' understanding of the subject material. Gamification in the classroom can range from high-end computer-simulated software to paper-based games, both of which have advantages and disadvantages. This analysis used a paper-based game to highlight certain qualitative advantages of gamification. The paper-based game in this analysis was inexpensive, required little preparation time for the faculty member and consumed approximately 20 minutes of classroom time. Data for the study were collected through in-class student feedback surveys and narrative from the faculty member moderating the game. Students were randomly selected into groups of four. Qualitative advantages identified in this analysis included: 1. Students had a chance to meet, connect with and get to know other students. 2. Students enjoyed the gamification process, given the sense of fun and competition. 3. The post-assessment that followed the simulation game was not part of their grade calculation; therefore, it was an opportunity to participate in a low-risk activity whereby students could subsequently self-assess their understanding of the subject material. 4. In the students' view, content knowledge increased after the gamification process. These qualitative advantages contribute to the argument that there should be an attempt to use gamification in today's post-secondary classroom. The analysis also highlighted that eighty (80) percent of respondents believed twenty minutes devoted to the gamification process was appropriate; however, twenty (20) percent of respondents believed that, rather than scheduling the gamification process and its post-quiz in the last week, a review for the final exam would have been more useful. A follow-up study aims to determine whether the scheduling of the gamification correlated with the percentage of students not wanting to engage in the process. It also aims to determine at what point additional time invested in classroom gamification produces no material incremental benefit to students, and whether any correlation exists between respondents preferring not to have it at the end of the semester and students not believing the gamification process increased their curricular knowledge.

Keywords: gamification, inexpensive, non-quantitative advantages, post-secondary

Procedia PDF Downloads 211