Search results for: virtual grid
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2169

189 The Instrumentalization of Digital Media in the Context of Sexualized Violence

Authors: Katharina Kargel, Frederic Vobbe

Abstract:

Sexual online grooming is generally defined as digital interactions for the purpose of sexual exploitation of children or minors, i.e., as a process for preparing and framing sexual child abuse. Due to its conceptual history, sexual online grooming is often associated with perpetrators who are previously unknown to those affected. While the strategies of perpetrators and the perception of those affected are increasingly being investigated, the instrumentalisation of digital media has received comparatively little research attention. The present paper therefore aims to address this research gap by examining the ways in which perpetrators instrumentalise digital media. Our analyses draw on 46 case documentations and 18 interviews with those affected. The cases and the partly narrative interviews were collected by ten cooperating specialist centers working on sexualized violence in childhood and youth. For this purpose, we designed a documentation grid allowing for a detailed case reconstruction, i.e., including information on the violence, digital media use, and those affected. By using Reflexive Grounded Theory, our analyses emphasize a) the subjective benchmark of professional practitioners as well as those affected and b) the interpretative implications resulting from our researchers’ subjective and emotional interaction with the data material. It should first be noted that sexualized online grooming can result in both online and offline sexualized violence as well as hybrid forms. Furthermore, the perpetrators either come from the immediate social environment of those affected or are unknown to them. With regard to the instrumentalisation of digital media, the perpetrator-victim relationship plays a more important role than the space (online vs. offline) in which the primary violence is committed.
Perpetrators unknown to those affected instrumentalise digital media primarily to establish a sexualized system of norms, which is usually embedded in a supposed love relationship. In some cases, after an initial exchange of sexualized images or video recordings, a latent play on positions of power takes place. In the course of the grooming process, perpetrators from the immediate social environment increasingly instrumentalise digital media to establish an explicit relationship of power and dependence, which is directly determined by coercion, threats, and blackmail. Knowledge of possible vulnerabilities is strategically used in the course of maintaining contact. These findings lead to the conclusion that the motive for the crime plays an essential role in the instrumentalisation of digital media. It is therefore not surprising that it is mostly perpetrators from the immediate social environment without commercial motives who initiate a spiral of violence and stress by digitally distributing sexualized (violent) images and video recordings within the reference system of those affected.

Keywords: sexualized violence, children and youth, grooming, offender strategies, digital media

Procedia PDF Downloads 183
188 A Double Ended AC Series Arc Fault Location Algorithm Based on Currents Estimation and a Fault Map Trace Generation

Authors: Edwin Calderon-Mendoza, Patrick Schweitzer, Serge Weber

Abstract:

Series arc faults appear frequently and unpredictably in low-voltage distribution systems. Many methods have been developed to detect this type of fault, and commercial protection devices such as AFCIs (arc fault circuit interrupters) have been used successfully in electrical networks to prevent damage and catastrophic incidents such as fires. However, these devices do not allow series arc faults to be located on the line while it is in operation. This paper presents a location algorithm for series arc faults in a low-voltage indoor power line in an AC 230 V / 50 Hz home network. The method is validated through simulations in MATLAB. The fault location method uses the electrical parameters (resistance, inductance, capacitance, and conductance) of a 49 m indoor power line. The mathematical model of a series arc fault is based on an analysis of the V-I characteristics of the arc and consists essentially of two antiparallel diodes and DC voltage sources. In a first step, the arc fault model is inserted at several different positions along the line, which is modeled using lumped parameters. At both ends of the line, currents and voltages are recorded for each arc fault generated at a different distance. In a second step, a fault map trace is created using signature coefficients obtained from Kirchhoff equations, which allow a virtual decoupling of the line’s mutual capacitance. Each signature coefficient, obtained from the subtraction of estimated currents, is calculated taking into account the Discrete Fourier Transform of the currents and voltages as well as the fault distance value. These parameters are then substituted into the Kirchhoff equations. In a third step, the same procedure used to calculate the signature coefficients is employed, but this time over hypothetical fault distances at which the fault may appear; in this step the fault distance is unknown.
Iterative evaluation of the Kirchhoff equations over stepped variations of the fault distance yields a curve with a linear trend. Finally, the fault location is estimated at the intersection of the two curves obtained in steps 2 and 3. The series arc fault model is validated by comparing currents registered in simulation with real recorded currents. The model of the complete circuit is obtained for a 49 m line with a resistive load, and 11 different arc fault positions are considered for the fault map trace generation. The performance of the method and the perspectives of this work, based on the complete simulation, will be presented.
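
The final intersection step can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name, the sampled distance grid, and the use of linear interpolation between bracketing samples are all assumptions.

```python
import numpy as np

def intersect_distance(d, curve_ref, curve_hyp):
    """Estimate the fault distance where two fault-map traces cross.

    d          : hypothetical fault distances (m), monotonically increasing
    curve_ref  : signature coefficients from step 2 (known-fault sweep)
    curve_hyp  : signature coefficients from step 3 (hypothetical distances)
    Returns the linearly interpolated crossing point of the two curves.
    """
    diff = np.asarray(curve_ref, dtype=float) - np.asarray(curve_hyp, dtype=float)
    # locate the first sign change of the difference between the curves
    idx = np.where(np.diff(np.sign(diff)) != 0)[0]
    if idx.size == 0:
        raise ValueError("curves do not intersect on the sampled line")
    i = idx[0]
    # linear interpolation between the two bracketing samples
    frac = diff[i] / (diff[i] - diff[i + 1])
    return d[i] + frac * (d[i + 1] - d[i])
```

For the 49 m line of the study, `d` would be sampled along that length; the crossing of the step-2 and step-3 traces gives the estimated fault position.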

Keywords: indoor power line, fault location, fault map trace, series arc fault

Procedia PDF Downloads 137
187 BiVO₄‑Decorated Graphite Felt as Highly Efficient Negative Electrode for All-Vanadium Redox Flow Batteries

Authors: Daniel Manaye Kabtamu, Anteneh Wodaje Bayeh

Abstract:

With the development and utilization of new energy technologies, the demand for large-scale energy storage systems has become increasingly urgent. The vanadium redox flow battery (VRFB) is one of the most promising technologies for grid-scale energy storage applications because of numerous attractive features, such as long cycle life, high safety, and flexible design. However, the relatively low energy efficiency and high production cost of the VRFB still limit its practical implementation, so enhancing its energy efficiency and reducing its cost are of great interest. One of the main components of the VRFB that can significantly impact the efficiency and final cost is the electrode material, which provides the reaction sites for the redox couples (V²⁺/V³⁺ and VO²⁺/VO₂⁺). Graphite felt (GF) is a typical carbon-based material commonly employed as an electrode for the VRFB due to its low cost and good chemical and mechanical stability. However, pristine GF exhibits insufficient wettability, low specific surface area, and poor kinetic reversibility, leading to low energy efficiency of the battery. It is therefore crucial to further modify the GF electrode to improve its electrochemical performance by employing active electrocatalysts, such as less expensive metal oxides. This study successfully fabricates a low-cost, plate-like bismuth vanadate (BiVO₄) material through a simple one-step hydrothermal route, employed as an electrocatalyst to decorate the GF for use as the negative electrode in the VRFB. The experimental results show that BiVO₄-3h exhibits the optimal electrocatalytic activity and reversibility for the vanadium redox couples among all samples. The energy efficiency of the VRFB cell assembled with BiVO₄-decorated GF as the negative electrode is found to be 75.42% at 100 mA cm⁻², about 10.24% higher than that of the cell assembled with a heat-treated graphite felt (HT-GF) electrode.
The possible reasons for the activity enhancement can be ascribed to the existence of oxygen vacancies in the BiVO₄ lattice structure and the relatively high surface area of BiVO₄, which provide more active sites for facilitating the vanadium redox reactions. Furthermore, the BiVO₄-GF electrode suppresses the competing, irreversible hydrogen evolution reaction on the negative side of the cell and exhibits better wettability. Impressively, BiVO₄-GF as the negative electrode shows good stability over 100 cycles. Thus, BiVO₄-GF is a promising negative electrode candidate for practical VRFB applications.

Keywords: BiVO₄ electrocatalyst, electrochemical energy storage, graphite felt, vanadium redox flow battery

Procedia PDF Downloads 1573
186 Computational Approach to Identify Novel Chemotherapeutic Agents against Multiple Sclerosis

Authors: Syed Asif Hassan, Tabrej Khan

Abstract:

Multiple sclerosis (MS) is a chronic demyelinating autoimmune disorder of the central nervous system (CNS). Current therapies either do not halt the progression of the disease or have side effects that limit the long-term use of Disease-Modifying Therapies (DMTs). Given these treatment limitations, we focus on screening novel analogs of the available DMTs that specifically bind and inhibit the sphingosine-1-phosphate receptor 1 (S1PR1), thereby hindering lymphocyte propagation toward the CNS. The novel drug-like analog molecules should decrease the frequency of relapses (recurrences of the symptoms associated with MS) with higher efficacy and lower toxicity to the human system. In this study, an integrated approach involving a ligand-based virtual screening protocol (Ultrafast Shape Recognition with CREDO Atom Types, USRCAT) was employed to identify non-toxic, drug-like analogs of the approved DMTs. The ability of the drug-like analog molecules to cross the Blood-Brain Barrier (BBB) was estimated. In addition, molecular docking and simulation using AutoDock Vina 1.1.2 and GOLD 3.01 were performed using the X-ray crystal structure of the Mtb LprG protein to calculate the affinity and specificity of the analogs with the given LprG protein. The docking results were further confirmed by DSX (DrugScore eXtended), a robust program for evaluating the binding energy of ligands bound to the ligand-binding domain of the Mtb LprG lipoprotein; a ligand with higher predicted affinity has a more negative score. Further, non-specific ligands were screened out using the structural filter proposed by Baell and Holloway. Based on the USRCAT results, Lipinski’s values, and the toxicity and BBB analyses, RTL and CHEMBL1771640, drug-like analogs of fingolimod and BG-12 respectively, were found to be non-toxic and BBB-permeable.
The successful docking and DSX analyses showed that RTL and CHEMBL1771640 could bind to the binding pocket of the human S1PR1 receptor protein with greater affinity than their parent compound (fingolimod). In this study, we also found that all the drug-like analogs of the standard MS drugs passed the Baell and Holloway filter.
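
The "Lipinski's values" mentioned above refer to the standard rule-of-five screen for oral drug-likeness. A minimal sketch is shown below; the four thresholds are the published rule-of-five cut-offs, but the function name and the common convention of tolerating at most one violation are assumptions, not details from this study.

```python
def passes_lipinski(mol_weight, logp, h_donors, h_acceptors):
    """Lipinski rule-of-five screen: a compound is considered drug-like
    here if it violates at most one of the four rules."""
    violations = sum([
        mol_weight > 500,   # molecular weight <= 500 Da
        logp > 5,           # octanol-water partition coefficient (logP) <= 5
        h_donors > 5,       # hydrogen-bond donors <= 5
        h_acceptors > 10,   # hydrogen-bond acceptors <= 10
    ])
    return violations <= 1
```

For example, fingolimod (roughly MW 307.5, logP ~4.2, 3 donors, 3 acceptors — approximate literature values) passes, whereas a heavy, lipophilic compound with many acceptors does not.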

Keywords: antagonist, binding affinity, chemotherapeutics, drug-like, multiple sclerosis, S1PR1 receptor protein

Procedia PDF Downloads 256
185 Developing a Maturity Model of Digital Twin Application for Infrastructure Asset Management

Authors: Qingqing Feng, S. Thomas Ng, Frank J. Xu, Jiduo Xing

Abstract:

Faced with unprecedented challenges, including aging assets, lack of maintenance budget, overtaxed and inefficient usage, and public outcry for better service quality, today’s infrastructure systems have become the main focus of many metropolises pursuing sustainable urban development and improved resilience. Digital twin, one of the most innovative enabling technologies available today, may open up new ways of tackling various infrastructure asset management (IAM) problems. A digital twin application for IAM, as its name indicates, is an evolving digital model of the intended infrastructure that provides functions including real-time monitoring; what-if event simulation; and scheduling, maintenance, and management optimization based on technologies like IoT, big data, and AI. Numerous digital twin initiatives, such as 'Virtual Singapore' and 'Digital Built Britain', already exist worldwide. With digital twin technology progressively permeating the IAM field, it is necessary to consider the maturity of such applications and how institutional or industrial digital twin application processes will evolve in the future. To address the lack of such a benchmark, a draft maturity model is developed for digital twin application in the IAM field. First, an overview of current smart-city maturity models is given, on the basis of which the draft Maturity Model of Digital Twin Application for Infrastructure Asset Management (MM-DTIAM) is developed for multiple stakeholders to evaluate applications and derive informed decisions. The development process follows a systematic approach with four major procedures, namely scoping, designing, populating, and testing. Through in-depth literature review, interviews, and focus group meetings, the key domain areas are populated, defined, and iteratively tuned. Finally, a case study of several digital twin projects is conducted for verification.
The findings of the research reveal that: (i) the developed maturity model outlines five maturing levels leading to an optimised digital twin application from the aspects of strategic intent, data, technology, governance, and stakeholders’ engagement; (ii) based on the case study, levels 1 to 3 are already partially implemented in some initiatives while level 4 is on the way; and (iii) more practices are still needed to refine the draft to be mutually exclusive and collectively exhaustive in key domain areas.

Keywords: digital twin, infrastructure asset management, maturity model, smart city

Procedia PDF Downloads 157
184 AI-Powered Conversation Tools - Chatbots: Opportunities and Challenges That Present to Academics within Higher Education

Authors: Jinming Du

Abstract:

With the COVID-19 pandemic beginning in 2020, many higher education institutions and education systems turned to hybrid or fully online distance courses to maintain social distance and provide a safe virtual space for learning and teaching. However, the majority of faculty members were not well prepared for the shift to blended or distance learning, and communication frustrations are prevalent in both hybrid and fully distance courses. A systematic literature review was conducted through a comprehensive analysis of 1688 publications on the adoption of chatbots in education. This study aimed to explore instructors' experiences with chatbots in online and blended undergraduate English courses. Language learners are overwhelmed by the variety of information offered by many online sites, and recently emerged chatbots (e.g., ChatGPT) perform somewhat better than earlier technologies such as tapes, video recorders, and websites. The field of chatbots has been intensively researched, and new methods have been developed to demonstrate how students can best learn and practice a new language. However, while chatbots have been used as effective tools for communicating with business customers, in consulting, and in the medical field, they have not yet been fully explored and implemented in language education. This issue is challenging for language teachers, who need careful study and research to clarify it. Pedagogical chatbots may alleviate the perceived lack of communication and feedback from instructors by interacting naturally with students and scaffolding learners' understanding, much as educators do.
However, educators and instructors often lack the proficiency to operate this emerging AI chatbot technology effectively and require comprehensive study or structured training to attain competence. There is a gap between language teachers’ perceptions and recent advances in the application of AI chatbots to language learning. The study found that although teachers felt chatbots did a good job of giving feedback, they needed additional training to give better instructions and to let chatbots assist their teaching. Overall, teachers perceive chatbots as offering substantial assistance to English language instruction.

Keywords: artificial intelligence in education, chatbots, education and technology, education system, pedagogical chatbot, chatbots and language education

Procedia PDF Downloads 66
183 Revisiting Historical Illustrations in the Age of Digital Anatomy Education

Authors: Julia Wimmers-Klick

Abstract:

In the contemporary study of anatomy, medical students utilize a diverse array of resources, including lab handouts, lectures, and, increasingly, digital media such as interactive anatomy apps and digital images. Notably, a significant shift has occurred, with fewer students possessing traditional anatomy atlases or books, reflecting a broader trend towards digital approaches like Virtual Reality, Augmented Reality, and web-based programs. This paper seeks to explore the evolution of anatomy education by contrasting current digital tools with historical resources, such as classical anatomical illustrations and atlases, to assess their relevance and potential benefits in modern medical education. Through a comprehensive literature review, the development of anatomical illustrations is traced from the textual descriptions of Galen to the detailed and artistic representations of Da Vinci, Vesalius, and later anatomists. The examination includes how the printing press facilitated the dissemination of anatomical knowledge, transforming covert dissections into public spectacles and formalized teaching practices. Historical illustrations, often influenced by societal, religious, and aesthetic contexts, not only served educational purposes but also reflected the prevailing medical knowledge and ethical standards of their times. Critical questions are raised about the place of historical illustrations in today's anatomy curriculum. Specifically, their potential to teach critical thinking, highlight the history of medicine, and offer unique insights into past societal conditions are explored. These resources are viewed in their context, including the lack of diversity and the presence of ethical concerns, such as the use of illustrations from unethical sources like Pernkopf’s atlas. 
In conclusion, while digital tools offer innovative ways to visualize and interact with anatomical structures, historical illustrations provide irreplaceable value in understanding the evolution of medical knowledge and practice. The study advocates for a balanced approach that integrates traditional and modern resources to enrich medical education, promote critical thinking, and provide a comprehensive understanding of anatomy. Future research should investigate the optimal combination of these resources to meet the evolving needs of medical learners and the implications of the digital shift in anatomy education.

Keywords: human anatomy, historical illustrations, historical context, medical education

Procedia PDF Downloads 21
182 A Framework of Virtualized Software Controller for Smart Manufacturing

Authors: Pin Xiu Chen, Shang Liang Chen

Abstract:

A virtualized software controller is developed in this research to replace traditional hardware control units. This virtualized software controller transfers motion interpolation calculations from the motion control units of end devices to edge computing platforms, thereby reducing the end devices' computational load and hardware requirements and making maintenance and updates easier. The study also applies the concept of microservices, dividing the control system into several small functional modules that are then deployed onto a cloud data server. This reduces the interdependency among modules and enhances the overall system's flexibility and scalability. Finally, with containerization technology, the system can be deployed and started in a matter of seconds, which is more efficient than traditional virtual-machine deployment methods. Furthermore, this virtualized software controller communicates with end control devices via wireless networks, making the placement of production equipment and the redesign of processes more flexible and no longer limited by physical wiring. To handle the large data flow and maintain low-latency transmission, this study integrates 5G technology, fully utilizing its high speed, wide bandwidth, and low latency to achieve rapid and stable remote machine control. An experimental setup is designed to verify the feasibility and test the performance of this framework. The study designs a smart manufacturing site with a 5G communication architecture, serving as a field for experimental data collection and performance testing. The smart manufacturing site includes one robotic arm, three Computer Numerical Control machine tools, several Input/Output ports, and an edge computing architecture. All machinery information is uploaded to edge computing servers and cloud servers via 5G communication and the Internet of Things framework.
After analysis and computation, this information is converted into motion control commands, which are transmitted back to the relevant machinery through 5G communication. The communication time intervals at each stage are measured using the C++ chrono library to obtain the time difference for each command transmission. The relevant test results will be organized and presented in the full text.
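
The per-stage latency measurement described above (done in the study with the C++ chrono library) can be sketched in Python as follows. The context-manager pattern, the stage names, and the shared timings dictionary are illustrative assumptions, not the authors' code.

```python
import time
from contextlib import contextmanager

@contextmanager
def stage_timer(name, timings):
    """Record the wall-clock duration of one pipeline stage, in
    milliseconds, into the shared `timings` dictionary."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = (time.perf_counter() - start) * 1000.0
```

Wrapping each transmission stage (e.g. uplink, edge computation, downlink) in such a timer yields the per-stage time differences the study reports.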

Keywords: 5G, MEC, microservices, virtualized software controller, smart manufacturing

Procedia PDF Downloads 82
181 Optimization for Autonomous Robotic Construction by Visual Guidance through Machine Learning

Authors: Yangzhi Li

Abstract:

Network transfer of information and performance customization are now viable methods of digital industrial production in the era of Industry 4.0. Robot platforms and network platforms have grown more important in digital design and construction. The pressing need for novel building techniques is driven by the growing labor scarcity problem and increased awareness of construction safety. Robotic approaches in construction research are regarded as an extension of operational and production tools. Several technological theories related to autonomous robot recognition, including high-performance computing, physical system modeling, extensive sensor coordination, and deep learning on large datasets, have not yet been explored in intelligent construction, and relevant transdisciplinary theory and practice research still shows specific gaps. Optimizing high-performance computing and autonomous visual guidance technologies improves a robot's grasp of the scene and its capacity for autonomous operation. Intelligent vision guidance for industrial robots faces a serious camera calibration issue, and its use in industrial production imposes strict accuracy requirements; visual recognition systems therefore face precision challenges that directly impact the effectiveness and quality of industrial production, necessitating strengthened study of positioning precision in visual guidance and recognition technology. To best facilitate the handling of complicated components, an approach for the visual recognition of parts using machine learning algorithms is proposed. This study identifies the position of target components by detecting information at the boundaries and corners of a dense point cloud and determining the aspect ratio in accordance with guidelines for the modularization of building components.
To collect and use components, operational processing systems assign them to the same coordinate system based on their locations and postures. The RGB image's inclination detection and the depth image's verification will be used to determine the component's present posture. Finally, a virtual environment model for the robot's obstacle-avoidance route will be constructed using the point cloud information.
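
A minimal sketch of the aspect-ratio step is given below, assuming an axis-aligned bounding box over a 2-D projection of the component's point cloud. The projection, the bounding-box simplification, and the function name are assumptions introduced for illustration; the study works with dense 3-D point clouds and boundary/corner detection.

```python
import numpy as np

def component_aspect_ratio(points):
    """Aspect ratio of a component from its projected point cloud.

    points: (N, 2) array-like of x, y coordinates of the component
    outline. Returns long-side / short-side of the axis-aligned
    bounding box, a simple proxy for a modular component's shape
    (degenerate clouds with zero extent are not handled).
    """
    pts = np.asarray(points, dtype=float)
    extents = pts.max(axis=0) - pts.min(axis=0)  # bounding-box side lengths
    return extents.max() / extents.min()
```

A 4 m x 2 m panel outline, for instance, yields an aspect ratio of 2, which could then be matched against the modularization guidelines.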

Keywords: robotic construction, robotic assembly, visual guidance, machine learning

Procedia PDF Downloads 86
180 Generalized Up-downlink Transmission using Black-White Hole Entanglement Generated by Two-level System Circuit

Authors: Muhammad Arif Jalil, Xaythavay Luangvilay, Montree Bunruangses, Somchat Sonasang, Preecha Yupapin

Abstract:

Black and white holes form the entangled pair ⟨BH│WH⟩, where a white hole occurs when the particle moves at the same speed as light. The entangled black-white hole pair is at the center, with the radian between the gap. When the speed of particle motion is slower than light, the black hole is gravitational (positive gravity), and the white hole is smaller than the black hole. On the downstream side, the entangled pair appears with the black hole outside the gap growing until the white hole disappears, which is the emptiness paradox. On the upstream side, when moving faster than light, white holes form time tunnels, with black holes becoming smaller; moving faster and further still, the black hole disappears and becomes a wormhole (singularity) that is only a white hole in emptiness. This research studies the use of black and white holes generated by a two-level system circuit as communication transmission carriers, with which high data transmission ability and capacity can be obtained. The black and white hole pair can be generated by the two-level system circuit when the speed of a particle on the circuit is equal to the speed of light. The black hole forms when the particle speed increases from slower than to equal to the light speed, while the white hole is established when the particle comes back down from faster than light. They are bound as the entangled pair of signal and idler, ⟨Signal│Idler⟩, with the virtual one for the white hole, which has an angular displacement of half of π radian. A two-level system is made from an electronic circuit to create black and white holes bound by entangled bits that are immune, or cloning-free, from thieves. The process starts by creating wave-particle behavior; when its speed equals that of light, the black hole is in the middle of the entangled pair, which forms the two-bit gate. The required information can be input into the system and wrapped by the black hole carrier.
A time tunnel occurs when the wave-particle speed is faster than light, at which point the entangled pair collapses. The transmitted information sits safely in the time tunnel. The required time and space can be modulated via the input for the downlink operation. The downlink is established when the particle speed, given in frequency (energy) form, comes down and enters the entangled gap, where this time the white hole is established. The information with the required destination is wrapped by the white hole and retrieved by the clients at the destination. The black and white holes then disappear, and the information can be recovered and used.

Keywords: cloning free, time machine, teleportation, two-level system

Procedia PDF Downloads 74
179 Exploring the Impact of Input Sequence Lengths on Long Short-Term Memory-Based Streamflow Prediction in Flashy Catchments

Authors: Farzad Hosseini Hossein Abadi, Cristina Prieto Sierra, Cesar Álvarez Díaz

Abstract:

Predicting streamflow accurately in flashy catchments prone to floods is a major research and operational challenge in hydrological modeling. Recent advancements in deep learning, particularly Long Short-Term Memory (LSTM) networks, have shown promise in achieving accurate hydrological predictions at daily and hourly time scales. In this work, a multi-timescale LSTM (MTS-LSTM) network was applied to regional hydrological prediction at an hourly time scale in flashy catchments. The case study includes 40 catchments located in the Basque Country, northern Spain. We explore the impact of hyperparameters on the performance of streamflow predictions given by regional deep learning models through systematic hyperparameter tuning, in which optimal regional values for different catchments are identified. The results show that predictions are highly accurate, with Nash-Sutcliffe efficiency (NSE) and Kling-Gupta efficiency (KGE) values as high as 0.98 and 0.97, respectively. A principal component analysis reveals that the hyperparameter governing the length of the input sequence contributes most significantly to prediction performance. The findings suggest that input sequence lengths have a crucial impact on model prediction performance. Moreover, catchment-scale analysis reveals distinct sequence lengths for individual basins, highlighting the necessity of customizing this hyperparameter based on each catchment’s characteristics; this aligns with the well-known “uniqueness of place” paradigm. In prior research, tuning the input sequence length of LSTMs has received limited attention in streamflow prediction: initially it was set to 365 days to capture a full annual water cycle, and later, limited systematic hyperparameter tuning using grid search suggested reducing it to 270 days.
However, despite the significance of this hyperparameter in hydrological predictions, studies have usually overlooked its tuning and fixed it at 365 days. This study, employing a simultaneous systematic hyperparameter tuning approach, emphasizes the critical role of input sequence length as an influential hyperparameter in configuring LSTMs for regional streamflow prediction. Proper tuning of this hyperparameter is essential for achieving accurate hourly predictions with deep learning models.
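
The role of the input-sequence-length hyperparameter can be illustrated with the standard sliding-window construction of LSTM training pairs. This is a generic sketch, not the MTS-LSTM code; the function and variable names are assumptions.

```python
import numpy as np

def make_sequences(series, seq_len):
    """Split a streamflow series into (input window, next value) pairs.

    seq_len is the tunable input-sequence-length hyperparameter
    discussed above (e.g. 270 or 365 daily steps, or their hourly
    equivalents): each target value is predicted from the preceding
    seq_len observations.
    """
    X, y = [], []
    for t in range(len(series) - seq_len):
        X.append(series[t:t + seq_len])  # input window
        y.append(series[t + seq_len])    # value to predict
    return np.asarray(X), np.asarray(y)
```

Changing `seq_len` changes both how much history each prediction sees and how many training pairs a record of fixed length yields, which is why it can dominate prediction performance.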

Keywords: LSTMs, streamflow, hyperparameters, hydrology

Procedia PDF Downloads 69
178 Biomechanical Modeling, Simulation, and Comparison of Human Arm Motion to Mitigate Astronaut Task during Extra Vehicular Activity

Authors: B. Vadiraj, S. N. Omkar, B. Kapil Bharadwaj, Yash Vardhan Gupta

Abstract:

During manned exploration of space, missions will require astronaut crewmembers to perform Extra Vehicular Activities (EVAs) for a variety of tasks. These EVAs take place after long periods of operations in space, and in and around unique vehicles, space structures, and systems. Considering the remoteness and time spans in which these vehicles will operate, EVA system operations should utilize common worksites, tools, and procedures as much as possible to increase the efficiency of training and proficiency in operations. All of these preparations need to be carried out based on studies of astronaut motion. Until now, development and training activities associated with the planned EVAs in the Russian and U.S. space programs have relied almost exclusively on physical simulators; such experimental tests are expensive and time-consuming. During the past few years, a strong increase has been observed in the use of computer simulations due to fast developments in computer hardware and simulation software. Based on this idea, an effort to develop a computational simulation system to model human dynamic motion for EVA was initiated. This study focuses on the simulation of an astronaut moving orbital replaceable units into worksites or removing them from worksites. Our physics-based methodology helps fill the gap in quantitative analysis of astronaut EVA by providing a multi-segment human arm model. The simulation work described in this study improves on the realism of previous efforts by incorporating joint stops to account for the physiological limits of range of motion. To demonstrate the utility of this approach, the human arm model is simulated virtually using ADAMS/LifeMOD® software. The kinematics of the astronaut’s task are studied through joint angles and torques. The simulation results obtained are validated against a numerical simulation based on the principles of the Newton-Euler method.
Torques determined using the mathematical model are compared across subjects to assess the smoothness and consistency of the task performed. We conclude that, due to the uncertain nature of exploration-class EVA, a virtual model developed using a multibody dynamics approach offers significant advantages over traditional human modeling approaches.
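As a minimal sketch of the kind of Newton-Euler validation described above, the inverse dynamics of a planar two-link arm can be written in closed form. All masses, lengths, inertias, and joint states below are illustrative placeholders, not values from the study; the gravity terms are shown for a 1 g ground-based check, whereas for on-orbit EVA g would be near zero:

```python
import math

def two_link_torques(q, qd, qdd,
                     m=(2.0, 1.5), l=(0.30, 0.25), lc=(0.15, 0.125),
                     I=(0.02, 0.01), g=9.81):
    """Inverse dynamics tau = M(q)*qdd + C(q, qd) + G(q) for a planar
    two-link arm (shoulder and elbow). Parameter values are placeholders."""
    q1, q2 = q; qd1, qd2 = qd; qdd1, qdd2 = qdd
    m1, m2 = m; l1, _ = l; lc1, lc2 = lc; I1, I2 = I
    c2 = math.cos(q2)
    # Mass matrix entries
    M11 = m1*lc1**2 + I1 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I2
    M12 = m2*(lc2**2 + l1*lc2*c2) + I2
    M22 = m2*lc2**2 + I2
    # Coriolis/centrifugal terms
    h = m2*l1*lc2*math.sin(q2)
    C1 = -h*qd2**2 - 2*h*qd1*qd2
    C2 = h*qd1**2
    # Gravity terms (joint angles measured from the horizontal)
    G1 = (m1*lc1 + m2*l1)*g*math.cos(q1) + m2*lc2*g*math.cos(q1 + q2)
    G2 = m2*lc2*g*math.cos(q1 + q2)
    tau1 = M11*qdd1 + M12*qdd2 + C1 + G1
    tau2 = M12*qdd1 + M22*qdd2 + C2 + G2
    return tau1, tau2

# Static hold at the horizontal: the torques reduce to the gravity terms.
tau = two_link_torques(q=(0.0, 0.0), qd=(0.0, 0.0), qdd=(0.0, 0.0))
```

A multibody tool such as ADAMS integrates the forward problem numerically; comparing its joint torques against this closed-form inverse solution is one way the cross-validation in the abstract can be set up.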

Keywords: extra vehicular activity, biomechanics, inverse kinematics, human body modeling

Procedia PDF Downloads 342
177 Identification of a Lead Compound for Selective Inhibition of Nav1.7 to Treat Chronic Pain

Authors: Sharat Chandra, Zilong Wang, Ru-Rong Ji, Andrey Bortsov

Abstract:

Chronic pain (CP) therapeutic approaches have limited efficacy. As a result, doctors are prescribing opioids for chronic pain, contributing to the epidemic of opioid overuse, abuse, and addiction. Therefore, the development of effective and safe CP drugs remains an unmet medical need. Voltage-gated sodium (Nav) channels are molecular targets for cardiovascular and neurological disorders. Isoform-selective Nav channel inhibitors are hard to design because there are nine closely related isoforms (Nav1.1-1.9) that share highly similar protein sequence segments. We are targeting Nav1.7, which is found in the peripheral nervous system and is engaged in the perception of pain. The objective of this project was to screen a 1.5-million-compound library to identify inhibitors of Nav1.7 with an analgesic effect. In this study, we designed a protocol for the identification of isoform-selective inhibitors of Nav1.7 by utilizing prior information on isoform-selective antagonists. First, a similarity search was performed; then the identified hits were docked into a binding site on the fourth voltage-sensor domain (VSD4) of Nav1.7. We used the FTrees tool for similarity searching and library generation; the generated library was docked into the VSD4 binding site using FlexX, and compounds were shortlisted using the FlexX score and SeeSAR HYDE scoring. Finally, the top 25 compounds were tested with molecular dynamics simulation (MDS). We reduced our list to 9 compounds based on the MDS root mean square deviation plot and obtained them from a vendor for in vitro and in vivo validation. Whole-cell patch-clamp recordings in HEK-293 cells and dorsal root ganglion neurons were conducted. We used patch pipettes to record transient Na⁺ currents. One of the compounds reduced the peak sodium currents in a Nav1.7-HEK-293 stable cell line in a dose-dependent manner, with an IC50 value of 0.74 µM.
In summary, our computer-aided analgesic discovery approach allowed us to develop a pre-clinical analgesic candidate with a significant reduction in time and cost.
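The dose-dependent block reported above is conventionally summarized with a Hill inhibition model. This is a minimal sketch using the IC50 from the abstract; the Hill coefficient of 1.0 is an assumption, since the abstract does not report it:

```python
def fraction_blocked(conc_uM, ic50_uM=0.74, hill=1.0):
    """Hill inhibition model: fraction of peak Na+ current blocked at a
    given inhibitor concentration (µM). hill=1.0 is an assumed coefficient;
    the IC50 of 0.74 µM is the value reported in the abstract."""
    return conc_uM**hill / (conc_uM**hill + ic50_uM**hill)

# At the IC50, exactly half of the peak current is blocked by definition.
half = fraction_blocked(0.74)
# Block increases monotonically with concentration.
curve = [round(fraction_blocked(c), 3) for c in (0.1, 0.74, 10.0)]
```

In practice the IC50 is obtained by fitting this curve to the normalized peak currents from the patch-clamp recordings at several concentrations, rather than assumed.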

Keywords: chronic pain, voltage-gated sodium channel, isoform-selective antagonist, similarity search, virtual screening, analgesics development

Procedia PDF Downloads 123
176 Depth of Field: Photographs, Narrative and Reflective Learning Resource for Health Professions Educators

Authors: Gabrielle Brand, Christopher Etherton-Beer

Abstract:

The learning landscape of the higher education environment is changing, with an increased focus over the past decade on how educators might begin to cultivate reflective skills in health professions students. In addition, changing professional requirements demand that health professionals be adequately prepared to practice in today’s complex Australian health care systems, including responding to the changing demographics of population ageing. To counteract a widespread perception of health professions students’ disinterest in caring for older persons, the authors will report on an exploratory, mixed-method research study that used photographs, narrative and small group work to enhance medical and nursing students’ reflective learning experience. An innovative photo-elicitation technique and reflective questioning prompts were used to increase engagement, and challenge students to consider new perspectives (around ageing) by constructing shared storylines in small groups. The qualitative themes revealed how photographs, narratives and small group work created learning spaces for reflection whereby students could safely explore their own personal and professional values, beliefs and perspectives around ageing. By providing the space for reflection, the students reported how they found connection and meaning in their own learning through a process of self-exploration that often challenged their assumptions about both older people and themselves as future health professionals. By integrating cognitive and affective elements into the learning process, this research demonstrates the importance of embedding visual methodologies that enhance reflection and transformative learning.
The findings highlight the importance of integrating the arts into predominantly empirically driven health professional curricula and can be used as a catalyst for individual and/or collective reflection which can potentially enhance empathy, insight and understanding of the lived experiences of older patients. Based on these findings, the authors have developed ‘Depth of Field: Exploring Ageing’, an innovative, interprofessional, digital reflective learning resource that uses Prezi Inc. software (a storytelling tool that presents ideas on a virtual canvas) to enhance students’ reflective capacity in the higher education environment.

Keywords: narrative, photo-elicitation, reflective learning, qualitative research

Procedia PDF Downloads 284
175 Feminising Football and Its Fandom: The Ideological Construction of Women's Super League

Authors: Donna Woodhouse, Beth Fielding-Lloyd, Ruth Sequerra

Abstract:

This paper explores the structure and culture of the English Football Association (FA), the governing body of soccer in England, in relation to the development of the FA Women’s Super League (WSL). In doing so, it examines the organisation’s journey from banning the sport in 1921 to establishing the country’s first semi-professional female soccer league in 2011. As the FA has a virtual monopoly on defining the structures of the elite game, we attempted to understand its behaviour in the context of broader issues of power, control and resistance by giving voice to the experiences of those affected by its decisions. Observations were carried out at 39 matches over three years. Semi-structured interviews with 17 people involved in the women’s game, identified via snowball sampling, were also carried out. Transcripts accompanied detailed field notes and were inductively coded to identify themes. What emerged was the governing body’s desire to create a new product, jettisoning the long history of the women’s game in order to shape and control the sport in a way it is no longer able to with the elite male club game. The League created was also shaped by traditional conceptualisations of gender, in terms of the portrayal of its style of play and target audience, setting increased participation and spectatorship targets as measures of ‘success’. The national governing body has demonstrated pseudo-inclusion and a lack of enthusiasm for the implementation of equity reforms, driven by a belief that the organisation is already representative, fair and accessible. Despite consistent external pressure, the Football Association is still dominated at its most senior levels by males. Via claiming to hold a monopoly on expertise around the sport, maintaining complex committee structures and procedures, and with membership rules rooted in the amateur game, it remains a deeply gendered organisation, resistant to structural and cultural change.
In the WSL, the FA’s structure and culture have created a franchise over which it retains almost complete control, dictating the terms and conditions of entry and marginalising alternative voices. The organisation presents a feminised version of both play and spectatorship, portraying the sport as a distinct, and lesser, version of soccer.

Keywords: football association, organisational culture, soccer, women’s super league

Procedia PDF Downloads 352
174 The Evolution of the Israel Defence Forces’ Information Operations: A Case Study of the Israel Defence Forces' Activities in the Information Domain 2006–2014

Authors: Teemu Saressalo

Abstract:

This article examines the evolution of the Israel Defence Forces’ information operation activities during an eight-year timespan from the 2006 war with Hezbollah to more recent operations such as Pillar of Defence and Protective Edge. To this end, the case study will show a change in the Israel Defence Forces’ activities in the information domain. In the 2006 war with Hezbollah in Lebanon, Israel inflicted enormous damage on the Lebanese infrastructure, leaving more than 1,200 people dead and 4,400 injured. Casualties among Hezbollah, Israel’s main adversary, were estimated to range from 250 to 700 fighters. Damage to the Lebanese infrastructure was estimated at over USD 2.5bn, with almost 2,000 houses and buildings damaged and destroyed. Even this amount of destruction did not force Hezbollah to yield, and while both sides claimed victory in the war, Israel paid a heavier price in political backlashes and loss of reputation, mainly due to failures in the media and the way in which the war was portrayed and perceived in Israel and abroad. Much of this can be credited to Hezbollah’s efficient use of the media, and Israel’s failure to do so. Israel managed the next conflict it was engaged in completely differently – it had learnt its lessons and built up new ways to counter its adversary’s propaganda and media operations. In Operation Cast Lead at the turn of 2009, Hamas, Israel’s adversary and Gaza’s dominating faction, was not able to utilize the media in the same way that Hezbollah had. By creating a virtual and physical barrier around the Gaza Strip, Israel almost totally denied its adversary access to the worldwide media, and by restricting the movement of journalists in the area, Israel could let its voice be heard above all. Operation Cast Lead began with a deception operation, which caught Hamas totally off guard.
The 21-day campaign left the Gaza Strip devastated, but did not cause as much protest in Israel during the operation as the 2006 war did, mainly due to almost total Israeli dominance in the information dimension. The most important outcome from the Israeli perspective was the fact that Operation Cast Lead was assessed to be a success and the operation enjoyed domestic support along with support from many western nations, which had condemned Israeli actions in the 2006 war. Later conflicts have shown the same tendency towards virtually total dominance in the information domain, which has had an impact on target audiences across the world. Thus, it is clear that well-planned and conducted information operations are able to shape public opinion and influence decision-makers, although Israel might have been outpaced by its rivals.

Keywords: Hamas, Hezbollah, information operations, Israel Defence Forces

Procedia PDF Downloads 237
173 Site-based Internship Experiences: From Research to Implementation and Community Collaboration

Authors: Jamie Sundvall, Lisa Jennings

Abstract:

Site-based field internship learning (SBL) is an educational approach within a Master of Social Work (MSW) university field placement department that promotes a more streamlined approach to the integration of theory and evidence-based practices for social work students. The SBL model is founded on research in the field, consideration of current workforce needs, United States national trends in MSW graduates’ skill and knowledge deficits, educational trends among students pursuing a master’s degree in social work, and current social problems that require unique problem-solving skills. This study explores the use of site-based learning in a hybrid social work program. In this setting, site-based learning pairs online education courses and social work field education to create training opportunities for social work students within their own community and cultural context. Students engage in coursework in an online setting with both synchronous and asynchronous features that facilitate development of core competencies for MSW students. Through the SBL model, students are then partnered with faculty in a virtual course room and a university-vetted site within their community. The study explores how this model of learning creates community partnerships, through which students engage in a learning loop to develop social work skills, while preparing students to address current community, social, and global issues with the engagement of technology. The goal of SBL is to more effectively equip social work students for practice according to current workforce demands, provide access to education and care to populations who have limited access, and create self-sustainable partnerships. Further, the model helps students learn to integrate evidence-based practices and helps instructors more effectively teach the integration of ethics into practice.
The study found that the SBL model increases the influence and professional relevance of the social work profession, and ultimately facilitates stronger approaches to integrating theory into practice. Current implementation of the practice in the United States will be presented in the study. Additionally, future research conceptualization of SBL models will be presented, in order to collaborate on advancing the best approaches to translating theory into practice, according to the current needs of the profession and of social work students.

Keywords: collaboration, fieldwork, research, site-based learning, technology

Procedia PDF Downloads 125
172 Spatial Architecture Impact in Mediation Open Circuit Voltage Control of Quantum Solar Cell Recovery Systems

Authors: Moustafa Osman Mohammed

Abstract:

Photocurrent generation underpins ultra-high-efficiency solar cells based on self-assembled quantum dot (QD) nanostructures. Nanocrystal quantum dots (QDs) offer a route to greatly enhanced solar cell efficiencies through the use of quantum confinement to tune absorbance across the solar spectrum, enabling multi-exciton generation. Based on theoretical predictions, QDs have the potential to raise system efficiency above 50% under approximately uniform electron excitation intensity. In solar cell devices, an intermediate band is formed by the electron levels in quantum dot systems. The spatial architecture explores how a solar cell can integrate and produce not only a high open-circuit voltage (> 1.7 V) but also large short-circuit currents due to the efficient absorption of sub-bandgap photons. In the proposed QD system, the structure allows the barrier material to absorb wavelengths below 700 nm while multi-photon processes in the quantum dots absorb wavelengths up to 2 µm. The assembly of the electronic model is flexible enough to represent the atomic and molecular structure and material properties, allowing the energy bandgaps of the barrier and quantum dots to be tuned to their respective optimum values. In terms of energy conversion, the efficiency and cost of the electronic structure outperform a pair of multi-junction solar cells, as quantified in rigorous tests of the errors. The milestone toward achieving the claimed high-efficiency solar cell device is controlling the energy bandgap edges between the barrier material and the quantum dot systems within the design limits. Despite this remarkable potential for high photocurrent generation, the achievable open-circuit voltage (Voc) is fundamentally limited due to non-radiative recombination processes in QD solar cells.
The voltage recovery system is compared theoretically with the experimental Voc variation against the upper limit obtained from one-diode modeling of cells with different bandgaps (Eg), as classified in the proposed spatial architecture. The opportunity to improve Voc, estimated at more than 1 V, arises from using smaller QDs in the QD solar cell recovery systems, as confined to other micro- and nano-scale operating states.
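The one-diode Voc upper limit mentioned above follows from setting the net current of the ideal diode equation to zero. This is a generic sketch; the saturation and photocurrent densities are placeholder values, not measurements from this work:

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

def voc_one_diode(j_sc, j_0, n=1.0, temp_k=300.0):
    """Open-circuit voltage (V) from the one-diode model:
    Voc = n*kT/q * ln(Jsc/J0 + 1). Current densities in consistent units."""
    vt = n * K_B * temp_k / Q_E  # thermal voltage
    return vt * math.log(j_sc / j_0 + 1.0)

# Placeholder values: Jsc = 30 mA/cm^2, J0 = 1e-12 A/cm^2.
voc = voc_one_diode(j_sc=30e-3, j_0=1e-12)
# Lower J0 (less non-radiative recombination) raises the Voc ceiling.
voc_low_recomb = voc_one_diode(j_sc=30e-3, j_0=1e-15)
```

Since J0 is dominated by non-radiative recombination in QD cells, suppressing those processes directly raises the achievable Voc, which is the limitation the abstract identifies.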

Keywords: nanotechnology, photovoltaic solar cell, quantum systems, renewable energy, environmental modeling

Procedia PDF Downloads 156
171 Online Think–Pair–Share in a Third-Age Information and Communication Technology Course

Authors: Daniele Traversaro

Abstract:

Problem: Senior citizens have been facing a challenging reality as a result of strict public health measures designed to protect people from the COVID-19 outbreak. These include the risk of social isolation due to the inability of the elderly to integrate with technology. Never before have information and communication technology (ICT) skills been so essential for their everyday lives. Although third-age ICT education and lifelong learning are widely supported by universities and governments, there is a lack of literature on which teaching strategy/methodology to adopt in an entirely online ICT course aimed at third-age learners. This contribution aims to present an application of the Think-Pair-Share (TPS) learning method in an ICT third-age virtual classroom with an intergenerational approach to conducting online group labs and review activities. This collaborative strategy can help increase student engagement, promote active learning and online social interaction. Research Question: Is collaborative learning applicable and effective, in terms of student engagement and learning outcomes, for an entirely online third-age ICT introductory course? Methods: In the TPS strategy, a problem is posed by the teacher, students have time to think about it individually, and then they work in pairs (or small groups) to solve the problem and share their ideas with the entire class. We performed four experiments in the ICT course of the University of the Third Age of Genova (University of Genova, Italy) on the Microsoft Teams platform. The study cohort consisted of 26 students over the age of 45. Data were collected through online questionnaires. Two were administered, one at the end of the first activity and another at the end of the course. They consisted of five and three closed-ended questions, respectively.
The answers were on a Likert scale (from 1 to 4), except for two questions (which asked for the number of correct answers given individually and in groups) and a field for free comments/suggestions. Results: The results show that groups perform better than individual students (with group scores higher by more than an order of magnitude) and that most students found it helpful to work in groups and interact with their peers. Insights: From these early results, it appears that TPS is applicable to an online third-age ICT classroom and useful for promoting discussion and active learning. Nevertheless, our experimentation has a number of limitations. First of all, the results highlight the need for more data to permit a statistical analysis determining the effectiveness of this methodology in terms of student engagement and learning outcomes; this is a future direction.

Keywords: collaborative learning, information technology education, lifelong learning, older adult education, think-pair-share

Procedia PDF Downloads 188
170 Gender Policies and Political Culture: An Examination of the Canadian Context

Authors: Chantal Maille

Abstract:

This paper is about gender-based analysis plus (GBA+), an intersectional gender policy used in Canada to assess the impact of policies and programs for men and women from different origins. It looks at Canada’s political culture to explain the nature of its gender policies. GBA+ is defined as an analysis method that makes it possible to assess the eventual effects of policies, programs, services, and other initiatives on women and men of different backgrounds because it takes account of gender and other identity factors. The ‘plus’ in the name serves to emphasize that GBA+ goes beyond gender to include an examination of a wide range of other related identity factors, such as age, education, language, geography, culture, and income. The point of departure for GBA+ is that women and men are not homogeneous populations and gender is never the only factor in defining a person’s identity; rather, it interacts with factors such as ethnic origin, age, disabilities, where the person lives, and other aspects of individual and social identity. GBA+ takes account of these factors and thus challenges notions of similarity or homogeneity within populations of women and men. Comparative analysis based on sex and gender may serve as a gateway to studying a given question, but women, men, girls, and boys do not form homogeneous populations. In the 1990s, intersectionality emerged as a new feminist framework. The popularity of the notion of intersectionality corresponds to a time when, in hindsight, the damage done to minoritized groups by state disengagement policies in concert with global intensification of neoliberalism, and vice versa, can be measured. Although GBA+ constitutes a form of intersectionalization of GBA, it must be understood that the two frameworks do not spring from a similar logic. 
Intersectionality first emerged as a dynamic analysis of differences between women that was oriented toward change and social justice, whereas GBA is a technique developed by state feminists in a context of analyzing governmental policies and aiming to promote equality between men and women. It can nevertheless be assumed that there might be interest in such a policy and program analysis grid that is decentred from gender and offers enough flexibility to take account of a group of inequalities. In terms of methodology, the research is supported by a qualitative analysis of governmental documents about GBA+ in Canada. Research findings identify links between Canadian gender policies and its political culture. In Canada, diversity has been taken into account as an element at the basis of gendered analysis of public policies since 1995. The GBA+ adopted by the government of Canada conveys an opening to intersectionality and a sensitivity to multiculturalism. The Canadian Multiculturalism Act, adopted in 1988, recognizes that multiculturalism is a fundamental characteristic of the Canadian identity and heritage and constitutes an invaluable resource for the future of the country. In conclusion, Canada’s distinct political culture can be associated with the specific nature of its gender policies.

Keywords: Canada, gender-based analysis, gender policies, political culture

Procedia PDF Downloads 222
169 Perception of Eco-Music From the Contents the Earth’s Sound Ecosystem

Authors: Joni Asitashvili, Eka Chabashvili, Maya Virsaladze, Alexander Chokhonelidze

Abstract:

Studying the soundscape is a major challenge in many countries today. The sound environment and music itself are part of the Earth's ecosystem. Therefore, researching its positive or negative impact is important for a clean and healthy environment. The acoustics of nature gave people many musical ideas, and people enriched musical features and performance skills with the ability to imitate surrounding sounds. For example, populations surrounded by mountains invented the technique of antiphonal singing, which mimics the effect of an echo. The Canadian composer Raymond Murray Schafer viewed the world as a kind of musical instrument with ever-renewing tuning. He coined the term "soundscape" as a name for natural environmental sound, including the sound field of the Earth; it can be said that this is what the "music of nature" is constructed from. In the 21st century, a new field, ecomusicology, has emerged within musical art to study the sound ecosystem and various issues related to it. According to Aaron Allen, ecomusicology considers the interconnections between music, culture, and nature. Eco-music is a field of ecomusicology concerned with the depiction and realization of practical processes using modern composition techniques: finding an artificial sound source (instrumental or electronic) for a piece that will blend into the soundscape of "sound oases", and creating a composition that sounds in harmony with the vibrations of humans, nature, the environment, and the micro- and macrocosm as a whole. Currently, we are exploring the ambient sound of Georgian urban and suburban environments to discover "sound oases" and compose eco-music works. We call a "sound oasis" an environment with a specific ecosystem sound that can be used in a musical piece as an instrument. The most interesting early examples of eco-music are the round dances, which were already created in the BC era. In round dances, people would feel a united energy.
This urge to unite reveals itself in our age too, manifesting in a variety of social media. The virtual world, however, is not enough for healthy interaction; we therefore created a plan for a "contemporary round dance" in a sound oasis found during an expedition in Georgian caves, where people interact with the cave's soundscape and eco-music, feel each other by sharing energy, and listen to the sound of the earth. This project could be considered a contemporary round dance: a long improvisation and a particular type of art therapy in which everyone can participate in an artistic process. We would like to present the research results of our eco-music experimental performance.

Keywords: eco-music, environment, sound, oasis

Procedia PDF Downloads 61
168 Simulation of the Flow in a Circular Vertical Spillway Using a Numerical Model

Authors: Mohammad Zamani, Ramin Mansouri

Abstract:

Spillways are one of the most important hydraulic structures of dams, providing the stability of the dam and downstream areas at the time of flood. A circular vertical spillway with various inlet forms is very effective when there is not enough space for other spillway types. Hydraulic flow in a vertical circular spillway is divided into three regimes: free, orifice, and under pressure (submerged). In this research, the hydraulic flow characteristics of a circular vertical spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for the velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power-law scheme was used for the discretization of the momentum, k, ε, and ω equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. In this study, three types of computational grids (coarse, intermediate, and fine) were used to discretize the simulation domain. In order to simulate the flow, the k-ε (Standard, RNG, Realizable) and k-ω (Standard and SST) models were used. Also, in order to find the best wall treatment, two types, the standard wall function and the non-equilibrium wall function, were investigated. The laminar model did not produce satisfactory flow depth and velocity along the morning-glory spillway. The results of the most commonly used two-equation turbulence models (k-ε and k-ω) were identical. Furthermore, the standard wall function produced better results compared to the non-equilibrium wall function. Thus, for the remaining simulations, the standard k-ε model with the standard wall function was preferred. The comparison criterion in this study is the trajectory profile of the water jet.
The results show that the fine computational grid, a velocity-inlet condition at the flow inlet boundary, and a pressure-outlet condition at the boundaries in contact with air provide the best possible results. The standard wall function was chosen as the wall treatment, and the standard k-ε turbulence model gives the results most consistent with the experiments. As the jet approaches the end of the basin, the difference between the numerical and experimental results increases. The mesh with 10602 nodes, the standard k-ε turbulence model, and the standard wall function provide the best results for modeling the flow in a vertical circular spillway. There was good agreement between numerical and experimental results in the upper and lower nappe profiles. In the study of water level over the crest versus discharge, at low water levels the numerical results are in good agreement with the experimental ones, but as the water level increases, the difference between the numerical and experimental discharge grows. In the study of the flow coefficient, the difference between the numerical and experimental results increases as the P/R ratio decreases.
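Since three grid levels (coarse, intermediate, fine) were compared, the discretization error of such a study is commonly quantified with Richardson extrapolation and the grid convergence index (GCI). This is a generic sketch with placeholder discharge-coefficient values, not data from the study, and it assumes a constant grid refinement ratio:

```python
import math

def grid_convergence(f_fine, f_mid, f_coarse, r=2.0, fs=1.25):
    """Observed order of accuracy and fine-grid GCI for three solutions at a
    constant grid refinement ratio r (Roache's GCI procedure)."""
    # Observed order of accuracy from the three solutions
    p = math.log((f_coarse - f_mid) / (f_mid - f_fine)) / math.log(r)
    # Relative error between the two finest grids
    e21 = abs((f_fine - f_mid) / f_fine)
    gci_fine = fs * e21 / (r**p - 1.0)
    return p, gci_fine

# Placeholder spillway discharge coefficients on coarse/intermediate/fine grids.
p, gci = grid_convergence(f_fine=0.495, f_mid=0.490, f_coarse=0.480)
```

A small fine-grid GCI indicates that further refinement beyond the 10602-node mesh would change the solution little, supporting the mesh choice reported above.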

Keywords: circular vertical, spillway, numerical model, boundary conditions

Procedia PDF Downloads 86
167 The Life Skills Project: Client-Centered Approaches to Life Skills Acquisition for Homeless and At-Risk Populations

Authors: Leah Burton, Sara Cumming, Julianne DiSanto

Abstract:

Homelessness is a widespread and complex problem in Canada and around the globe. Many Canadians will face homelessness at least once in their lifetime, with several experiencing subsequent bouts or cyclical patterns of housing precarity. While a Housing First approach to homelessness is a long-standing and widely accepted best practice, it is also recognized that the acquisition of life skills is an effective way to reduce cycles of homelessness. Indeed, when individuals are provided with a range of life skills—such as (but not limited to) financial literacy, household management, interpersonal skills, critical thinking, and resource management—they are given the tools required to maintain housing for a lifetime, thus reducing the repeated need for services. However, there is limited research regarding the best ways to teach life skills, a problem that has been further complicated in a post-pandemic world, where services are being delivered online or in a hybrid model of care. More than this, it is difficult to provide life skills training on a large scale without losing a client-centered approach to services. This lack of client-centeredness is also seen in the lack of attention to culturally sensitive life skills, which consider the diverse needs of individuals and embed equity, diversity, and inclusion (EDI) within the skills being taught. This study aims to fill these identified gaps in the literature by employing a community-engaged research (CER) approach. Academics, government, funders, front-line staff, and clients at 15 not-for-profits from across the Greater Toronto Area in Ontario, Canada, collaborated to co-create a virtual, client-centric, EDI-informed life skills learning management system. A triangulation methodology was utilized for this research.
An environmental scan was conducted for current best practices, and over 100 front-line staff (including workers, managers, and executive directors who work with homeless populations) participated in two separate Creative Problem Solving sessions. Over 200 individuals with experience of homelessness completed quantitative and open-ended surveys. All sections of this research aimed to discover the areas of skill that individuals need to maintain housing and to ascertain what a more client-driven EDI approach to life skills training should include. This presentation will showcase the findings on which life skills are deemed essential for homeless and precariously housed individuals.

Keywords: homelessness, housing first, life skills, community engaged research, client- centered

Procedia PDF Downloads 101
166 Brazilian Transmission System Efficient Contracting: Regulatory Impact Analysis of Economic Incentives

Authors: Thelma Maria Melo Pinheiro, Guilherme Raposo Diniz Vieira, Sidney Matos da Silva, Leonardo Mendonça de Oliveira Queiroz, Mateus Sousa Pinheiro, Danyllo Wenceslau de Oliveira Lopes

Abstract:

This article describes the regulatory impact analysis (RIA) of the efficiency of contracting Brazilian transmission system usage. This contracting is carried out by users connected to the main transmission network and is used to guide the investments necessary to supply electrical energy demand. An inefficient contracting of this capacity therefore distorts the real need for grid capacity, affecting the accuracy of sector planning and the optimization of resources. In order to promote this efficiency, the Brazilian Electricity Regulatory Agency (ANEEL) homologated Normative Resolution (NR) No. 666, of July 23, 2015, which consolidated the procedures for contracting transmission system usage and for verifying contracting efficiency. Aiming for a more efficient and rational contracting of the transmission system, the resolution established economic incentives denominated the inefficiency installment for excess (IIE) and the inefficiency installment for over-contracting (IIOC). The first one, IIE, is verified when the contracted demand exceeds the established regulatory limit; it is applied to consumer units, generators, and distribution companies. The second one, IIOC, is verified when distributors over-contract their demand. Thus, the establishment of the inefficiency installments IIE and IIOC intends to prevent agents from contracting less energy than necessary or more than is needed. Knowing that an RIA evaluates a regulatory intervention to verify whether its goals were achieved, the results of applying the above-mentioned normative resolution to the Brazilian transmission sector were analyzed through indicators created for this RIA to evaluate the efficiency of contracting transmission system usage, using real data from before and after the homologation of the normative resolution in 2015.
For this purpose, three indicators were used: the contracting efficiency indicator (ECI), the excess of demand indicator (EDI), and the over-contracting of demand indicator (ODI). The ECI analysis showed a decrease in contracting efficiency, a trend already present before the 2015 resolution. On the other hand, the EDI showed a considerable decrease in the amount of excess for distributors and a small reduction for generators; moreover, the ODI decreased notably, which optimizes the usage of transmission installations. Hence, from the complete evaluation of the data and indicators, it was possible to conclude that the IIE is a relevant incentive for more efficient contracting, signaling to agents that their contracted values are not adequate to maintain service provision to their users. The IIOC is also relevant, in that it shows distributors that their contracted values are overestimated.
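The abstract does not publish the indicator formulas, so the sketch below only illustrates the general idea behind an excess-of-demand measure: the share of contracted demand above a regulatory limit, aggregated over agents. The function name, the formula, and the before/after figures are all hypothetical, not ANEEL data or the paper's actual EDI definition.

```python
# Hypothetical sketch of an excess-of-demand style indicator; the real
# EDI formula is not given in the abstract, so this is an assumption.

def excess_of_demand_indicator(contracted_mw, limit_mw):
    """Assumed EDI: total contracted demand above the regulatory
    limit, as a fraction of total contracted demand."""
    excess = sum(max(c - lim, 0.0) for c, lim in zip(contracted_mw, limit_mw))
    total = sum(contracted_mw)
    return excess / total if total else 0.0

# Illustrative before/after figures (made up, not real sector data):
before = excess_of_demand_indicator([120, 95, 80], [100, 100, 75])
after = excess_of_demand_indicator([105, 98, 76], [100, 100, 75])
print(f"EDI before: {before:.3f}, after: {after:.3f}")
```

A decrease in such an indicator after 2015 would correspond to the reduction in contracted excess that the study reports for distributors.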

Keywords: contracting, electricity regulation, evaluation, regulatory impact analysis, transmission power system

Procedia PDF Downloads 121
165 Part Variation Simulations: An Industrial Case Study with an Experimental Validation

Authors: Narendra Akhadkar, Silvestre Cano, Christophe Gourru

Abstract:

Injection-molded parts are widely used in power system protection products. One of the biggest challenges in an injection molding process is the shrinkage and warpage of the molded parts. These geometrical variations may adversely affect product quality, functionality, cost, and time-to-market. The situation becomes more challenging for intricate shapes and in mass production using multi-cavity tools. To control the effects of shrinkage and warpage, it is very important to correctly identify the input parameters that could affect product performance. With advances in computer-aided engineering (CAE), different tools are available to simulate the injection molding process; for this case study, we used the Moldflow Insight tool. Our aim is to predict the spread of the functional dimensions and geometrical variations of the part due to variations in input parameters such as material viscosity, packing pressure, mold temperature, melt temperature, and injection speed. The input parameters may vary during batch production or due to variations in machine process settings. To perform an accurate product assembly variation simulation, the first step is an individual part variation simulation that yields realistic tolerance ranges. In this article, we present a method to simulate part variations arising from input parameter variation during batch production. The method is based on computer simulations and experimental validation using a full factorial design of experiments (DoE). The robustness of the simulation model is verified through a parameter-wise sensitivity analysis performed with both simulations and experiments; all results show a very good correlation in the material flow direction. There exists a non-linear interaction between the material and the input process variables.
It is observed that parameters such as packing pressure, material, and mold temperature play an important role in the spread of functional dimensions and geometrical variations. This method will allow us, in the future, to develop accurate and realistic virtual prototypes based on trusted simulated process variation and, therefore, to increase product quality and potentially decrease time-to-market.
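A full factorial DoE of the kind described above can be sketched as follows. The factor names echo the abstract, but the levels and the response function are illustrative placeholders standing in for actual Moldflow Insight runs.

```python
# Minimal sketch of a full factorial DoE over molding process parameters;
# levels and the response model are assumptions, not study data.
from itertools import product

factors = {
    "packing_pressure_mpa": [60.0, 80.0],
    "mold_temp_c": [40.0, 60.0],
    "melt_temp_c": [220.0, 240.0],
}

def simulated_shrinkage(pressure, mold_t, melt_t):
    # Placeholder linear response standing in for a molding simulation.
    return 0.02 - 1e-4 * pressure + 5e-5 * mold_t + 2e-5 * melt_t

# Enumerate every combination of factor levels (2^3 = 8 runs).
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
results = [simulated_shrinkage(r["packing_pressure_mpa"],
                               r["mold_temp_c"],
                               r["melt_temp_c"]) for r in runs]
print(f"{len(runs)} runs, shrinkage spread = {max(results) - min(results):.5f}")
```

The spread of the response over all runs is the kind of quantity the study uses to bound the variation of functional dimensions during batch production.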

Keywords: correlation, molding process, tolerance, sensitivity analysis, variation simulation

Procedia PDF Downloads 178
164 Towards the Development of Uncertainties Resilient Business Model for Driving the Solar Panel Industry in Nigeria Power Sector

Authors: Balarabe Z. Ahmad, Anne-Lorène Vernay

Abstract:

Electricity generation in Nigeria dates back to 1896. The country's power plants have the potential to generate 12,522 MW of electric power, whereas current dispatch is about 4,000 MW; access to electrification is about 60%, with consumption at 0.14 MWh per capita. The government embarked on energy reforms to mitigate energy poverty, targeting electricity access for 75% of the population by 2020 and 90% by 2030. Total electricity demand is projected to grow by a factor of five by 2035, meaning that Nigeria will require almost 530 TWh of electricity, which can be delivered through generators with a capacity of 65 GW. Moreover, Nigeria's geographical location places it in an advantageous position for solar energy, with a high-sunshine belt evident across the country. The far North, where energy poverty is high, receives about twice the solar radiation of southern Nigeria. Hence, the chance of generating solar electricity is estimated at 66%, at 11,850 x 10^3 GWh per year, one hundred times the current electricity consumption rate in the country. Harvesting this huge potential may remain a mirage if entrepreneurs in the solar panel business are left with conventional business models that are not resilient to uncertainty. Currently, renewable energy (RE) businesses in Nigeria face uncertainty about access to the national grid, the purchasing potential of cooperating organizations, currency fluctuations, and interest rate increases. Uncertainties such as project security and government policy are issues entrepreneurs must navigate to remain sustainable in the solar panel industry in Nigeria. The aim of this paper is to identify how entrepreneurial firms consider uncertainties in developing workable business models for commercializing solar energy projects in Nigeria.
In an attempt to develop a novel business model, the paper investigated how entrepreneurial firms assess and navigate uncertainties. The roles of key stakeholders in helping entrepreneurs manage uncertainties in the Nigerian RE sector were probed in the ongoing study, which explored empirical uncertainties peculiar to RE entrepreneurs in Nigeria. A mixed-mode research design was adopted, using qualitative data from face-to-face interviews with solar energy entrepreneurs and experts drawn from key stakeholders. Content analysis of the interviews was performed using ATLAS.ti 9, a qualitative analysis tool. The results suggest that all stakeholders must synergize in developing an uncertainty-resilient business model. RE entrepreneurs need modifications to the business recommendations encapsulated in Nigeria's energy policy to strengthen their capability to deliver solar energy solutions to the Nigerians who need them.

Keywords: uncertainties, entrepreneurial, business model, solar-panel

Procedia PDF Downloads 149
163 Technological Affordances of a Mobile Fitness Application: A Role of Escapism and Social Outcome Expectation

Authors: Inje Cho

Abstract:

The leading health risks threatening the world today are associated with a modern lifestyle characterized by sedentary behavior, stress, anxiety, and an obesogenic food environment. To counter this alarming trend, the Centers for Disease Control and Prevention have issued physical activity guidelines to bolster physical engagement. Concurrently, the proliferation of smartphones and mobile applications has brought a wave of fitness applications aimed at encouraging exercise adherence and real-time activity monitoring. Grounded in uses and gratifications theory, this study delves into the technological affordances of mobile fitness applications, discerning the mediating influences of escapism and social outcome expectations on attitudes and exercise intention. The theory explains how individuals employ distinct communication mediums to satisfy their needs and desires. Technological affordances manifest as attributes of emerging technologies that galvanize personal engagement in physical activities; features of mobile fitness applications include affordances for goal setting, virtual rewards, peer support, and exercise information. Escapism, denoting the inclination to disengage from normal routines, has emerged as a salient motivator for the consumption of new media. This study postulates that individuals' perceptions of technological affordances within mobile fitness applications can affect escapism and social outcome expectations, potentially influencing attitude and behavior formation. Thus, an integrated model has been developed to empirically examine the interrelationships between technological affordances, escapism, social outcome expectations, and exercise intention. Structural equation modeling serves as the methodological tool, and a cohort of 400 Fitbit users will be recruited from Prolific, a data collection platform.
A sequence of multivariate data analyses will scrutinize both the measurement and the hypothesized structural models. By examining the effects of mobile fitness applications, this study contributes to the growing body of new media studies in sport management. Moreover, the novel integration of uses and gratifications theory and technological affordances through the prism of escapism illustrates the dynamics that underlie mobile fitness users' attitudes and behavioral intentions. The findings therefore contribute to theoretical understanding and provide pragmatic insights for developers and practitioners seeking to optimize the impact of mobile fitness applications.
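The mediation logic behind the hypothesized model can be illustrated with a toy path analysis. This is not the study's SEM: the variables, path weights, and synthetic data below are all assumptions, and only a single mediation chain (affordances → escapism → intention) is fitted with ordinary least squares rather than full structural equation modeling.

```python
# Hedged sketch of one hypothesized mediation path, fitted with simple
# least-squares regressions on synthetic data (not the study's SEM).
import numpy as np

rng = np.random.default_rng(0)
n = 400  # matches the planned sample size of Fitbit users
affordance = rng.normal(size=n)                                  # predictor
escapism = 0.6 * affordance + rng.normal(scale=0.5, size=n)      # mediator
intention = 0.5 * escapism + 0.2 * affordance \
            + rng.normal(scale=0.5, size=n)                      # outcome

def path_coefs(y, *xs):
    """Least-squares slopes of y on the given predictors (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = path_coefs(escapism, affordance)[0]             # affordance -> escapism
b, c = path_coefs(intention, escapism, affordance)  # mediator path, direct path
print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c:.3f}")
```

In the study's integrated model, such indirect effects are what the SEM estimates when testing whether escapism and social outcome expectations mediate the affordance-intention link.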

Keywords: technological affordances, uses and gratification, mobile fitness apps, escapism, physical activity

Procedia PDF Downloads 80
162 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality

Authors: Qian Yi Ooi

Abstract:

At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, leaving slight deviations due to scale differences. However, insufficient parameters or poor surface mesh quality are likely to occur if these small deviations are carried into a future civil aircraft whose size differs greatly from conventional aircraft, such as a blended-wing-body (BWB) aircraft, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, this study examines the geometric similarity of airfoil parameters and surface mesh quality in CFD calculations, to determine how different parameterization methods perform at different airfoil scales. The research objects are three airfoil scales, representing the wing root and wingtip of a conventional civil aircraft and the wing root of a giant blended wing, each parameterized by three methods to compare the calculation differences between airfoil sizes. The constants in this study are the NACA 0012 profile, a Reynolds number of 10 million, a zero angle of attack, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The experimental variables are three airfoil parameterization methods: the point cloud method, the B-spline curve method, and the class function/shape function transformation (CST) method. The airfoil dimensions are set to 3.98 meters, 17.67 meters, and 48 meters, respectively. In addition, the study uses different numbers of edge mesh divisions with the same bias factor in the CFD simulation. The results show that as the airfoil scale changes, different parameterization methods, numbers of control points, and numbers of mesh divisions should be used to maintain the accuracy of the wing's aerodynamic performance.
As the airfoil scale increases, the most basic point cloud parameterization method requires more and larger data sets to support the accuracy of the airfoil's aerodynamic performance, facing the severe test of insufficient computing capacity. With the B-spline curve method, the number of control points and the number of mesh divisions must be set appropriately to obtain higher accuracy; however, this balance cannot be defined directly and must be found iteratively by adding and subtracting points. Lastly, with the CST method, a limited number of control points is enough to accurately parameterize the larger wing, and a high degree of accuracy and stability can be obtained even on a lower-performance computer.
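The CST method mentioned above combines a class function with a Bernstein-polynomial shape function. A minimal sketch for a symmetric half-thickness distribution follows; the four Bernstein weights are illustrative placeholders, not values fitted to NACA 0012 as in the study.

```python
# Minimal sketch of the class/shape function transformation (CST) method
# for a symmetric airfoil half-thickness; weights are assumptions.
from math import comb

def cst_thickness(x, weights, n1=0.5, n2=1.0):
    """Half-thickness at chordwise station x in [0, 1].

    n1=0.5, n2=1.0 is the standard class function for a round-nosed,
    sharp-trailing-edge airfoil.
    """
    n = len(weights) - 1
    class_fn = (x ** n1) * ((1.0 - x) ** n2)
    shape_fn = sum(w * comb(n, i) * x**i * (1.0 - x)**(n - i)
                   for i, w in enumerate(weights))
    return class_fn * shape_fn

# A handful of control weights can describe the whole surface, which is
# why the method scales well to larger wings.
weights = [0.17, 0.16, 0.15, 0.16]
surface = [cst_thickness(x / 50.0, weights) for x in range(51)]
print(f"max half-thickness = {max(surface):.4f} of chord")
```

Because the geometry is captured by a few weights rather than a dense point cloud, refining the airfoil scale does not inflate the parameter count, which matches the abstract's observation about computing requirements.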

Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality

Procedia PDF Downloads 222
161 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids

Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje

Abstract:

The progressing cavity pump (PCP) is a type of positive displacement pump that is gaining importance as artificial lift equipment in heavy oil fields. The most commonly used PCP is the driven single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed through experimental and computational fluid dynamics (CFD) approaches using the DCAB031 model installed in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate, with a flow control valve installed at the outlet of the pump. The flow rate handled was measured by a FLOMEC-OM025 oval gear flowmeter. For each flow rate considered, the pump's rotational speed and power input were controlled using an Invertek Optidrive E3 frequency drive. Once steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide-range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model was implemented in Star-CCM+ using an overset mesh that includes the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations provide detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations agree well with the experimental data, with a mean squared error (MSE) under 21%, and the grid convergence index (GCI) calculated for mesh validation was 2.5%. Three rotational speeds were evaluated (200, 300, and 400 rpm), showing a directly proportional relationship between the rotational speed of the rotor and the calculated flow rate.
The maximum production rates at the different speeds were 3.8 GPM, 4.3 GPM, and 6.1 GPM for water, and 1.8 GPM, 2.5 GPM, and 3.8 GPM for the oils tested, respectively. Likewise, an inversely proportional relationship between fluid viscosity and pump performance was observed: the viscous oils showed the lowest pressure increase and the lowest volumetric flow pumped, with a degradation of around 30% in pressure rise between performance curves. Finally, the productivity index (PI) remained approximately constant across the speeds evaluated; however, it diminished between fluids due to viscosity.
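The grid convergence index reported for the mesh validation follows the standard Richardson-extrapolation procedure, which can be sketched as below. The three grid solutions and the refinement ratio are illustrative numbers, not the paper's actual data or its 2.5% result.

```python
# Sketch of the fine-grid Grid Convergence Index (GCI) computation; the
# sample pressure-rise values and refinement ratio are assumptions.
from math import log

def gci_fine(f_fine, f_med, f_coarse, r, fs=1.25):
    """Fine-grid GCI via Richardson extrapolation, constant ratio r.

    fs = 1.25 is the customary safety factor for three-grid studies.
    Returns (GCI as a fraction, observed order of convergence p).
    """
    p = log(abs((f_coarse - f_med) / (f_med - f_fine))) / log(r)
    rel_err = abs((f_med - f_fine) / f_fine)
    return fs * rel_err / (r**p - 1.0), p

# Illustrative pressure rise (bar) on coarse/medium/fine grids, ratio 2:
gci, p = gci_fine(f_fine=10.10, f_med=10.25, f_coarse=10.85, r=2.0)
print(f"observed order p = {p:.2f}, GCI = {100 * gci:.2f}%")
```

A GCI of a few percent, like the 2.5% reported in the study, indicates that the overset-mesh solution is effectively grid independent.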

Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise

Procedia PDF Downloads 128
160 Progressive Damage Analysis of Mechanically Connected Composites

Authors: Şeyma Saliha Fidan, Ozgur Serin, Ata Mugan

Abstract:

When performing verification analyses of composite structures used in aviation under the static and dynamic loads to which they are exposed, it is necessary to obtain the bearing strength limit value for mechanically connected composite structures. For this purpose, various tests are carried out in accordance with aviation standards. Many companies around the world perform these tests, but the test costs are very high. Moreover, given the need to produce coupons, the high cost of coupon materials, and the long test times, it is necessary to simulate these tests on the computer. To this end, test coupons were produced using the reinforcement and alignment angles of composite radomes integrated into aircraft. Glass-fiber-reinforced and quartz prepregs were used in the production of the coupons. The tests performed according to the American Society for Testing and Materials (ASTM) D5961 Procedure C standard were simulated on the computer. The analysis model was created in three dimensions to model the bolt-hole contact surface realistically and obtain an exact bearing strength value. The finite element model was built in ANSYS. Since a physical fracture cannot occur in analyses carried out in a virtual environment, a hypothetical fracture is realized by reducing the material properties. The material property reduction coefficient was set to 10%, which the literature states gives the most realistic approach. There are various failure theories within this method, which is called progressive failure analysis. Because the Hashin theory did not match our experimental results, the Puck progressive damage method was used in all coupon analyses.
When the experimental and numerical results are compared, the initial damage points, the resulting force drop points, the maximum damage load values, and the bearing strength value are very close. Furthermore, low error rates and similar damage patterns were obtained in both the test and simulation models. In addition, the effects of various parameters on the bearing strength of the composite structure were investigated, such as pre-stress, the use of bushings, the ratio of the distance between the bolt hole center and the plate edge to the hole diameter (E/D), the ratio of plate width to hole diameter (W/D), and hot-wet environmental conditions.
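The property-degradation scheme at the heart of progressive failure analysis can be illustrated with a scalar toy model. The real analyses apply the Puck criterion ply by ply inside a 3D ANSYS model; here a simple stress threshold stands in for the failure criterion, the strength and modulus values are hypothetical, and the 10% coefficient from the abstract is interpreted as retaining 10% of the intact property after failure.

```python
# Toy sketch of progressive failure via material property degradation;
# strength, modulus, and the scalar failure check are all assumptions.

RETENTION = 0.10        # degraded property = 10% of the intact value (assumed reading)
STRENGTH = 400.0        # hypothetical ply strength, MPa
E_INTACT = 50_000.0     # hypothetical ply modulus, MPa

def load_until_failure(loads_mpa):
    """Step through a load history; on first failure, degrade the modulus.

    Returns (modulus history per step, step index of first failure or None).
    """
    modulus, history, failed_at = E_INTACT, [], None
    for step, stress in enumerate(loads_mpa):
        if failed_at is None and stress > STRENGTH:
            modulus *= RETENTION  # hypothetical stiffness knock-down
            failed_at = step
        history.append(modulus)
    return history, failed_at

history, failed_at = load_until_failure([100, 250, 420, 450])
print(f"failure at step {failed_at}, degraded modulus = {history[-1]:.0f} MPa")
```

In a full analysis, this degrade-and-reload loop repeats element by element until the load-displacement curve drops, which is how the simulated force-drop points compared against the tests are obtained.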

Keywords: puck, finite element, bolted joint, composite

Procedia PDF Downloads 102