Search results for: PMS code
188 Artificial Neural Network Based Parameter Prediction of Miniaturized Solid Rocket Motor
Authors: Hao Yan, Xiaobing Zhang
Abstract:
The working mechanism of miniaturized solid rocket motors (SRMs) is not yet fully understood. It is imperative to explore its unique features. However, there are many disadvantages to using common multi-objective evolutionary algorithms (MOEAs) in predicting the parameters of the miniaturized SRM during its conceptual design phase. Initially, the design variables and objectives are constrained in a lumped parameter model (LPM) of this SRM, which leads to local optima in MOEAs. In addition, MOEAs require a large number of calculations due to their population strategy. Although the calculation time for simulating an LPM just once is usually less than that of a CFD simulation, the number of function evaluations (NFEs) is usually large in MOEAs, which makes the total time cost unacceptably long. Moreover, the accuracy of the LPM is relatively low compared to that of a CFD model due to its assumptions. CFD simulations or experiments are required for comparison and verification of the optimal results obtained by MOEAs with an LPM. The conceptual design phase based on MOEAs is a lengthy process, and its results are not precise enough due to the above shortcomings. An artificial neural network (ANN) based parameter prediction is proposed as a way to reduce time costs and improve prediction accuracy. In this method, an ANN is used to build a surrogate model that is trained with a 3D numerical simulation. In design, the original LPM is replaced by a surrogate model. Each case uses the same MOEAs, in which the calculation time of the two models is compared, and their optimization results are compared with 3D simulation results. Using the surrogate model for the parameter prediction process of the miniaturized SRMs results in a significant increase in computational efficiency and an improvement in prediction accuracy. Thus, the ANN-based surrogate model does provide faster and more accurate parameter prediction for an initial design scheme. 
Moreover, even when the MOEAs converge to local optima, the time cost of the ANN-based surrogate model is much lower than that of the simplified physical model LPM. This means that designers can save a lot of time during code debugging and parameter tuning in a complex design process. Designers can reduce repeated calculation costs and obtain accurate optimal solutions by combining an ANN-based surrogate model with MOEAs.
Keywords: artificial neural network, solid rocket motor, multi-objective evolutionary algorithm, surrogate model
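The surrogate workflow the abstract describes (sample the expensive model offline, fit a cheap approximation, and let the optimizer query only that approximation) can be sketched minimally in Python. The quadratic "expensive" function and the piecewise-linear surrogate below are illustrative stand-ins for the paper's 3D simulation and ANN:

```python
import bisect

# Hypothetical "expensive" objective standing in for the paper's 3D
# simulation / lumped parameter model of the SRM (illustrative only).
def expensive_model(x):
    return (x - 0.3) ** 2 + 0.1 * x

# Offline stage: sample the expensive model once to get training data.
xs = [i / 20 for i in range(21)]          # 21 samples on [0, 1]
ys = [expensive_model(x) for x in xs]

# Cheap surrogate: piecewise-linear interpolation over the samples
# (a stand-in for the trained ANN; the workflow is identical).
def surrogate(x):
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

# The optimizer would now query only the surrogate.
queries = [0.123, 0.456, 0.789]
max_error = max(abs(surrogate(q) - expensive_model(q)) for q in queries)
```

In the real setup the MOEA would call surrogate() thousands of times, so the one-off cost of sampling the expensive model is amortized over the whole search.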
Procedia PDF Downloads 89
187 Health Psychology Intervention: Identifying Early Symptoms in Neurological Disorders
Authors: Simon B. N. Thompson
Abstract:
An early indicator of neurological disease has been proposed by the expanded Thompson Cortisol Hypothesis, which suggests that yawning is linked to rises in cortisol levels. Cortisol is essential to the regulation of the immune system, and pathological yawning is a symptom of multiple sclerosis (MS). Electromyography (EMG) activity in the jaw muscles typically rises when the muscles are moved (extended or flexed), and yawning has been shown to be highly correlated with cortisol levels in healthy people. It is likely that these elevated cortisol levels are also seen in people with MS. The possible link between EMG in the jaw muscles and rises in saliva cortisol levels during yawning was investigated in a randomized controlled trial of 60 volunteers aged 18-69 years who were exposed to conditions designed to elicit the yawning response. Saliva samples were collected at the start and after yawning, or at the end of the presentation of yawning-provoking stimuli in the absence of a yawn, and EMG data were additionally collected during rest and yawning phases. The Hospital Anxiety and Depression Scale, the Yawning Susceptibility Scale, the General Health Questionnaire, and demographic and health details were collected, and the following exclusion criteria were adopted: chronic fatigue, diabetes, fibromyalgia, heart condition, high blood pressure, hormone replacement therapy, multiple sclerosis, and stroke. Significant differences were found between the saliva cortisol samples for the yawners, t(23) = -4.263, p < 0.001, whereas the corresponding rest-to-post-stimuli comparison for the non-yawners was non-significant. There were also significant differences between yawners and non-yawners in the EMG potentials, with the yawners having higher rest and post-yawning potentials. Significant evidence was found to support the Thompson Cortisol Hypothesis, suggesting that rises in cortisol levels are associated with the yawning response.
Further research is underway to explore the use of cortisol as a potential diagnostic tool to assist the early diagnosis of symptoms related to neurological disorders. Bournemouth University Research & Ethics approval granted: JC28/1/13-KA6/9/13. Professional code of conduct, confidentiality, and safety issues have been addressed and approved in the Ethics submission. Trials identification number: ISRCTN61942768. http://www.controlled-trials.com/isrctn/
Keywords: cortisol, electromyography, neurology, yawning
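The reported statistic, t(23) = -4.263, comes from a paired-samples t-test on pre- and post-stimulus cortisol. A minimal sketch of that computation, on synthetic numbers rather than the study's data:

```python
import math

# Paired-samples t-test (the statistic reported in the abstract).
# The values below are synthetic, not the study's measurements.
pre  = [10.2, 9.8, 11.1, 10.5, 9.9, 10.7, 10.1, 10.4]
post = [11.0, 10.9, 11.8, 11.2, 10.6, 11.5, 10.9, 11.3]

diffs = [a - b for a, b in zip(pre, post)]
n = len(diffs)
mean_d = sum(diffs) / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
t_stat = mean_d / math.sqrt(var_d / n)                   # df = n - 1
```

A large negative t, as here and in the abstract, indicates post-stimulus cortisol consistently above the baseline samples.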
Procedia PDF Downloads 589
186 Performance of High Efficiency Video Codec over Wireless Channels
Authors: Mohd Ayyub Khan, Nadeem Akhtar
Abstract:
Due to recent advances in wireless communication technologies and hand-held devices, there is a huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, and interactive video games. However, raw video possesses very high bandwidth, which makes compression a must before its transmission over wireless channels. The High Efficiency Video Codec (HEVC, also called H.265) is the latest state-of-the-art video coding standard, developed jointly by ITU-T and ISO/IEC. HEVC targets high-resolution videos, such as 4K or 8K, that can fulfil the recent demands for video services. The compression ratio achieved by HEVC is twice that of its predecessor H.264/AVC at the same quality level. Compression efficiency is generally increased by removing more correlation between frames/pixels using complex techniques such as extensive intra- and inter-prediction. As more correlation is removed, the interdependency among coded bits increases. Thus, bit errors may have a large effect on the reconstructed video; sometimes even a single bit error can lead to catastrophic failure of the reconstructed video. In this paper, we study the performance of the HEVC bitstream over an additive white Gaussian noise (AWGN) channel. Moreover, HEVC over Quadrature Amplitude Modulation (QAM) combined with forward error correction (FEC) schemes is also explored over the noisy channel. The video is encoded using HEVC, and the coded bitstream is channel coded to provide some redundancy. The channel-coded bitstream is then modulated using QAM and transmitted over the AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream, which is then used to reconstruct the video using the HEVC decoder.
It is observed that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video degrades drastically. Using proper FEC codes, the quality of the video can be restored to a certain extent. Thus, the performance analysis of HEVC presented in this paper may assist in designing the optimized code rate of the FEC such that the quality of the reconstructed video is maximized over wireless channels.
Keywords: AWGN, forward error correction, HEVC, video coding, QAM
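The transmission chain described above (bitstream, QAM mapping, AWGN, hard demodulation) can be sketched for the simplest QAM constellation, Gray-mapped 4-QAM (QPSK). The bitstream here is random rather than an actual HEVC output, and no FEC stage is included:

```python
import math
import random

random.seed(42)

# Stand-in for a coded bitstream (a real HEVC bitstream would go here).
bits = [random.randint(0, 1) for _ in range(20000)]

def modulate(b0, b1):
    # Gray-mapped QPSK: one bit per I/Q rail, unit-energy symbols.
    return ((1 - 2 * b0) / math.sqrt(2), (1 - 2 * b1) / math.sqrt(2))

snr_db = 8.0
snr = 10 ** (snr_db / 10)
sigma = math.sqrt(1 / (2 * snr))   # AWGN std dev per dimension

errors = 0
for i in range(0, len(bits), 2):
    si, sq = modulate(bits[i], bits[i + 1])
    ri = si + random.gauss(0, sigma)    # AWGN channel
    rq = sq + random.gauss(0, sigma)
    # Hard decision: negative rail -> bit 1, positive rail -> bit 0.
    errors += int((ri < 0) != (bits[i] == 1))
    errors += int((rq < 0) != (bits[i + 1] == 1))

ber = errors / len(bits)
```

At 8 dB symbol SNR this yields a raw bit error rate on the order of 10^-3 to 10^-2; an FEC layer inserted before modulation would push the post-decoding error rate far lower, which is exactly the trade-off the paper analyzes.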
Procedia PDF Downloads 148
185 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)-Graphics Processing Unit (GPU) Heterogeneous Computing
Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou
Abstract:
The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, the embedded system-on-a-chip (SoC) contains a coarse-granularity multi-core CPU (central processing unit) and a mobile GPU (graphics processing unit) that can be used as general-purpose accelerators. The motivation is that algorithms of various parallel characteristics can be efficiently mapped to the heterogeneous architecture coupling these three processors. The CPU and GPU offload partial computationally intensive tasks from the FPGA to reduce resource consumption and lower the overall cost of the system. However, in present common scenarios, applications utilize only one type of accelerator because the development approaches supporting the collaboration of heterogeneous processors face challenges. A systematic approach is therefore needed that offers write-once-run-anywhere portability and high execution performance for the modules mapped to various architectures, and that facilitates the exploration of the design space. In this paper, a servant-execution-flow model is proposed for abstracting the cooperation of the heterogeneous processors; it supports task partition, communication, and synchronization. At its first run, the intermediate language, represented by a data flow diagram, can generate the executable code of the target processor or can be converted into high-level programming languages. Instantiation parameters efficiently control the relationship between the modules and the computational units, including the mapping of the two hierarchical processing units and the adjustment of data-level parallelism. An embedded system for a three-dimensional waveform oscilloscope is selected as a case study, and the performance of algorithms such as contrast stretching is analyzed across implementations on various combinations of these processors.
The experimental results show that the heterogeneous computing system achieves, with less than 35% of the resources, performance and energy efficiency similar to the pure-FPGA implementation.
Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation
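Contrast stretching, one of the case-study kernels mentioned in the abstract, is a purely data-parallel map over pixels, which is what makes it a good candidate for GPU or FPGA offload. A minimal CPU reference version (illustrative, not the paper's implementation):

```python
# Contrast stretching: linearly rescale pixel intensities so the image
# spans the full output range (0..255 by default). Each output pixel
# depends only on its input pixel plus two global reductions (min, max),
# so the map phase parallelizes trivially across compute units.
def contrast_stretch(pixels, out_min=0, out_max=255):
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                       # flat image: nothing to stretch
        return [out_min] * len(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

img = [50, 60, 70, 80, 90, 100]        # toy 1-D "image"
stretched = contrast_stretch(img)
```

In the heterogeneous setting, the min/max reduction and the per-pixel map would be the units partitioned among CPU, GPU, and FPGA by the servant-execution-flow model.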
Procedia PDF Downloads 116
184 Factors Influencing Telehealth Services for Diabetes Care in Nepal: A Mixed Method Study
Authors: Sumitra Sharma, Christina Parker, Kathleen Finlayson, Clint Douglas, Niall Higgins
Abstract:
Background: Telehealth services have the potential to increase the accessibility, utilization, and effectiveness of healthcare services. As telehealth services are yet to be integrated into regular hospital services in Nepal, their use among adults with diabetes is scarce. Prior to implementing telehealth services for adults with diabetes, it is necessary to examine their influencing factors. Objective: This study aimed to investigate factors influencing telehealth services for diabetes care in Nepal. Methods: This study used a mixed-methods design comprising a cross-sectional survey among adults with diabetes and semi-structured interviews with key healthcare professionals in Nepal. The study was conducted in the medical out-patient department of a tertiary hospital in Nepal. The survey adapted a previously validated questionnaire, while the semi-structured interview questions were developed from a literature review and expert consultation. All interviews were audio-recorded, and inductive content analysis was used to code transcripts and develop themes. For the survey, descriptive analysis, the chi-square test, and the Mann-Whitney U test were used to analyze the data. Results: One hundred adults with diabetes participated in the survey, and seven healthcare professionals were recruited for interviews. In the survey, just over half of the participants (53%) were male, and the others were female. Almost all participants (98%) owned a mobile phone, and 67% of them had a computer with internet access at home. The majority of participants had experience using Facebook Messenger (95%), followed by Viber (60%) and Zoom (26%). Almost all of the participants (96%) were willing to use telehealth services. Significant associations were found between willingness to use telehealth services and female sex, and between willingness and living 10 km away from the hospital.
There was also a significant association between participants' self-perception of good health status and their willingness to use video-conference and phone calls for telehealth services. Seven themes were developed from the interview data, relating to the predisposing, reinforcing, and enabling factors influencing telehealth services for diabetes care in Nepal. Conclusion: In summary, several factors were found to influence the use of telehealth services for diabetes care in Nepal. These factors need to be considered for the effective implementation of sustainable telehealth services for adults with diabetes in Nepal.
Keywords: contributing factors, diabetes mellitus, developing countries, telemedicine, telecare
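The survey arm relies on the chi-square test of association on contingency tables (e.g., sex vs. willingness to use telehealth). A minimal Pearson chi-square on a 2x2 table; the counts below are illustrative, not the study's data:

```python
# Pearson chi-square test of association on a 2x2 contingency table.
# Rows: sex; columns: willing / not willing. Counts are made up for
# illustration and do not reproduce the study's results.
table = [[30, 23],   # male: willing / not willing
         [44, 3]]    # female: willing / not willing

row = [sum(r) for r in table]            # row marginals
col = [sum(c) for c in zip(*table)]      # column marginals
total = sum(row)

# chi2 = sum over cells of (observed - expected)^2 / expected,
# where expected[i][j] = row[i] * col[j] / total.
chi2 = sum(
    (table[i][j] - row[i] * col[j] / total) ** 2 / (row[i] * col[j] / total)
    for i in range(2) for j in range(2)
)
```

With 1 degree of freedom the 5% critical value is 3.84, so a statistic this large would indicate a significant association between the two variables.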
Procedia PDF Downloads 70
183 Mobile Phone Text Reminders and Voice Call Follow-ups Improve Attendance for Community Retail Pharmacy Refills; Learnings from Lango Sub-region in Northern Uganda
Authors: Jonathan Ogwal, Louis H. Kamulegeya, John M. Bwanika, Davis Musinguzi
Abstract:
Introduction: Community retail pharmacy drug distribution points (CRPDDP) were implemented in the Lango sub-region as part of the Ministry of Health's response to improving access and adherence to antiretroviral treatment (ART). Clients received their ART refills from nearby local pharmacies, hence the need for continuous engagement through mobile phone appointment reminders and health messages. We share learnings from the implementation of mobile text reminders and voice call follow-ups among ART clients attending the CRPDDP program in northern Uganda. Methods: A retrospective review of electronic medical records from the four pharmacies allocated for CRPDDP in the Lira and Apac districts of the Lango sub-region was done from February to August 2022. The process involved collecting phone contacts of eligible clients from the health facility appointment register and uploading them onto a messaging platform built on RapidPro, an open-source software. Client information, including code name, phone number, next appointment date, and the allocated pharmacy for the ART refill, was collected and kept confidential. Contacts received appointment reminder messages and other messages on positive living as an ART client. Routine voice call follow-ups were done to ascertain the picking of ART from the refill pharmacy. Findings: In total, 1,354 clients were reached from the four allocated pharmacies, all found in urban centers. 972 clients received short message service (SMS) appointment reminders, and 382 were followed up through voice calls. The majority (75%) of the clients returned for refills on the appointed date, 20% returned within four days after the appointment date, and the remaining 5% needed follow-up; these reported that they were not in the district on the appointment date due to other engagements.
Conclusion: The use of mobile text reminders and voice call follow-ups improves the attendance of community retail pharmacy refills.
Keywords: antiretroviral treatment, community retail drug distribution points, mobile text reminders, voice call follow-up
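The reminder logic described in the Methods can be sketched as a small scheduling routine: each client gets an SMS a few days before the refill date, and clients whose pickup is unconfirmed are queued for a voice call. All field names and the three-day lead time below are hypothetical, not details of the program's RapidPro flows:

```python
from datetime import date, timedelta

# Toy client register: code name, next refill appointment, and whether
# the pharmacy has confirmed the pickup. Field names are hypothetical.
clients = [
    {"code": "LR-001", "next_refill": date(2022, 6, 10), "picked_up": True},
    {"code": "AP-014", "next_refill": date(2022, 6, 12), "picked_up": False},
]

def sms_date(refill, days_before=3):
    # Schedule the reminder SMS a few days ahead of the appointment.
    return refill - timedelta(days=days_before)

# SMS schedule for every client; voice-call queue for unconfirmed pickups.
reminders = {c["code"]: sms_date(c["next_refill"]) for c in clients}
needs_call = [c["code"] for c in clients if not c["picked_up"]]
```

In the actual program this scheduling and sending would live inside the RapidPro messaging platform rather than in ad hoc code.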
Procedia PDF Downloads 97
182 Selenuranes as Cysteine Protease Inhibitors: Theoretical Investigation on Model Systems
Authors: Gabriela D. Silva, Rodrigo L. O. R. Cunha, Mauricio D. Coutinho-Neto
Abstract:
In the last four decades, the biological activities of selenium compounds have received great attention, particularly for hypervalent derivatives of selenium (IV) used as enzyme inhibitors. The unregulated activity of cysteine proteases is related to the development of several pathologies, such as neurological disorders, cardiovascular diseases, obesity, rheumatoid arthritis, cancer, and parasitic infections. These enzymes are therefore a valuable target for designing new small-molecule inhibitors such as selenuranes. Even though there have been advances in the synthesis and design of new selenurane-based inhibitors, little is known about their mechanism of action. It is a given that inhibition occurs through the reaction between the thiol group of the enzyme and the chalcogen atom. However, several open questions remain about the nature of the mechanism (associative vs. dissociative) and about the nature of the reactive species in solution under physiological conditions. In this work, we performed a theoretical investigation on model systems to study the possible routes of the substitution reactions. Among the nucleophiles present in biological systems, our interest centers on the thiol groups of the cysteine proteases and the hydroxyls of the aqueous environment. We therefore expect this study to clarify the possibility of a reaction route in two stages, the first consisting of the substitution of the chlorine atoms by hydroxyl groups, followed by the replacement of these hydroxyl groups by thiol groups in the selenuranes. The structures of the selenuranes and nucleophiles were optimized using density functional theory with the B3LYP functional and a 6-311+G(d) basis set. Solvent was treated using the IEFPCM method as implemented in the Gaussian 09 code. Our results indicate that water reacts preferentially with the selenuranes via hydrolysis, after which the hydroxyl groups are replaced by thiol groups.
The computed reaction energies are -106.07 kcal/mol for the double substitution by hydroxyl groups and 96.63 kcal/mol for thiol groups. Solvation and pH reduction promote this route, raising the reaction energy for the hydroxyl group to -50.76 kcal/mol and decreasing that for the thiol group to 7.92 kcal/mol. Alternative pathways were analyzed for monosubstitution (considering the competition between Cl, OH, and SH groups), and they suggest the same route. Similar results were obtained for the aliphatic and aromatic selenuranes studied.
Keywords: chalcogens, computational study, cysteine proteases, enzyme inhibitors
Procedia PDF Downloads 299
181 Pakistan’s Counterinsurgency Operations: A Case Study of Swat
Authors: Arshad Ali
Abstract:
The Taliban insurgency in Swat, which started apparently as a social movement in 2004, transformed into an anti-Pakistan Islamist insurgency by joining hands with the Tehrik-e-Taliban Pakistan (TTP) upon its formation in 2007. It quickly spread beyond Swat by 2009, making Swat the second stronghold of the TTP after FATA. This prompted the Pakistan military to launch a full-scale counterinsurgency operation, code-named Rah-i-Rast, to regain control of Swat. Operation Rah-i-Rast was successful not only in restoring the writ of the state but, more importantly, in creating a consensus against the spread of the Taliban insurgency in Pakistan at the political, social, and military levels. This operation became a test case for the civilian government and military in seeking a sustainable solution to the TTP insurgency in north-west Pakistan. This study analyzes why the counterinsurgency operation Rah-i-Rast succeeded and why the previous ones failed. The study also explores the factors that created consensus against the Taliban insurgency at the political and social levels, as well as the reasons that hindered such a consensual approach in the past. The study argues that the previous initiatives failed due to various factors, including the Pakistan army's lack of a comprehensive counterinsurgency model, weak political will and public support, and state negligence. Also, the initial counterinsurgency policies were ad hoc in nature, fluctuating between military operations and peace deals. After continuous failure, the military revisited its approach in operation Rah-i-Rast. The security forces learnt from their past experiences and developed a pragmatic counterinsurgency model: 'clear, hold, build, and transfer.' The military also adopted a population-centric approach to provide security to the local people. This case study of Swat evaluates the strengths and weaknesses of Pakistan's counterinsurgency operations as well as its peace agreements.
It analyzes operation Rah-i-Rast in the light of David Galula's model of counterinsurgency. Unlike the existing literature, the study underscores the bottom-up approach adopted by Pakistan's military and government in engaging the local population to sustain post-operation stability in Swat. More specifically, the study emphasizes the hybrid counterinsurgency model 'clear, hold, build, and transfer' in Swat.
Keywords: insurgency, counterinsurgency, clear, hold, build, transfer
Procedia PDF Downloads 363
180 ANSYS FLUENT Simulation of Natural Convection and Radiation in a Solar Enclosure
Authors: Sireetorn Kuharat, Anwar Beg
Abstract:
In this study, the multi-mode heat transfer characteristics of spacecraft solar collectors are investigated computationally. Two-dimensional steady-state incompressible laminar Newtonian viscous convective-radiative heat transfer is simulated in a rectangular solar collector geometry. The ANSYS FLUENT finite volume code (version 17.2) is employed to simulate the thermo-fluid characteristics. Several radiative transfer models available in the ANSYS workbench are employed, including the classical Rosseland flux model and the more elegant P1 flux model. Mesh-independence tests are conducted. The simulations are validated against a computational Harlow-Welch MAC (Marker and Cell) finite difference method, with excellent correlation. The influence of aspect ratio, Prandtl number (Pr), Rayleigh number (Ra), and radiative flux model on temperature, isotherms, velocity, and pressure is evaluated and visualized in color plots. Additionally, the local convective heat flux is computed, and solutions are compared with the MAC solver for various buoyancy effects (e.g., Ra = 10,000,000), achieving excellent agreement. The P1 model is shown to better predict the actual influence of solar radiative flux on thermal fluid behavior compared with the limited Rosseland model. With increasing Rayleigh number, the hot zone emanating from the base of the collector is found to penetrate deeper into the collector and rises symmetrically, dividing into two vortex regions at very high buoyancy (Ra > 100,000). With increasing Prandtl number (three gas cases are examined: a hydrogen gas mixture, air, and ammonia gas), there is also a progressive incursion of the hot zone at the solar collector base higher into the collector space and, simultaneously, a greater asymmetry of the dual isothermal zones.
With increasing aspect ratio (a wider base relative to the height of the solar collector geometry), there is a greater thermal convection pattern around the whole geometry, higher temperatures, and the elimination of the cold upper zone associated with lower aspect ratios.
Keywords: thermal convection, radiative heat transfer, solar collector, Rayleigh number
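The buoyancy parameter varied in the study is the Rayleigh number, Ra = g*beta*dT*L^3/(nu*alpha), with the Prandtl number Pr = nu/alpha distinguishing the three working gases. A quick evaluation for air-like properties (all values illustrative, not the paper's cases):

```python
# Rayleigh and Prandtl numbers for a buoyant enclosure flow.
# Property values are roughly those of air near 300 K; the temperature
# difference and enclosure height are assumed for illustration.
g     = 9.81      # m/s^2, gravitational acceleration
beta  = 1 / 300   # 1/K, ideal-gas thermal expansion coefficient
dT    = 20.0      # K, base-to-top temperature difference (assumed)
L     = 0.1       # m, enclosure height (assumed)
nu    = 1.6e-5    # m^2/s, kinematic viscosity
alpha = 2.2e-5    # m^2/s, thermal diffusivity

Ra = g * beta * dT * L**3 / (nu * alpha)
Pr = nu / alpha
```

For these values Ra is of order 10^6, i.e. well into the regime where the abstract reports the hot zone splitting into two vortex regions (Ra > 100,000).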
Procedia PDF Downloads 117
179 Formal Development of Electronic Identity Card System Using Event-B
Authors: Tomokazu Nagata, Jawid Ahmad Baktash
Abstract:
The goal of this paper is to explore the use of formal methods for an Electronic Identity Card System. Nowadays, one of the core research directions in a constantly growing distributed environment is the improvement of the communication process, and the responsibility for proper verification becomes crucial. Formal methods can play an essential role in the development and testing of systems. The paper presents two different methodologies for assessing correctness. Our first approach employs abstract interpretation techniques to create a trace-based model of the Electronic Identity Card System. The model was used to build a semi-decidable procedure for verifying the system model. We also developed the code for the eID system, covering three parts: login to the system with sending of an acknowledgment from the user side, receiving of all information from the server side, and logout from the system. The new concepts of impasse and spawned sessions that we introduced led our research to original statements about the intruder's knowledge and about eID system coding with respect to secrecy. Furthermore, we demonstrated that there is a bound on the number of sessions needed for the analysis of the system. Electronic identity (eID) cards promise to supply a universal, nation-wide mechanism for user authentication. Most European countries have started to deploy eID for government and private sector applications. Are government-issued electronic ID cards the proper way to authenticate users of online services? We use the eID project as a showcase to discuss eID from an application perspective. The new eID card has interesting design features: it is contactless, it aims to protect people's privacy to the extent possible, and it supports cryptographically strong mutual authentication between users and services. Privacy features include support for pseudonymous authentication and per-service controlled access to individual data items.
The article discusses key concepts, the eID infrastructure, observed and expected problems, and open questions. The core technology seems ready for prime time, and government projects are deploying it to the masses, but application issues may hamper eID adoption for online applications.
Keywords: eID, event-B, ProB, formal method, message passing
Procedia PDF Downloads 233
178 Illness Roles and Coping Strategies in Aged Patients on Hemodialysis in Lahore
Authors: Zainab Bashir
Abstract:
There has been a lot of quantitative research on end-stage renal disease (ESRD), its implications, psychological effects, and so on across the world; however, little qualitative information is available on coping strategies and illness-role adaptations specific to renal failure. This article attempts to learn about illness roles and coping strategies specific to aged ESRD patients on hemodialysis in Lahore. The patients were interviewed on a structured schedule and were asked questions on tasks and coping related to the physical, psychological, and social consequences of renal failure. Standardised techniques and methods of grounded theory were used to analyse and code the information in this small-scale, in-depth study. The tasks faced by the ESRD patients and the coping they employ to fulfil or overcome those tasks were analysed. This analysis was based on three different types of data: experiential accounts of ESRD patients with respect to tasks and strategies for coping, typologies of coping styles and illness roles, and monographs of coping styles. In the information gathered through interviews with respondents, three styles of problem-focused coping and two styles of emotion-focused coping could be identified. Problem-focused coping included making physical adjustments to suit the requirements of the health condition, including dialysis and the medical regime as an integral part of patients' lives, and altering future plans according to the course of the disease. Emotion-focused coping included seeking help to manage stress/anxiety, and resenting the disease condition and giving up. These coping styles are linked to the illness roles assigned to the respondents. In conclusion, there is no single formula to deal with the disease; however, some typologies can be established.
In most of the cases discussed in the paper, adjustment to a regular dialysis routine, restriction in bodily function, inability to work, and negative impacts on family life, especially spousal relationships, have come to the fore as common problems. A large part of coping with these problems had to do with mentally accepting the disease and carrying on despite it. These cannot be seen as deviant adaptations to the depressive situation arising from renal failure, but rather as patterned ways in which patients can approximate a close-to-normal lifestyle despite the terminal disease.
Keywords: coping strategies, ESRD patients, hemodialysis, illness roles
Procedia PDF Downloads 122
177 Comparison of Hydrogen and Electrification Perspectives in Decarbonizing the Transport Sector
Authors: Matteo Nicoli, Gianvito Colucci, Valeria Di Cosmo, Daniele Lerede, Laura Savoldi
Abstract:
The transport sector is currently responsible for approximately one-third of greenhouse gas emissions in Europe. In the wider context of achieving carbon neutrality of the global energy system, different alternatives are available to decarbonize the transport sector. In particular, while electricity is already the most consumed energy commodity in rail transport, battery electric vehicles are one of the zero-emission options on the market for road transportation. On the other hand, hydrogen-based fuel cell vehicles are available for road and non-road vehicles. The European Commission is strongly pushing toward the integration of hydrogen into the energy systems of European countries and its widespread adoption as an energy vector to achieve the Green Deal targets. Furthermore, the Italian government is defining hydrogen-related objectives with the publication of a dedicated Hydrogen Strategy. The adoption of energy system optimization models to study the possible penetration of alternative zero-emission transport technologies gives the opportunity to perform an overall analysis of the effects that the development of innovative technologies has on the entire energy system, including the supply side devoted to the production of energy carriers such as hydrogen and electricity. Using an open-source modeling framework such as TEMOA, this work aims to compare the role of hydrogen and electric vehicles in the decarbonization of the transport sector. The analysis investigates the advantages and disadvantages of adopting the two options from the economic point of view (the costs associated with each) and the environmental one (their emission reduction prospects). Moreover, an analysis of the profitability of the investments in hydrogen and electric vehicles is performed.
The study investigates the evolution of energy consumption and greenhouse gas emissions in different transportation modes (road, rail, navigation, and aviation) through a detailed analysis of the full range of vehicles included in the techno-economic database used in the TEMOA model instance adopted for this work. The transparency of the analysis is guaranteed by the accessibility of the TEMOA models, which are based on an open-access source code and databases.
Keywords: battery electric vehicles, decarbonization, energy system optimization models, fuel cell vehicles, hydrogen, open-source modeling, TEMOA, transport
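An energy system optimization model of the TEMOA kind chooses, at minimum total cost, which technologies satisfy a demand subject to policy constraints. The toy below captures that logic for a single transport demand split between battery electric and fuel cell vehicles; every number is an illustrative placeholder, not a TEMOA database value:

```python
# Toy capacity-choice problem in the spirit of an energy system
# optimization model: meet a fixed transport demand with a mix of
# battery electric (BEV) and fuel-cell (FCEV) vehicles at minimum cost,
# subject to a policy floor on hydrogen vehicles. All numbers are
# illustrative placeholders.
demand = 100                              # vehicle-units of service
cost = {"bev": 1.0, "fcev": 1.3}          # relative cost per unit
fcev_min = 20                             # assumed policy constraint

best = None
for n_fcev in range(fcev_min, demand + 1):
    n_bev = demand - n_fcev
    total = n_bev * cost["bev"] + n_fcev * cost["fcev"]
    if best is None or total < best[0]:
        best = (total, n_bev, n_fcev)

total_cost, n_bev, n_fcev = best
```

TEMOA solves the analogous problem as a large linear program over many periods, regions, and commodities; the brute-force loop here only illustrates the cost-minimization principle the comparison of the two options rests on.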
Procedia PDF Downloads 110
176 Intersections and Cultural Landscape Interpretation, in the Case of Ancient Messene in the Peloponnese
Authors: E. Maistrou, P. Themelis, D. Kosmopoulos, K. Boulougoura, A. M. Konidi, K. Moretti
Abstract:
InterArch is an ongoing research project, running since September 2020, that aims to propose a digital application for the enhancement of the cultural landscape, emphasizing the contribution of physical space and time to digital data organization. The research case study refers to Ancient Messene in the Peloponnese, one of the most important archaeological sites in Greece. The project integrates an interactive approach to the natural environment, aiming at a manifold sensory experience. It combines the physical space of the archaeological site with the digital space of archaeological and cultural data while, at the same time, embracing storytelling processes through an interdisciplinary approach that familiarizes the user with multiple semantic interpretations. The research project is co-financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation, under the call RESEARCH - CREATE - INNOVATE (project code: Τ2ΕΔΚ-01659). It involves mutual collaboration between academic and cultural institutions and the contribution of an IT applications development company. New technologies and the integration of digital data enable the implementation of non-linear narratives related to the representational characteristics of the art of collage. Various images (photographs, drawings, etc.) and sounds (narrations, music, soundscapes, audio signs, etc.) could be presented, according to our proposal, through the new semiotics of augmented and virtual reality technologies applied on touch screens and smartphones. Despite the fragmentation of tangible or intangible references, material landscape formations, including archaeological remains, constitute the common ground that can inspire cultural narratives in a process that unfolds personal perceptions and collective imaginaries. It is in this context that the cultural landscape may be considered an indication of spatial and historical continuity.
It is in this context that history could emerge, according to our proposal, not solely as a previous inscription but also as an actual happening: a rhythm of occurrences suggesting mnemonic references and, moreover, an evolving history projected onto the contemporary cultural landscape.
Keywords: cultural heritage, digital data, landscape, archaeological sites, visitors’ itineraries
Procedia PDF Downloads 78
175 Thermal Analysis of Adsorption Refrigeration System Using Silicagel–Methanol Pair
Authors: Palash Soni, Vivek Kumar Gaba, Shubhankar Bhowmick, Bidyut Mazumdar
Abstract:
Refrigeration technology is a fast-developing field at present, since it has very wide application in both domestic and industrial areas. It has progressed from simple ice coolers used to store foodstuffs to today's sophisticated cold storages and air conditioning systems. A variety of techniques are used to bring the temperature down below ambient. Adsorption refrigeration technology is a novel, advanced, and promising technique developed in the past few decades. It has gained attention due to its attractive ability to exploit unlimited natural sources such as solar energy, geothermal energy, or even waste heat recovered from plants or from the exhaust of locomotives to fulfill its energy need. This reduces the exploitation of non-renewable resources and hence reduces pollution too. This work aims to develop a model for a solar adsorption refrigeration system and to simulate it for different operating conditions. In this system, the mechanical compressor is replaced by a thermal compressor. The thermal compressor uses renewable energy, such as solar or geothermal energy, which makes it useful for areas where electricity is not available. Refrigerants in common use, such as chlorofluorocarbons/perfluorocarbons, have harmful effects such as ozone depletion and greenhouse warming. Another advantage of adsorption systems is that they can replace these refrigerants with less harmful natural refrigerants such as water, methanol, and ammonia. Thus, the double benefit of reduced energy consumption and reduced pollution can be achieved. A thermodynamic model was developed for the proposed adsorber, and a universal MATLAB code was used to simulate the model. Simulations were carried out for different operating conditions for the silicagel–methanol working pair. 
Various graphs are plotted relating regeneration temperature, adsorption capacity, coefficient of performance, desorption rate, specific cooling power, adsorption/desorption times, and mass. The results show that an adsorption system can be installed successfully for refrigeration purposes, as it offers savings in power consumption and a reduction in carbon emissions, even though its efficiency is lower than that of conventional systems. The model was tested for a cold-storage refrigeration application with a cooling load of 12 TR.
Keywords: adsorption, refrigeration, renewable energy, silicagel-methanol
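As a rough illustration of the kind of quantity such a thermodynamic model produces, the ideal-cycle coefficient of performance (COP) can be sketched as the ratio of evaporator cooling delivered to regeneration heat supplied. Every property value below is an illustrative order-of-magnitude assumption for a silicagel–methanol pair, not data taken from the study.

```python
# Minimal sketch of an ideal-cycle COP balance for an adsorption chiller.
# All property values are illustrative assumptions, not the study's data.

def adsorption_cop(dx, h_fg, cp_bed, dT_regen, q_ads):
    """COP = evaporator cooling delivered / regeneration heat input.

    dx       : cycled uptake (kg methanol per kg silica gel)
    h_fg     : latent heat of evaporation of methanol (J/kg)
    cp_bed   : effective specific heat of the sorbent bed (J/(kg*K))
    dT_regen : temperature swing of the bed during regeneration (K)
    q_ads    : heat of adsorption (J per kg methanol)
    """
    q_evap = dx * h_fg                       # cooling per kg of sorbent
    q_in = cp_bed * dT_regen + dx * q_ads    # sensible heating + desorption heat
    return q_evap / q_in

cop = adsorption_cop(dx=0.15, h_fg=1.1e6, cp_bed=900.0, dT_regen=60.0, q_ads=1.8e6)
print(f"ideal-cycle COP ~ {cop:.2f}")
```

A COP well below one is consistent with the abstract's observation that the system's efficiency is lower than that of conventional vapor-compression systems.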
Procedia PDF Downloads 205
174 Ethics and Military Defections in Nonviolent Resistance Campaigns
Authors: Adi Levy
Abstract:
Military and security personnel defections during nonviolent resistance (NVR) campaigns are recognized as an effective way of undermining the regime’s power, but they may also generate moral dilemmas that contradict the moral standing of NVR tactics. NVR campaigns have been primarily praised for their adherence to moral and legal norms, yet some NVR tactics raise serious ethical concerns. This paper focuses on NVR tactics that seek to promote defections and disobedience among military and security personnel to sustain a campaign. Academic literature regarding NVR tactics indicates that, compared to violent forms of resistance, defections are more likely to occur when security forces confront nonviolent activists. Indeed, defections play a strategically fundamental role in nonviolent campaigns, particularly against authoritarian regimes, as they enable activists to undermine the regime’s central pillars of support. This study examines the events of the Arab Spring and discusses the ethical problems that arise in nonviolent activists’ promotion of defections and disobedience. The cases of Syria and Egypt suggest that the strategic promotion of defections and disobedience was significantly effective in sustaining the campaign. Yet, while such defections enhance nonviolent activists’ resilience, how they are promoted can be morally contentious, and the consequences can be dire. Defections are encouraged by social, moral, and emotional appeals that use the power disparities between unarmed civilians and powerful regimes to affect soldiers’ and security personnel’s decision-making. In what is commonly referred to as dilemma action, nonviolent activists deliberately entangle security forces in a moral dilemma that compels them to follow a moral code to protect unarmed civilians. In this way, activists sustain their struggle and even gain protection. Nonviolent activists are likely to be completely defeated when confronted with armed forces. 
They therefore rely on the military and security personnel’s moral conscience to convince them to refrain from using force. While this is effective, it also leaves soldiers and security forces exposed to the implications and punishments that might follow their disobedience or defection. As long as they remain nonviolent, activists enjoy civilian immunity despite using morally contentious tactics. The severe implications brought upon defectors, however, demand a deep examination of this tactic’s moral permissibility and a discussion that assesses culpability for the moral implications of its application.
Keywords: culpability, defections, nonviolence, permissibility
Procedia PDF Downloads 116
173 Facilitated Massive Open Online Course (MOOC) Based Teacher Professional Development in Kazakhstan: Connectivism-Oriented Practices
Authors: A. Kalizhanova, T. Shelestova
Abstract:
Teacher professional development (TPD) in Kazakhstan has followed a fairly standard format for decades, with teachers learning new information from a lecturer and being tested using multiple-choice questions. In the online world, self-access courses have become increasingly popular. Due to their extensive multimedia content, peer-reviewed assignments, adaptable class times, and instruction from top university faculty from across the world, massive open online courses (MOOCs) have found a home in Kazakhstan's system for lifelong learning. Recent studies indicate the limited use of connectivism-based tools, such as discussion forums, by Kazakhstani pre-service and in-service English teachers, whose professional interests are limited to obtaining certificates rather than enhancing their teaching abilities and exchanging knowledge with colleagues. This paper highlights the significance of connectivism-based tools and instruments, such as MOOCs, for the continuous professional development of pre- and in-service English teachers, facilitators' roles, and their strategies for enhancing trainees' conceptual knowledge within the MOOCs' curriculum and online learning skills. A code-extraction method was utilized in reviewing the most pertinent papers on Connectivism Theory, facilitators' roles in TPD, and connectivism-based tools such as MOOCs. Three experts, former active participants in a series of projects initiated across Kazakhstan to improve the efficacy of MOOCs, evaluated the excerpts and selected the most appropriate ones to propose a matrix of teacher professional competencies that can be acquired through MOOCs. In this paper, we look at some of the strategies employed by course instructors to boost their students' English skills and knowledge of course material, both inside and outside of the MOOC platform. 
Small-group interaction was used to highlight the outcomes of participants' interactive learning, which contributed to their language and conceptual subject knowledge and prepared them for peer-reviewed assignments in the MOOCs. Both formal and informal continuing education institutions can use the findings of this study to support teachers in gaining experience with MOOCs and creating their own online courses.
Keywords: connectivism-based tools, teacher professional development, massive open online courses, facilitators, Kazakhstani context
Procedia PDF Downloads 79
172 Different Stages for the Creation of Electric Arc Plasma through Slow Rate Current Injection to Single Exploding Wire, by Simulation and Experiment
Authors: Ali Kadivar, Kaveh Niayesh
Abstract:
This work simulates the voltage drop across, and resistance of, exploding copper wires of diameters 25, 40, and 100 µm surrounded by 1 bar nitrogen and exposed to a 150 A current, prior to plasma formation. The absorption of electrical energy in an exploding wire is greatly diminished once the plasma is formed. This study shows the importance of considering radiation and heat conductivity for the accuracy of the circuit simulations. The radiation of the dense plasma formed on the wire surface is modeled with the Net Emission Coefficient (NEC) and is coupled with heat conductivity through PLASIMO® software. A time-transient code for analyzing wire explosions driven by a slow current rise rate is developed. It solves a circuit equation coupled with one-dimensional (1D) equations for the copper electrical conductivity as a function of its physical state and NEC radiation. First, the initial voltage drop over the copper wire, the current, and the temperature distribution at the time of expansion are derived. The experiments have demonstrated that wires remain rather uniform lengthwise during the explosion and can therefore be simulated with 1D models. Data from the first stage are then used as the initial conditions of the second stage, in which a simplified 1D model for high-Mach-number flows is adopted to describe the expansion of the core. The current was carried by the vaporized wire material before it was dispersed in nitrogen by the shock wave. In the third stage, using a three-dimensional model of the test bench, the streamer threshold is estimated. The electrical breakdown voltage is calculated without solving a full-blown plasma model by integrating Townsend growth coefficients (TdGC) along electric field lines. The BOLSIG⁺ and LAPLACE databases are used to calculate the TdGC at different mixture ratios of nitrogen/copper vapor. 
The simulations show that both radiation and heat conductivity should be considered for an adequate description of wire resistance, and that gaseous discharges start at lower voltages than expected due to ultraviolet radiation and the exploding shocks, which may have ionized the nitrogen.
Keywords: exploding wire, Townsend breakdown mechanism, streamer, metal vapor, shock waves
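The breakdown estimate described above, integrating Townsend growth coefficients along a field line until a critical value is reached, can be sketched as follows. The alpha(E) fit and its constants, the uniform-field gap geometry, and the critical value K ≈ 18 are all illustrative assumptions; the study itself uses BOLSIG⁺/LAPLACE data and field lines from a 3D model.

```python
# Hedged sketch of a streamer (Meek-type) criterion: integrate the effective
# Townsend coefficient along a field line and sweep the field until the
# integral reaches a critical value. Constants are nitrogen-like guesses,
# not values from the BOLSIG+ or LAPLACE databases.
import math

def alpha_eff(E, p):
    """Effective Townsend ionization coefficient (1/m) from a crude
    A*p*exp(-B*p/E) fit; A, B are assumed illustrative constants."""
    A = 9.0e5    # 1/(m*bar)
    B = 2.6e7    # V/(m*bar)
    return A * p * math.exp(-B * p / E)

def meek_integral(E_profile, p, dl):
    """Integrate alpha_eff over discretized samples of E along a field line."""
    return sum(alpha_eff(E, p) * dl for E in E_profile)

def onset_field(p, gap, n=100, K_crit=18.0):
    """Sweep a uniform field until the streamer criterion is met."""
    dl = gap / n
    E = 2.0e6
    while E < 1.0e7:
        if meek_integral([E] * n, p, dl) >= K_crit:
            return E
        E += 1.0e5
    return None

E0 = onset_field(p=1.0, gap=0.01)   # 1 bar nitrogen, 1 cm gap
print(f"estimated streamer onset field ~ {E0 / 1e6:.1f} MV/m")
```

In the study the field profile along each line comes from the 3D test-bench model rather than the uniform profile assumed here.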
Procedia PDF Downloads 86
171 Seismic Retrofit of Tall Building Structure with Viscous, Visco-Elastic, Visco-Plastic Damper
Authors: Nicolas Bae, Theodore L. Karavasilis
Abstract:
Increasingly, a large number of new and existing tall buildings are required to improve their resilience against strong winds and earthquakes in order to minimize direct, as well as indirect, damage to society. Disruption of the functions of tall building structures in metropolitan regions can be severely hazardous in socio-economic terms, which further increases the requirement for advanced seismic performance. To meet these progressive requirements, seismic reinforcement of some old, conventional buildings has become enormously costly. The methods of increasing buildings' resilience against wind or earthquake loads have also become more advanced. Up to now, vibration control devices, such as passive damper systems, have been regarded as an effective and easy-to-install option for improving the seismic resilience of buildings at affordable prices. The main purposes of this paper are to examine 1) the optimization of the shape of the visco-plastic brace damper (VPBD) system, a hybrid damper system, so that it maximizes its energy dissipation capacity in tall buildings against wind and earthquake loads, and 2) the verification of the seismic performance of the visco-plastic brace damper system in tall buildings, up to forty-storey steel frame buildings, by comparing the results of Non-Linear Response History Analysis (NLRHA) with and without the damper system. The most significant contribution of this research is to introduce an optimized hybrid damper system that is adequate for high-rise buildings. The efficiency of this visco-plastic brace damper system and the advantages of its use in tall buildings can be verified, since tall buildings tend to be governed by wind load in their normal state and by earthquake load after yielding of the steel plates. The modeling of the prototype tall building is conducted using the OpenSees software. 
Three models were used to verify the performance of the damper (a bare MRF, an MRF with visco-elastic dampers, and an MRF with visco-plastic dampers). A set of 22 seismic records was used, and the scaling procedure followed the FEMA code. It is shown that the MRF with viscous and visco-elastic dampers is markedly more effective in reducing inelastic response quantities, such as roof displacement, maximum story drift, and roof velocity, than the bare MRF.
Keywords: tall steel building, seismic retrofit, viscous, viscoelastic damper, performance based design, resilience based design
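The benefit of added damping that the NLRHA comparison quantifies can be illustrated on a much smaller scale: a linear single-degree-of-freedom system stepped through time with the Newmark average-acceleration method, comparing peak displacement at a low and a high damping ratio. The mass, stiffness, pulse load, and damping ratios below are illustrative assumptions, not values from the study's frame models.

```python
# Sketch: response-history analysis of a linear SDOF with the Newmark
# average-acceleration method; stand-in for the paper's NLRHA of full frames.
import math

def newmark_peak(m, k, zeta, load, dt):
    """Peak displacement of an SDOF (mass m, stiffness k, damping ratio zeta)
    under a sampled load history, via average-acceleration Newmark stepping."""
    c = 2.0 * zeta * math.sqrt(k * m)
    beta, gamma = 0.25, 0.5
    u, v = 0.0, 0.0
    a = (load[0] - c * v - k * u) / m
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
    peak = 0.0
    for p in load[1:]:
        # Effective load from the current state (standard Newmark recurrence)
        p_eff = (p
                 + m * (u / (beta * dt ** 2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
                 + c * (gamma * u / (beta * dt) + (gamma / beta - 1.0) * v
                        + dt * (0.5 * gamma / beta - 1.0) * a))
        u_new = p_eff / k_eff
        a_new = (u_new - u - dt * v - dt ** 2 * (0.5 - beta) * a) / (beta * dt ** 2)
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
        peak = max(peak, abs(u))
    return peak

# Half-sine pulse (0.1 s) followed by free vibration, 2 s total:
dt, t_end, t_pulse = 0.005, 2.0, 0.1
load = [1.0e4 * math.sin(math.pi * i * dt / t_pulse) if i * dt <= t_pulse else 0.0
        for i in range(int(t_end / dt) + 1)]
peaks = {z: newmark_peak(1.0e3, 4.0e5, z, load, dt) for z in (0.02, 0.25)}
for z, pk in peaks.items():
    print(f"zeta={z:.2f}: peak displacement {pk * 1000:.1f} mm")
```

The higher damping ratio yields a visibly smaller peak, which is the qualitative effect the dampers provide in the forty-storey models.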
Procedia PDF Downloads 189
170 Generative Pre-Trained Transformers (GPT-3) and Their Impact on Higher Education
Authors: Sheelagh Heugh, Michael Upton, Kriya Kalidas, Stephen Breen
Abstract:
This article aims to create awareness of the opportunities and issues that the artificial intelligence (AI) tool GPT-3 (Generative Pre-trained Transformer-3) brings to higher education. Technological disruptors have featured in higher education (HE) since Konrad Zuse developed the first functional programmable automatic digital computer. The flurry of technological advances, such as personal computers, smartphones, the world wide web, search engines, and artificial intelligence (AI), has regularly caused disruption and discourse across the educational landscape around harnessing the change for good. Accepting that AI's influence is inevitable, we took a mixed-methods approach through participatory action research and evaluation. Joining HE communities, reviewing the literature, and conducting our own research around Chat GPT-3, we reviewed our institutional approach to changing our current practices and developing policy linked to assessments and the use of Chat GPT-3. We review the impact on HE of GPT-3, a high-powered natural language processing (NLP) system first seen in 2020. Historically, HE has flexed and adapted with each technological advancement, and the latest debates among educationalists focus on the issues around this version of AI, which creates natural human-language text from prompts and can also generate code and images. This paper explores how Chat GPT-3 affects the current educational landscape: we debate current views around plagiarism, research misconduct, and the credibility of assessment, and determine the tool's value in developing skills for the workplace and enhancing critical analysis skills. These questions led us to review our institutional policy and explore the effects on our current assessments and the development of new assessments. Conclusions: after exploring the pros and cons of Chat GPT-3, it is evident that this form of AI cannot be un-invented. Technology needs to be harnessed for positive outcomes in higher education. 
We have observed the materials developed through AI and their potential effects on our development of future assessments and teaching methods. Materials developed through Chat GPT-3 can still aid student learning but lead us to redevelop our institutional policy around plagiarism and academic integrity.
Keywords: artificial intelligence, Chat GPT-3, intellectual property, plagiarism, research misconduct
Procedia PDF Downloads 88
169 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture
Authors: Charbel Aoun, Loic Lagadec
Abstract:
A Sensor Network (SN) can be considered as an operation with two phases: (1) observation/measuring, meaning the accumulation of the gathered data at each sensor node; (2) transferring the collected data to some processing center (e.g., fusion servers) within the SN. An underwater sensor network can therefore be defined as a sensor network deployed underwater that monitors underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between the aforementioned components defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena, and processes. The first step towards the implementation of this concept is defining the environmental constraints and the required tools and components (marine cables, smart sensors, data fusion servers, etc.). The logical and physical components used in these observatories perform critical functions such as the localization of underwater moving objects. These functions can be orchestrated with other services (e.g., military or civilian reaction). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose new constraints to be taken into consideration at design time, and we illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, stakeholders' perspectives, and domain specificity. On the other hand, it helps reduce both the complexity of and the time spent on the design activity, while preventing design modeling errors when porting this activity to the MO domain. 
In conclusion, this work aims to demonstrate that the design activity of complex systems can be improved through the use of MDE technologies and a domain-specific modeling language with the associated tooling. The major improvement is to provide an early validation step, via models and a simulation approach, to consolidate the system design.
Keywords: smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS
Procedia PDF Downloads 176
168 In-Flight Radiometric Performances Analysis of an Airborne Optical Payload
Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yaokai Liu, Xinhong Wang, Yongsheng Zhou
Abstract:
Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of any remote sensing instrument is essential but not sufficient to establish valid in-flight performance. In this study, with the aid of in situ measurements and the corresponding image of a three-gray-scale permanent artificial target, the in-flight radiometric performance analyses (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), and radiometric resolution) of a self-developed short-wave infrared (SWIR) camera are performed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (Li) for the artificial targets are first simulated with in situ measurements (atmosphere parameters and spectral reflectance of the target) and viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line of the form L = G × DN + B is fitted by a minimization regression method, and the fitted coefficients, G and B, are the in-flight calibration coefficients. The high point (LH) and the low point (LL) of the dynamic range can then be described as LH = G × DNH + B and LL = B, respectively, where DNH is equal to 2ⁿ − 1 (n being the quantization number of the payload). Meanwhile, the sensor's response linearity (δ) is described by the correlation coefficient of the regressed line. The results show that the calibration coefficients G and B are 0.0083 W·sr⁻¹m⁻²µm⁻¹ and −3.5 W·sr⁻¹m⁻²µm⁻¹, respectively; the low point of the dynamic range is −3.5 W·sr⁻¹m⁻²µm⁻¹ and the high point is 30.5 W·sr⁻¹m⁻²µm⁻¹; the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor's SNR; the normalized SNR is about 59.6 when the mean radiance is 11.0 W·sr⁻¹m⁻²µm⁻¹, and the radiometric resolution is calculated to be about 0.1845 W·sr⁻¹m⁻²µm⁻¹. 
Moreover, in order to validate the results, a comparison of the measured radiance with radiative-transfer-code predictions over four portable artificial targets with reflectances of 20%, 30%, 40%, and 50%, respectively, is performed. It is noted that the relative error of the calibration is within 6.6%.
Keywords: calibration and validation site, SWIR camera, in-flight radiometric calibration, dynamic range, response linearity
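The fit-and-derive chain described above (fit L = G × DN + B, then take LL = B and LH = G × (2ⁿ − 1) + B) can be sketched in a few lines. The (DN, radiance) pairs below are synthetic, seeded from the reported coefficients for illustration, and n = 12 is an assumed quantization depth; neither is the study's actual data.

```python
# Sketch of the calibration chain: least-squares fit of L = G * DN + B,
# then dynamic range from LL = B and LH = G * (2**n - 1) + B.
# Synthetic inputs; n = 12 is an assumption, not a reported payload spec.

def fit_line(dns, radiances):
    """Ordinary least-squares fit of radiance = G * DN + B."""
    n = len(dns)
    mx = sum(dns) / n
    my = sum(radiances) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(dns, radiances))
    sxx = sum((x - mx) ** 2 for x in dns)
    G = sxy / sxx
    B = my - G * mx
    return G, B

dn = [500.0, 1500.0, 2500.0]            # synthetic digital numbers
L = [0.0083 * x - 3.5 for x in dn]      # synthetic at-sensor radiances

G, B = fit_line(dn, L)
n_bits = 12                              # assumed quantization number
L_low = B                                # LL
L_high = G * (2 ** n_bits - 1) + B       # LH
print(f"G={G:.4f}, B={B:.1f}, dynamic range [{L_low:.1f}, {L_high:.1f}] W sr^-1 m^-2 um^-1")
```

With n = 12, the high point works out to about 30.5 W·sr⁻¹m⁻²µm⁻¹, which matches the value reported in the abstract.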
Procedia PDF Downloads 269
167 Medical Ethics in the Hospital: Towards Quality Ethics Consultation
Authors: Dina Siniora, Jasia Baig
Abstract:
During the past few decades, the healthcare system has undergone profound changes in its decision-making competencies and moral aptitudes due to vast advancements in technology, clinical skills, and scientific knowledge. Healthcare decision-making deals with morally contentious dilemmas, ranging across illness and life-and-death judgments, that require sensitivity and awareness of the patient's preferences while taking into consideration medicine's abilities and boundaries. As the ever-evolving field of medicine becomes more scientifically and morally multifarious, physicians and hospital administrators increasingly rely on ethics committees to resolve problems that arise in everyday patient care. The role and latitude of responsibilities of ethics committees, which include being dispute intermediaries, moral analysts, policy educators, counselors, advocates, and reviewers, suggest the importance and effectiveness of a fully integrated committee. Despite achievements in Integrated Ethics and progress in standards and competencies, there is an imminent necessity for further improvement in the quality of ethics consultation services in the areas of credentialing, professionalism, and standards of quality, as well as in the quality of healthcare throughout the system. These concerns can be addressed first by collecting data about particular quality gaps and assessing the extent to which ethics committees are consistent with newly published ASBH quality standards. Policymakers should pursue improvement strategies that target both the academic bioethics community and the major stakeholders at hospitals who directly influence ethics committees. This broader approach, oriented towards education and intervention outcomes in conjunction with preventive ethics, addresses disparities in quality at a systemic level. 
Adopting tools for improving competencies and processes within ethics consultation, by implementing a credentialing process, upholding the normative significance of the ASBH core competencies, advocating for a professional Code of Ethics, and further clarifying the internal structures, will improve productivity, patient satisfaction, and institutional integrity. This cannot be systemically achieved without a written certification exam for HCEC practitioners, credentialing and privileging of HCEC practitioners at the hospital level, and accreditation of HCEC services at the institutional level.
Keywords: ethics consultation, hospital, medical ethics, quality
Procedia PDF Downloads 187
166 Investigating Effects of Vehicle Speed and Road PSDs on Response of a 35-Ton Heavy Commercial Vehicle (HCV) Using Mathematical Modelling
Authors: Amal G. Kurian
Abstract:
The use of mathematical modeling has seen a considerable boost in recent times with the development of many advanced algorithms and modeling capabilities. The advantages this method has over others are that it stays much closer to standard physics theories and thus represents a better theoretical model, it takes less solving time, and it allows various parameters to be changed for optimization, which is a big advantage, especially in the automotive industry. This thesis work focuses on a thorough investigation of the effects of vehicle speed and road roughness on the ride and structural dynamic responses of a heavy commercial vehicle. Since commercial vehicles are kept in operation continuously for long periods of time, it is important to study the effects of various physical conditions on the vehicle and its user. For this purpose, various experimental as well as simulation methodologies are adopted, ranging from experimental transfer path analysis to various road scenario simulations. To effectively investigate and eliminate several causes of unwanted responses, an efficient and robust technique is needed. Carrying forward this motivation, the present work focuses on the development of a mathematical model of a 4-axle heavy commercial vehicle (HCV) capable of calculating the responses of the vehicle for different road PSD inputs and vehicle speeds. Outputs from the model include response transfer functions and PSDs, as well as the wheel forces experienced. A MATLAB code is developed to implement the objectives in a robust and flexible manner, which can be exploited further in a study of responses due to various suspension parameters, loading conditions, and vehicle dimensions. The thesis work resulted in quantifying the effect of various physical conditions on the ride comfort of the vehicle. Discomfort increases with vehicle speed, and the road profile also has a considerable effect on driver comfort. 
Details of the dominant modes at each frequency are analysed and presented in the work. The reduction in ride height, i.e., the deflection of the tires and suspension with loading, along with the load on each axle, is analysed; it is seen that the front axle supports a greater portion of the vehicle weight, while more of the payload weight is carried by the fourth and third axles. The deflection of the vehicle is seen to be well inside acceptable limits.
Keywords: mathematical modeling, HCV, suspension, ride analysis
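The transfer-function idea behind the model, response PSD = |H(f)|² times the road PSD, with vehicle speed mapping spatial frequency to temporal frequency, can be sketched with a 2-DOF quarter-car standing in for the full 4-axle model. All masses, stiffnesses, and damping values below are illustrative assumptions, not the thesis's vehicle parameters.

```python
# Sketch: sprung-mass acceleration transfer gain of a 2-DOF quarter-car,
# a stand-in for the 4-axle HCV model. Parameter values are illustrative.
import math

def sprung_accel_gain(f, ms=4000.0, mu=500.0, ks=4.0e5, cs=2.0e4, kt=2.0e6):
    """|sprung-mass acceleration / road displacement| at frequency f (Hz).

    ms, mu : sprung / unsprung mass (kg)
    ks, cs : suspension stiffness (N/m) and damping (N*s/m)
    kt     : tire stiffness (N/m)
    """
    s = 1j * 2.0 * math.pi * f
    # Frequency-domain equations of motion:
    #   (ms*s^2 + cs*s + ks)*Xs - (cs*s + ks)*Xu = 0
    #  -(cs*s + ks)*Xs + (mu*s^2 + cs*s + ks + kt)*Xu = kt*Xr
    a11 = ms * s ** 2 + cs * s + ks
    a12 = -(cs * s + ks)
    a22 = mu * s ** 2 + cs * s + ks + kt
    det = a11 * a22 - a12 * a12
    xs_over_xr = -a12 * kt / det      # Cramer's rule for Xs / Xr
    return abs(s ** 2 * xs_over_xr)   # acceleration transfer gain

# Response PSD at speed v is |H(f)|^2 times the road displacement PSD, with
# spatial frequency n mapped to temporal frequency f = n * v.
for f in (0.5, 1.5, 5.0, 12.0):
    print(f"{f:5.1f} Hz -> accel gain {sprung_accel_gain(f):10.1f} 1/s^2")
```

The gain peaks near the body-bounce frequency, which is where speed-dependent road excitation most strongly affects ride comfort.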
Procedia PDF Downloads 255
165 Fabric-Reinforced Cementitious Matrix (FRCM)-Repaired Corroded Reinforced Concrete (RC) Beams under Monotonic and Fatigue Loads
Authors: Mohammed Elghazy, Ahmed El Refai, Usama Ebead, Antonio Nanni
Abstract:
Rehabilitating corrosion-damaged reinforced concrete (RC) structures has been accomplished using various techniques such as steel plating, external post-tensioning, and external bonding of fiber-reinforced polymer (FRP) composites. This paper reports on the use of an innovative technique to strengthen corrosion-damaged RC structures using fabric-reinforced cementitious matrix (FRCM) composites. FRCM consists of a dry-fiber fabric embedded in a cement-based matrix. Twelve large-scale RC beams were constructed and tested in flexure under monotonic and fatigue loads. Prior to testing, ten specimens were subjected to an accelerated corrosion process for 140 days, leading to an average mass loss in the tensile steel bars of 18.8%. Corrosion was restricted to the main reinforcement located in the middle third of the beam span. Eight corroded specimens were repaired and strengthened, while two virgin and two corroded-unrepaired/unstrengthened beams were used as benchmarks for comparison purposes. The test parameters included the FRCM material (carbon-FRCM, PBO-FRCM), the number of FRCM plies, the strengthening scheme, and the type of loading (monotonic and fatigue). The effects of the previous parameters on the flexural response, the mode of failure, and the fatigue life are reported. Test results showed that corrosion reduced the yield and ultimate strength of the beams. The corroded-unrepaired specimen failed to meet the provisions of the ACI 318 code for crack-width criteria. The use of FRCM significantly increased the ultimate strength of the corroded specimens, by 21% and 65% relative to the corroded-unrepaired specimen. Corrosion significantly decreased the fatigue life of the corroded-unrepaired beam, by 77% relative to that of the virgin beam. The fatigue life of the FRCM-repaired corroded beams increased to 1.5 to 3.8 times that of the corroded-unrepaired beam but remained lower than that of the virgin specimen. 
The specimens repaired with U-wrapped PBO-FRCM strips showed longer fatigue lives than those repaired with the end-anchored bottom strips having a similar number of PBO-FRCM layers. PBO-FRCM was more effective than carbon-FRCM in restoring the fatigue life of the corroded specimens.
Keywords: corrosion, concrete, fabric-reinforced cementitious matrix (FRCM), fatigue, flexure, repair
Procedia PDF Downloads 295
164 The Use of Random Set Method in Reliability Analysis of Deep Excavations
Authors: Arefeh Arabaninezhad, Ali Fakher
Abstract:
Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods have been suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables that depend on ground behavior are required. The Random Set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the Random Set method, smooth bounds on system responses are obtained with a relatively small number of simulations compared to fully probabilistic methods; the random set approach has therefore been proposed for reliability analysis in geotechnical problems. In the present study, the application of the random set method to the reliability analysis of deep excavations is investigated through three deep excavation projects that were monitored during the excavation process. A finite element code is utilized for the numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables and subsequently reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables. The relevant probability share of each finite element calculation is determined by considering the probabilities assigned to the input variables present in these combinations. The horizontal displacement of the top point of the excavation is considered the main response of the system. The result of the reliability analysis for each deep excavation is presented by constructing the Belief and Plausibility distribution functions (i.e., lower and upper bounds) of the system response obtained from the deterministic finite element calculations. 
To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models is compared to the in situ measurements, and good agreement is observed. The comparison also shows that the Random Set Finite Element Method is applicable for estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.
Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty
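The random-set workflow described above, two focal intervals per input variable with basic probability assignments, model runs at all bound combinations, and belief/plausibility bounds assembled from the results, can be sketched as follows. The "model" here is a toy closed-form stand-in for the finite element calculation, and both variables and all numbers are illustrative assumptions.

```python
# Sketch of a random-set reliability loop. toy_model stands in for the FE
# calculation; focal intervals and probability assignments are illustrative.
from itertools import product

# Two sources of information per input variable: (interval, probability mass)
focal_sets = {
    "E_soil": [((20e6, 40e6), 0.5), ((30e6, 50e6), 0.5)],   # soil stiffness, Pa
    "phi":    [((28.0, 34.0), 0.6), ((30.0, 36.0), 0.4)],   # friction angle, deg
}

def toy_model(E_soil, phi):
    """Stand-in response: wall-top horizontal displacement (mm),
    monotonically decreasing in both inputs."""
    return 5e9 / E_soil + (40.0 - phi)

bounds = []   # (response_low, response_high, joint probability mass)
for (iv_E, p_E), (iv_phi, p_phi) in product(*focal_sets.values()):
    # Evaluate the model at every vertex of the focal box (bound combinations)
    vals = [toy_model(E, phi) for E in iv_E for phi in iv_phi]
    bounds.append((min(vals), max(vals), p_E * p_phi))

assert abs(sum(p for _, _, p in bounds) - 1.0) < 1e-9  # masses sum to one
low = min(lo for lo, _, _ in bounds)    # support of the plausibility bound
high = max(hi for _, hi, _ in bounds)   # support of the belief bound
print(f"displacement bounds: [{low:.1f}, {high:.1f}] mm over {len(bounds)} runs")
```

Sorting the interval bounds by response and accumulating the probability masses would give the Belief and Plausibility distribution functions the abstract refers to; comparing a threshold displacement against them yields the failure-probability bounds.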
Procedia PDF Downloads 267
163 Entropy in a Field of Emergence in an Aspect of Linguo-Culture
Authors: Nurvadi Albekov
Abstract:
A communicative situation is the basis that designates the potential models of ‘constructed forms’, the motivated basis of a text, for a text can be assumed to be a product of the communicative situation. It is within the field of emergence that the models of a text which can potentially be prognosticated in a certain communicative situation are designated. Every text can be assumed to be a conceptual system structured on the basis of a certain communicative situation. However, in the process of ‘structuring’ a certain model of a ‘conceptual system’, the consciousness of a recipient is able to act only within the borders of the field of emergence, for going beyond these borders indicates a misunderstanding of the communicative situation. On the basis of the communicative situation, we can witness the increment of meaning, where the synergizing of the informative model of communication, formed by using the invariant units of a language system, is a result of the verbalization of the communicative situation. The potential of the models of a text prognosticated within the field of emergence also depends on the communicative situation. The concept ‘field of emergence’ is interpreted as a unit of the language system with a poly-directed universal structure, implying the presence of a core, a center, and a periphery, and including different levels of the means of a functioning language system, both in terms of linguistic resources and in terms of extra-linguistic factors, whose interaction results in the increment of a text. The concept ‘field of emergence’ is considered the most promising in the analysis of texts: oral, written, printed, and electronic. As a unit of the language system, the field of emergence has several properties that predict its use in the study of a text at different levels. This work attempts an analysis of entropy in a text in the aspect of the lingua-cultural code, prognosticated within the model of the field of emergence.
The article describes the problem of entropy in the field of emergence caused by the influence of extra-linguistic factors. The increase in entropy is caused not only by the intrusion of foreign language resources but also by the influence of an alien culture as a whole and by the appearance of symbols non-typical for the given culture in the field of emergence. The borrowing of alien lingua-cultural symbols into the lingua-culture of the author increases the entropy when constructing a text, both at the semantic and at the structural level. It is nothing but an artificial formatting of lexical units that violates the stylistic unity of a phrase. It is noted that one of the important characteristics decreasing the entropy in the field of emergence is a typological similarity of the lexical and semantic resources of different lingua-cultures in aspects of extra-linguistic factors.
Keywords: communicative situation, field of emergence, lingua-culture, entropy
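A formal analogue of the claim that borrowed symbols raise entropy can be sketched with Shannon entropy over a character distribution. This is only a toy information-theoretic illustration, not the authors' linguo-cultural notion itself; the strings are invented examples.

```python
from collections import Counter
from math import log2

def shannon_entropy(text):
    """Shannon entropy (bits per symbol) of the character distribution."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * log2(c / n) for c in counts.values())

h_regular = shannon_entropy("abab")    # two symbols, equally likely
h_borrowed = shannon_entropy("ababç")  # an alien symbol enters the sequence
print(h_regular, h_borrowed)
```

Under this toy measure, the borrowed symbol raises the entropy from 1.0 to about 1.52 bits per symbol, mirroring the qualitative claim that alien lingua-cultural symbols increase entropy in the field of emergence.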
162 Unlocking Health Insights: Studying Data for Better Care
Authors: Valentina Marutyan
Abstract:
Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding of and approach to providing healthcare. Healthcare data mining is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. This field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories, using advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest in healthcare data mining. It enables doctors to intervene early to prevent problems or improve outcomes for patients. It also assists in early disease detection and customized treatment planning for every person. Doctors can customize a patient's care by looking at their medical history, genetic profile, and current and previous therapies. In this way, treatments can be more effective and have fewer negative consequences. Beyond helping patients, it improves the efficiency of hospitals, for example by helping them determine the number of beds or doctors they require given the number of patients they expect. This project uses models such as logistic regression, random forests, and neural networks for predicting diseases and analyzing medical images. Clustering algorithms such as k-means grouped patients, and association rule mining identified connections between treatments and patient responses. Time series techniques supported resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment.
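The association rule mining mentioned above boils down to computing support and confidence over patient records. A minimal sketch follows; the record sets and item names (drug_A, improved, etc.) are invented placeholders, not data from the project.

```python
# Toy transactions: each set holds the treatments and responses recorded
# for one patient (illustrative data only).
records = [
    {"drug_A", "improved"},
    {"drug_A", "improved"},
    {"drug_A", "drug_B", "improved"},
    {"drug_B", "no_change"},
    {"drug_B", "improved"},
    {"drug_A", "no_change"},
]

def support(itemset, records):
    """Fraction of records containing every item in the itemset."""
    itemset = set(itemset)
    return sum(itemset <= r for r in records) / len(records)

def confidence(antecedent, consequent, records):
    """Support of the combined itemset over support of the antecedent:
    an estimate of P(consequent | antecedent)."""
    joint = support(set(antecedent) | set(consequent), records)
    return joint / support(antecedent, records)

conf = confidence({"drug_A"}, {"improved"}, records)
print(f"drug_A -> improved: confidence {conf:.2f}")
```

Here drug_A appears in 4 of 6 records and co-occurs with "improved" in 3 of them, so the rule drug_A -> improved has confidence 0.75; in practice an algorithm such as Apriori enumerates only itemsets above a minimum support before scoring rules this way.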
Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. In conclusion, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more effectively, and work more efficiently. It ultimately comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
Keywords: data mining, healthcare, big data, large amounts of data
161 Albinism in the South African Workplace: Reasonable Accommodation of a Black Person Living in a White Skin
Authors: Laetitia Fourie
Abstract:
Dangerous myths and stereotypes contribute to the fact that persons living with albinism are amongst the most vulnerable groups in society. The prevalence of albinism varies around the world, and the World Health Organization estimates that around 1 in 5000 people in Sub-Saharan Africa are affected by this genetic disorder. Persons living with the condition usually experience a lack of melanin in their skin, eyes, and hair, which results in possible physical impairments such as poor eyesight and skin cancers. Being affected by such a disorder and consequently classified as an albino gives way to unequal treatment, which ultimately requires safeguarding these persons against unfair discrimination - not only on the basis of their race and color (or lack thereof) but also on the basis of their disability. The Constitution of the Republic of South Africa provides that everyone is equal before the law and prohibits unfair discrimination on the grounds of race, color, and disability. This right is given effect to by the Employment Equity Act, which strives to eliminate unfair discrimination on similar grounds within any employment policy or practice. An essential non-discrimination measure that can be implemented in the labor market to achieve equality is the duty of reasonable accommodation that rests upon employers. However, reasonable accommodation is only introduced as an affirmative action measure in order to provide equal employment opportunities to the identified designated groups, which include black people (defined to include Indians, Chinese, and Colored people), women, and people with disabilities. Even though this duty exists, South African law does not elaborate on the scope of the duty, except for a Disability Code, which does not hold the force of law. Furthermore, in respect of applying affirmative action measures to people with disabilities, the law does not elaborate on the meaning of disability.
Considering that persons living with albinism will find it difficult to show that they are black or disabled in order to be acknowledged as part of the designated groups, their access to reasonable accommodation will be limited to a great extent. This paper aims to illustrate to what extent South African law currently fails to implement its international obligations as a State Party to the conventions of the United Nations, and how these failures should be corrected in order to serve the needs of all South Africans, including persons with albinism.
Keywords: albinism, disability, equality, South Africa, United Nations
160 Reading against the Grain: Transcodifying Stimulus Meaning
Authors: Aba-Carina Pârlog
Abstract:
In translation, reading against the grain produces a wrong effect in the TL. Quine’s ocular irradiation plays an important part in the process of understanding and translating a text. The various types of textual radiation must be rendered by the translator by paying close attention to the types of field that produce them. The literary work must be seen as an indirect cause of an expressive effect in the TL that is supposed to be similar to the effect it has in the SL. If the adaptive transformative codes are so flexible that they encourage the translator to repeatedly leave out parts of the original work, then a subversive pattern emerges which changes the entire book. In this case, the translator is a writer per se who decides what goes in and out of the book, how the style is to be ciphered, and what elements of ideology are to be highlighted. Figurative language must not be flattened for the sake of clarity or naturalness. Missing figurative elements make the translated text less interesting, less challenging, and less vivid, which reflects poorly on the writer. There is a close connection between style and the writer’s person. If the writer’s style is changed too much in a translation, the translation is useless, as the original writer and his / her imaginative world can no longer be discovered. A different writer then appears, and his / her creation surfaces. Changing meaning, considered a “negative shift” in translation, defines one of the faulty transformative codes used by some translators. It is a dangerous tool which leads to adaptations that sometimes reflect the original less than the reader would wish. It contradicts the very essence of the process of translation, which is that of making a work available in a foreign language. Employing speculative aesthetics at the level of a text indicates the wish to create manipulative or subversive effects in the translated work.
This is generally achieved by adding new words or connotations, creating new figures of speech, or using explicitations. The irradiation patterns of the original work are neglected, and the translator creates new meanings, implications, emphases, and contexts. Again, s/he turns into a new author who enjoys the freedom of expressing his / her ideas without the constraints of the original text. The stimulus meaning of a text is very important for a translator, which is why reading against the grain is inadvisable during the process of translation. By paying attention to the waves of the SL input, a faithful literary work is produced which does not contradict general knowledge about foreign cultures and civilizations. Following common sense is essential in the field of translation, as it is everywhere else.
Keywords: stimulus meaning, substance of expression, transformative code, translation
159 Assessing the Feasibility of Italian Hydrogen Targets with the Open-Source Energy System Optimization Model TEMOA - Italy
Authors: Alessandro Balbo, Gianvito Colucci, Matteo Nicoli, Laura Savoldi
Abstract:
Hydrogen is expected to become a game changer in the energy transition, especially by enabling sector coupling and the decarbonization of hard-to-abate end-uses. The Italian National Recovery and Resilience Plan identifies hydrogen as one of the key elements of the ecological transition needed to meet international decarbonization objectives, and includes it in several pilot projects for its early development in Italy. This matches the European energy strategy, which aims to make hydrogen a leading energy carrier of the future and sets ambitious goals to be accomplished by 2030. The huge efforts needed to achieve the announced targets require a careful investigation of their feasibility in terms of economic expenditures and technical aspects. In order to quantitatively assess the hydrogen potential within the Italian context and the feasibility of the planned investments and projects, this work uses the TEMOA-Italy energy system model to study pathways to meet the strict objectives cited above. The possible development of hydrogen is studied on both the supply side and the demand side of the energy system, also including storage options and distribution chains. The assessment comprises alternative hydrogen production technologies competing in a market, reflecting the several possible investments outlined in the Italian National Recovery and Resilience Plan to boost the development and spread of this infrastructure, including the sector coupling potential with natural gas through the currently existing infrastructure and CO2 capture for the production of synfuels. On the other hand, the hydrogen end-use phase covers a wide range of consumption alternatives, from fuel-cell vehicles, for which both road and non-road transport categories are considered, to uses in the steel and chemical industries and cogeneration for residential and commercial buildings.
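The competition among hydrogen production technologies that the optimization model resolves can be sketched, in greatly simplified form, as a merit-order dispatch: fill demand from the cheapest technology upward. TEMOA actually solves a full linear program with constraints across regions and years; the greedy loop below only conveys the intuition, and the technology names, costs, and capacities are hypothetical placeholders, not TEMOA-Italy data.

```python
# Hypothetical hydrogen supply options competing on levelized cost,
# with capacity limits (cost and capacity units are illustrative).
technologies = [
    {"name": "alkaline_electrolysis", "cost": 4.5, "capacity": 120.0},
    {"name": "PEM_electrolysis",      "cost": 5.0, "capacity": 80.0},
    {"name": "SMR_with_CCS",          "cost": 2.0, "capacity": 200.0},
]

def least_cost_mix(technologies, demand):
    """Greedy merit-order sketch: serve demand from the cheapest
    technology upward until it is fully covered."""
    mix, remaining = {}, demand
    for tech in sorted(technologies, key=lambda t: t["cost"]):
        supplied = min(tech["capacity"], remaining)
        if supplied > 0:
            mix[tech["name"]] = supplied
        remaining -= supplied
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return mix

mix = least_cost_mix(technologies, demand=250.0)
print(mix)
```

With these placeholder numbers, the cheapest option runs at full capacity and the next-cheapest covers the residual demand; in the real model, policy constraints, emission limits, and inter-temporal investment decisions can displace this simple cost ordering.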
The model includes both high- and low-TRL technologies in order to provide an outcome as consistent for the future decades as it is for the present day, and since it is developed as an open-source code and database, transparency and accessibility are fully granted.
Keywords: decarbonization, energy system optimization models, hydrogen, open-source modeling, TEMOA