Search results for: visual media and computer network etc
309 Instant Data-Driven Robotics Fabrication of Light-Transmitting Ceramics: A Responsive Computational Modeling Workflow
Authors: Shunyi Yang, Jingjing Yan, Siyu Dong, Xiangguo Cui
Abstract:
Current architectural façade design practices incorporate various daylighting and solar radiation analysis methods. These emphasize the impact of geometry on façade design. There is scope to extend this knowledge into methods that address material translucency, porosity, and form. Such approaches can also achieve these conditions through adaptive robotic manufacturing that exploits material dynamics within the design, alleviates fabrication waste from molds, and ultimately accelerates the autonomous manufacturing system. Besides analyzing environmental solar radiation in building façade design, there is also a vacant research area concerning how lighting effects can be precisely controlled by engaging instant real-time data-driven robot control and manipulating material properties. Ceramics carries a wide range of transmittance and deformation potentials for robotic control once its material properties are studied. This paper presents a semi-autonomous system that engages with real-time data-driven robotics control, hardware kit design, environmental building studies, human interaction, and exploratory research and experiments. Our objectives are to investigate the relationship between the physio-material properties of different clay bodies or ceramics and their transmittance; to explore a feedback system of instant lighting data in robotic fabrication to achieve precise lighting effects; and to design a suitable end effector and robot behaviors for different stages of deformation. We experiment with architectural clay, as a façade material that is potentially translucent at a certain stage and can respond to light. Studying the relationship between form, material properties, and porosity can help create different interior and exterior light effects and provide façade solutions for specific architectural functions. The key idea is to maximize the utilization of in-progress robotics fabrication and ceramics materiality to create a highly integrated autonomous system for lighting façade design and manufacture.
Keywords: light transmittance, data-driven fabrication, computational design, computer vision, gamification for manufacturing
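The feedback idea above (robot behavior driven by instant lighting data) can be illustrated with a toy closed loop. The sketch below is a minimal illustration, not the authors' system: it assumes a proportional controller and an invented material model in which thinning the clay wall raises transmittance; all names, gains, and the model itself are hypothetical.

```python
# A minimal sketch of an instant-lighting-data feedback loop (all values invented).
def transmittance(thickness_mm: float) -> float:
    """Toy material model: thinner walls transmit more light (clamped to [0, 1])."""
    return max(0.0, min(1.0, 1.0 - thickness_mm / 10.0))

def control_to_target(target: float, thickness_mm: float = 8.0,
                      gain: float = 4.0, tol: float = 0.01) -> float:
    """Proportional feedback: keep deforming until the sensed value matches."""
    for _ in range(100):
        sensed = transmittance(thickness_mm)   # stand-in for the light sensor
        error = target - sensed
        if abs(error) < tol:
            break
        thickness_mm -= gain * error           # stand-in for a robot deformation command
    return thickness_mm

print(f"Final wall thickness: {control_to_target(0.5):.2f} mm")
```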
Procedia PDF Downloads 123
308 Seroepidemiological Study of Toxoplasma gondii Infection in Women of Child-Bearing Age in Communities in Osun State, Nigeria
Authors: Olarinde Olaniran, Oluyomi A. Sowemimo
Abstract:
Toxoplasmosis is frequently misdiagnosed or underdiagnosed, and it is the third most common cause of hospitalization due to food-borne infection. Intra-uterine infection with Toxoplasma gondii due to active parasitaemia during pregnancy can cause severe and often fatal cerebral damage, abortion, and stillbirth of a fetus. The aim of the study was to investigate the prevalence of T. gondii infection in women of childbearing age in selected communities of Osun State, with a view to determining the risk factors which predispose to T. gondii infection. Five (5) ml of blood was collected by venipuncture into a plain blood collection tube by a medical laboratory scientist. Serum samples were separated by centrifuging the blood samples at 3000 rpm for 5 mins. The sera were collected in Eppendorf tubes and stored at -20°C until analysis for the presence of IgG and IgM antibodies against T. gondii, using a commercially available enzyme-linked immunosorbent assay (ELISA) kit (Demeditec Diagnostics GmbH, Germany) according to the manufacturer's instructions. The optical densities of wells were measured by a photometer at a wavelength of 450 nm. Data collected were analysed using appropriate computer software. The overall seroprevalence of T. gondii among women of child-bearing age in the seven selected communities in Osun State was 76.3%. Overall, 70.0% were positive for anti-T. gondii IgG, 32.3% for IgM, and 26.7% for both IgG and IgM. The prevalence of T. gondii was lowest (58.9%) among women from Ile-Ife, a peri-urban community, and highest (100%) in women residing in Alajue, a rural community. The prevalence of infection was significantly higher (p = 0.000) among Islamic women (87.5%) than in Christian women (70.8%). The highest prevalence (86.3%) was recorded in women with primary education, while the lowest (61.2%) was recorded in women with tertiary education (p = 0.016). The highest prevalence (79.7%) was recorded in women residing in rural areas, and the lowest (70.1%) in women residing in peri-urban areas (p = 0.025). The prevalence of T. gondii infection was highest (81.4%) in women with one miscarriage, while the prevalence was lowest in women with no miscarriages (75.9%). The age of the women (p = 0.042), Islamic religion (p = 0.001), the residence of the women (p = 0.001), and water source were all positively associated with T. gondii infection. The study concluded that there was a high seroprevalence of T. gondii among women of child-bearing age in the study area. Hence, there is a need for health education and awareness of the disease and its transmission among women of reproductive age in general, and pregnant women in particular, to reduce the risk of T. gondii infection in pregnancy.
Keywords: seroepidemiology, Toxoplasma gondii, women, child-bearing age, communities, Ile-Ife, Nigeria
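For illustration of the kind of statistical analysis summarized above ("analysed using appropriate computer software"), the sketch below computes an overall seroprevalence and a chi-square test of association between residence and seropositivity. The counts are invented stand-ins, not the study's data.

```python
from scipy.stats import chi2_contingency

tested, positive = 300, 229            # hypothetical totals
print(f"Seroprevalence: {100 * positive / tested:.1f}%")

# rows: rural, peri-urban; columns: positive, negative (hypothetical counts)
table = [[110, 28],
         [119, 43]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```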
Procedia PDF Downloads 177
307 Microsimulation of Potential Crashes as a Road Safety Indicator
Authors: Vittorio Astarita, Giuseppe Guido, Vincenzo Pasquale Giofre, Alessandro Vitale
Abstract:
Traffic microsimulation has been used extensively to evaluate the consequences of different traffic planning and control policies in terms of travel time delays, queues, pollutant emissions, and other commonly measured performance indicators. Traffic safety, however, has not been considered in common traffic microsimulation packages as a measure of performance for different traffic scenarios. Vehicle conflict techniques, introduced at intersections in the early traffic research carried out at the General Motors laboratory in the USA and in the Swedish traffic conflict manual, have been applied to vehicle trajectories simulated in microscopic traffic simulators. The concept is that microsimulation can be used as a base for calculating the number of conflicts that define the safety level of a traffic scenario. This allows engineers to identify unsafe road traffic maneuvers and helps in finding the right countermeasures to improve safety. Unfortunately, most commonly used indicators do not consider conflicts between single vehicles and roadside obstacles and barriers, even though a great number of vehicle crashes take place with roadside objects or obstacles; only some recently proposed indicators have tried to address this issue. This paper introduces a new procedure based on the simulation of potential crash events for the evaluation of safety levels in microsimulation traffic scenarios, which also takes into account potential crashes with roadside objects and barriers. The procedure can be used to define new conflict indicators. The proposed simulation procedure generates, through random perturbation of vehicle trajectories, a set of potential crashes which can be evaluated accurately in terms of DeltaV, the energy of the impact, and/or the expected number of injuries or casualties. The procedure can also be applied to real trajectories, giving rise to new surrogate safety performance indicators which can be considered "simulation-based". The methodology and a specific safety performance indicator are described and applied to a simulated test traffic scenario. Results indicate that the procedure is able to evaluate safety levels both at the intersection level and in the presence of roadside obstacles. The procedure produces results that are expressed in the same unit of measure for both vehicle-to-vehicle and vehicle-to-roadside-object conflicts. The total energy per square meter of all generated crashes can be computed and shown on a map of the test network after applying a threshold to highlight the most dangerous points. Without any detailed calibration of the microsimulation model and without any calibration of the parameters of the procedure (standard values were used), it is possible to identify dangerous points. A preliminary sensitivity analysis has shown that results do not depend strongly on the energy thresholds or the other parameters of the procedure. This paper introduces this new procedure and its implementation in the form of a software package that is able to assess road safety while also considering potential conflicts with roadside objects. Some of the principles at the base of this specific model are discussed. The procedure can be applied in common microsimulation packages once vehicle trajectories and the positions of roadside barriers and obstacles are known. The procedure has many calibration parameters, and research efforts will have to be devoted to comparisons with real crash data in order to obtain the parameter values that give an accurate evaluation of the risk of any traffic scenario.
Keywords: road safety, traffic, traffic safety, traffic simulation
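A minimal sketch of the trajectory-perturbation idea described above, assuming point-mass vehicles, a perfectly plastic collision model for DeltaV, and invented trajectories, masses, and thresholds (none of these values come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def delta_v(m1, v1, m2, v2):
    """DeltaV of vehicle 1 in a perfectly plastic (common-velocity) collision."""
    v_common = (m1 * np.asarray(v1) + m2 * np.asarray(v2)) / (m1 + m2)
    return float(np.linalg.norm(v_common - v1))

# Invented straight-line trajectories sampled at dt = 0.1 s (x, y in meters).
dt = 0.1
t = np.arange(0, 5, dt)
traj_a = np.c_[10.0 * t, np.zeros_like(t)]              # vehicle A, 10 m/s east
traj_b = np.c_[50.0 - 8.0 * t, 3.0 + np.zeros_like(t)]  # vehicle B, opposing

v_a, v_b = np.array([10.0, 0.0]), np.array([-8.0, 0.0])
m_a, m_b = 1500.0, 1200.0                               # invented masses (kg)

potential = []
for _ in range(1000):
    # Random perturbation of the trajectories generates potential crash events.
    pa = traj_a + rng.normal(0.0, 1.0, traj_a.shape)
    pb = traj_b + rng.normal(0.0, 1.0, traj_b.shape)
    if np.linalg.norm(pa - pb, axis=1).min() < 2.0:     # ~ vehicle width
        potential.append(delta_v(m_a, v_a, m_b, v_b))

print(f"{len(potential)} potential crashes out of 1000 perturbed runs")
if potential:
    print(f"mean DeltaV = {np.mean(potential):.1f} m/s")
```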
Procedia PDF Downloads 135
306 Implementing Online Blogging in Specific Context Using Process-Genre Writing Approach in Saudi EFL Writing Class to Improve Writing Learning and Teaching Quality
Authors: Sultan Samah A. Alenezi
Abstract:
Many EFL teachers are eager to find the best way to meet the needs of their students in EFL writing courses. Numerous studies suggest that online blogging may present a social interaction opportunity for EFL writing students. Additionally, it can foster peer collaboration and social support in the form of scaffolding, which, viewed from the perspective of socio-cultural theory, can boost social support and foster the development of students' writing abilities. This idea is based on Vygotsky's theories, which emphasize how collaboration and social interaction facilitate effective learning. In Saudi Arabia, students are taught to write using conventional methods that are totally under the teacher's control; without any peer contact or cooperation, students are spoon-fed in a passive environment. This study incorporated the cognitive processes of the process-genre approach into the EFL writing classroom to facilitate the use of online blogging in EFL writing education. Thirty second-year undergraduate students from the Department of Languages and Translation at a Saudi college participated in this study, which employed an action research project blending qualitative and quantitative methodologies to understand Saudi students' perceptions of and experiences with online blogging in an EFL process-genre writing classroom. It also looked at the advantages and challenges the students faced when blogging. Data sources included a survey, interviews, and blog posts made by students. The intervention's outcomes showed that merging process-genre procedures with blogging was a successful tactic, and the Saudi students' perceptions of this method of online blogging for EFL writing were quite positive. The socio-cultural theory constructs that Vygotsky advocates, such as scaffolding, collaboration, and social interaction, were also improved by blogging. These elements demonstrated the improvement in the students' writing, reading, social, and collaborative thinking skills, as well as their positive attitudes toward English-language writing. However, the students encountered a variety of problems that made blogging difficult for them, ranging from technological ones, such as sluggish internet connections, to learner shortcomings, such as a lack of computer know-how and ineffective time management.
Keywords: blogging, process-genre approach, Saudi learners, writing quality
Procedia PDF Downloads 120
305 Generative Pre-Trained Transformers (GPT-3) and Their Impact on Higher Education
Authors: Sheelagh Heugh, Michael Upton, Kriya Kalidas, Stephen Breen
Abstract:
This article aims to create awareness of the opportunities and issues that the artificial intelligence (AI) tool GPT-3 (Generative Pre-trained Transformer-3) brings to higher education. Technological disruptors have featured in higher education (HE) since Konrad Zuse developed the first functional programmable automatic digital computer. The flurry of technological advances, such as personal computers, smartphones, the world wide web, search engines, and artificial intelligence (AI), has regularly caused disruption and discourse across the educational landscape around harnessing the change for the good. Accepting that AI influences are inevitable, we took a mixed-methods approach through participatory action research and evaluation. Joining HE communities, reviewing the literature, and conducting our own research around Chat GPT-3, we reviewed our institutional approach to changing our current practices and developing policy linked to assessments and the use of Chat GPT-3. We review the impact on HE of GPT-3, a high-powered natural language processing (NLP) system first seen in 2020. Historically, HE has flexed and adapted with each technological advancement, and the latest debates for educationalists focus on the issues around this version of AI, which creates natural human language text from prompts and other inputs and can generate code and images. This paper explores how Chat GPT-3 affects the current educational landscape: we debate current views around plagiarism, research misconduct, and the credibility of assessment, and determine the tool's value in developing skills for the workplace and enhancing critical analysis skills. These questions led us to review our institutional policy and explore the effects on our current assessments and the development of new assessments. Conclusions: After exploring the pros and cons of Chat GPT-3, it is evident that this form of AI cannot be un-invented, and technology needs to be harnessed for positive outcomes in higher education. We have observed materials developed through AI and their potential effects on our development of future assessments and teaching methods. Materials developed through Chat GPT-3 can still aid student learning but lead us to redevelop our institutional policy around plagiarism and academic integrity.
Keywords: artificial intelligence, Chat GPT-3, intellectual property, plagiarism, research misconduct
Procedia PDF Downloads 89
304 Proposal of a Rectenna Built by Using Paper as a Dielectric Substrate for Electromagnetic Energy Harvesting
Authors: Ursula D. C. Resende, Yan G. Santos, Lucas M. de O. Andrade
Abstract:
The recent and fast development of the internet, wireless and telecommunication technologies, and low-power electronic devices has led to a significant amount of electromagnetic energy being available in the environment and to the expansion of smart applications technology. These applications have been used in Internet of Things devices and 4G and 5G solutions, whose main feature is the use of wireless sensors. Although these sensors are low-power loads, their use imposes huge challenges in terms of an efficient and reliable power supply that avoids the traditional battery. Radio frequency based energy harvesting technology is especially suitable for powering wireless sensors by using a rectenna, since it can be completely integrated into the distributed hosting sensor structure, reducing its cost, maintenance and environmental impact. The rectenna is an equipment composed of an antenna and a rectifier circuit. The antenna's function is to collect as much radio frequency radiation as possible and transfer it to the rectifier, a nonlinear circuit that converts the very low input radio frequency energy into a direct current voltage. In this work, a set of rectennas mounted on a paper substrate, which can be used for the inner coating of buildings while simultaneously harvesting electromagnetic energy from the environment, is proposed. Each proposed individual rectenna is composed of a 2.45 GHz patch antenna and a voltage doubler rectifier circuit, built on the same paper substrate. The antenna contains a rectangular radiator element and a microstrip transmission line that was designed and optimized using CST simulation software in order to obtain values of the S11 parameter below -10 dB at 2.45 GHz. In order to increase the amount of harvested power, eight individual rectennas incorporating metamaterial cells were connected in parallel, forming a system denominated the Electromagnetic Wall (EW). In order to evaluate the EW performance, it was positioned at a variable distance from an internet router, and a 27 kΩ resistive load was fed. The results obtained showed that if more than one rectenna is associated in parallel, a power level sufficient to feed very low-consumption sensors can be achieved. The 0.12 m² EW proposed in this work was able to harvest 0.6 mW from the environment. It was also observed that the use of metamaterial structures provides a significant growth in the amount of electromagnetic energy harvested, which was increased from 0.2 mW to 0.6 mW.
Keywords: electromagnetic energy harvesting, metamaterial, rectenna, rectifier circuit
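As a back-of-envelope check on the reported figures (0.6 mW harvested by the 0.12 m² wall feeding a 27 kΩ load), the sketch below derives the implied DC voltage and areal power density; the relation P = V²/R is standard, and the remaining numbers are the abstract's own:

```python
import math

P_harvested = 0.6e-3   # W, total harvested by the 8-rectenna wall
area = 0.12            # m^2, Electromagnetic Wall area
R_load = 27e3          # ohm, resistive load

V_dc = math.sqrt(P_harvested * R_load)   # from P = V^2 / R
density = P_harvested / area

print(f"DC voltage across load: {V_dc:.2f} V")                 # ~4.0 V
print(f"Harvested power density: {density * 1e3:.1f} mW/m^2")  # ~5 mW/m^2
```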
Procedia PDF Downloads 165
303 Seismo-Volcanic Hazards in Great Ararat Region, Eastern Turkey
Authors: Mehmet Salih Bayraktutan, Emre Tokmak
Abstract:
Great Ararat Volcano, the highest peak in the South Caucasus Volcanic Plateau, was uplifted by Quaternary basaltic pyroclastics and lava flows. Numerous volcanic cones formed along tensional fractures under an N-S compressional geodynamic framework. Basaltic flows with fresh surface morphology give ages of 650-680 ka. Hypersthene andesites, which constitute the major mass of Greater Ararat, give ages of 450-490 ka. During the early eruption period, predominantly pyroclastics, cinder, lapilli-ash and volcanic bombs were extruded. Third-period eruptions were dominantly basaltic lava flows. Andesitic domes are aligned along the NW-SE striking fractures. Hyalo-basalt and hornblende basaltic lavas are the latest lava eruptions; hyalo-basaltic eruptions occurred via parasitic cones distributed far from the center. Parasitic cones are most common at the foot of the mount, covered by recent NW-flowing basaltic lava, and some of the cones are distributed in a circular pattern. One of the most hazardous disasters recorded in Eastern Turkey was the July 1840 Cehennem Canyon flood. Seismically triggered volcanic activity melted the glacier cap; the meltwater mixed with ash and pyroclastics and flowed down the valley. The mud-rich slush surged catastrophically northwards, crossed the Aras River and dammed the Surmeli Basin, forming a reservoir behind it. The Ararat volcanoes are located on the NW-SE striking Agri Fault Zone, a zone of right-lateral extensional faults along which a series of andesitic domes formed. Great Ararat is, in general, a strato-type volcano. This huge structure developed in two main parts with different topographic and morphological features: a large lower base covering a widespread area, composed predominantly of pyroclastics, ignimbrites, agglomerates, and thick pumice and perlite deposits, which forms approximately one-third of the crest by height; and an upper conic-shaped part, the remaining two-thirds, composed of basaltic lava flows. The active tectonic structure consists of three different patterns. The first network is radially distributed fractures formed during the last stage of lava eruptions. The second group of active faults strikes NW, continuing on an N30W strike, and forms the Igdir Fault Zone. The third set of faults, dipping northwest at 75-80 degrees, strikes NE-SW across the whole mount, slicing Great Ararat into four segments. In the upper stage of Cehennem Canyon, this set cuts the volcanic layers and has caused numerous waterfalls, rock avalanches and mud flows along the canyon, threatening the village of Yanidogan at the apex of the flood deposits. The Great Ararat region carries a high seismo-tectonic risk, by occurrence frequency and magnitude, which has historically caused heavy disasters at villages surrounding the Ararat basement.
Keywords: Eastern Turkey, geohazard, great ararat volcano, seismo-tectonic features
Procedia PDF Downloads 181
302 GIS Technology for Environmentally Polluted Sites with Innovative Process to Improve the Quality and Assesses the Environmental Impact Assessment (EIA)
Authors: Hamad Almebayedh, Chuxia Lin, Yu Wang
Abstract:
The environmental impact assessment (EIA) must be improved, assessed, and quality-checked for human and environmental health and safety. Soil contamination is expanding, and site and soil remediation activities are proceeding around the world. Put simply, quality soil characterization leads to a quality EIA: it illuminates the level and extent of contamination and reveals the unknowns needed to remediate, quantify, contain, minimize and eliminate the environmental damage. Spatial interpolation methods play a significant role in decision making, planning remediation strategies, environmental management, and risk assessment, as they provide essential elements of site characterization, which need to be fed into the EIA. The innovative 3D soil mapping and soil characterization technology presented in this research paper reveals unknown information and the extent of the contaminated soil in particular, and enhances soil characterization information in general, which is reflected in improved information for developing site-specific EIAs. The foremost aims of this research paper are to present a novel 3D mapping technology to characterize, with high quality and cost-effectiveness, the distribution of key soil characteristics in contaminated sites, and to develop an innovative process/procedure of "assessment measures" for EIA quality and assessment. The contaminated site and field investigation was conducted with the innovative 3D mapping technology to characterize the composition of petroleum hydrocarbon-contaminated soils in a decommissioned oilfield waste pit in Kuwait. The results show the depth and extent of the contamination, which has been entered into a developed assessment process and procedure for the EIA quality review checklist to enhance the EIA and drive remediation and risk assessment strategies. We have concluded that to minimize the possible adverse environmental impacts on the investigated site in Kuwait, a soil-capping approach may be sufficient and may represent a cost-effective management option, as the environmental risk from the contaminated soils is considered to be relatively low. This research paper adopts a multi-method approach involving a review of the existing literature related to the research area, case studies, and computer simulation.
Keywords: quality EIA, spatial interpolation, soil characterization, contaminated site
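The abstract does not name the interpolation method used; as a hedged illustration, the sketch below implements inverse distance weighting (IDW), one common choice for estimating a contaminant concentration at an unsampled location from borehole samples. Coordinates and concentrations are hypothetical:

```python
import numpy as np

def idw(xy_samples, values, xy_query, power=2.0):
    """Inverse distance weighting: estimate the value at xy_query from samples."""
    d = np.linalg.norm(xy_samples - xy_query, axis=1)
    if np.any(d < 1e-9):                 # query coincides with a sample point
        return values[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * values) / np.sum(w)

# Hypothetical borehole locations (m) and TPH concentrations (mg/kg).
samples = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 3]])
tph = np.array([1200.0, 300.0, 150.0, 80.0, 2500.0])

print(f"Estimated TPH at (4, 4): {idw(samples, tph, np.array([4, 4])):.0f} mg/kg")
```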
Procedia PDF Downloads 88
301 International Coffee Trade in Solidarity with the Zapatista Rebellion: Anthropological Perspectives on Commercial Ethics within Political Antagonistic Movements
Authors: Miria Gambardella
Abstract:
The influence of solidarity demonstrations towards the Zapatista National Liberation Army has been constantly present over the years, both locally and internationally, guaranteeing visibility to the cause, shaping the movement's choices, and influencing its hopes of impact worldwide. Most of the coffee produced by the autonomous cooperatives of Chiapas is exported, making coffee trade the main income from international solidarity networks. The question arises of the implications of the relations established between the communities in resistance in Southeastern Mexico and international solidarity movements, specifically of the strategies adopted to reconcile the army's demands for autonomy with the economic asymmetries between the Zapatista cooperatives producing coffee and the European collectives who hold purchasing power. In order to deepen the inquiry into these topics, a year-long multi-site investigation was carried out. The first six months of fieldwork were based in Barcelona, where Zapatista coffee was first traded in Spain and where one of the historical and most important European solidarity groups can be found. The last six months of fieldwork were carried out directly in Chiapas, in contact with coffee producers, Zapatista political authorities, international activists, vendors, and the rest of the network involved in coffee production, roasting, and sale. The investigation was based on qualitative research methods, including participant observation, focus groups, and semi-structured interviews. The analysis did not only focus on retracing the steps of the market chain as if it were a linear and unilateral process; it rather aimed at exploring actors' reciprocal perceptions, roles, and dynamics of power. Demonstrations of solidarity, and the money circulation they imply, aim at changing the system in place and building alternatives, among other things, on the economic level. This work analyzes the formulation of discourse and the organization of solidarity activities that aim at building opportunities for action within a highly politicized economic sphere to which access must be regularly legitimized. The meaning conveyed by coffee is constructed on a symbolic level by the attribution of moral criteria to transactions, which participate in the construction of imaginaries that circulate through solidarity movements with the Zapatista rebellion. Commercial exchanges linked to solidarity networks turned out to represent much more than monetary transactions. The social, cultural, and political spheres are invested by ethics, which penetrates all aspects of militant action. It is at this level that the boundaries of different collective actors connect, contaminating each other: merely following the money flow would have been limiting in accounting for a reality within which imaginary is one of the main currencies. The notions of “trust”, “dignity” and “reciprocity” are repeatedly mobilized to negotiate discontinuous and multidirectional flows in an attempt to balance and justify commercial relations in a politicized context that characterizes its own identity by demonizing the “market economy” and its dehumanizing powers.
Keywords: coffee trade, economic anthropology, international cooperation, Zapatista National Liberation Army
Procedia PDF Downloads 86
300 Complex Decision Rules in Quality Assurance Processes for Quick Service Restaurant Industry: Human Factors Determining Acceptability
Authors: Brandon Takahashi, Marielle Hanley, Gerry Hanley
Abstract:
The large-scale quick-service restaurant industry is a complex business to manage optimally. With over 40 suppliers providing different ingredients for food preparation and thousands of restaurants serving over 50 unique food offerings across a wide range of regions, a company must implement a quality assurance process. Businesses want to deliver quality food efficiently, reliably, and successfully at a low cost that the public wants to buy, and they want to make sure that their food offerings are never unsafe to eat or of poor quality. A good reputation (and profitable business) developed over the years can be gone in an instant if customers fall ill eating your food. Poor quality also results in food waste, and the cost of corrective actions is compounded by the reduction in revenue. Product compliance evaluation assesses whether the supplier's ingredients comply with the specifications of several attributes (physical, chemical, organoleptic) that a company will test to ensure that quality, safe-to-eat food is given to the consumer and delivers the same eating experience in all parts of the country. The technical component of the evaluation includes the chemical and physical tests that produce numerical results relating to shelf-life, food safety, and organoleptic qualities. The psychological component of the evaluation is organoleptic, i.e., acting on or involving the use of the sense organs. The rubric for product compliance evaluation has four levels: (1) Ideal: meeting or exceeding all technical (physical and chemical), organoleptic, and psychological specifications. (2) Deviation from ideal with no impact on quality: not meeting some technical and organoleptic/psychological specifications without impact on consumer quality, while meeting all food safety requirements. (3) Acceptable: not meeting some technical and organoleptic/psychological specifications, resulting in a reduction of consumer quality but not enough to lessen demand, while meeting all food safety requirements. (4) Unacceptable: not meeting food safety requirements, independent of meeting technical and organoleptic specifications; or meeting all food safety requirements but with product quality that results in consumer rejection of the food offering. Sampling of products and consumer tastings within the distribution network are a second critical element of the quality assurance process and are the data sources for the statistical analyses. Each finding is not independently assessed with the rubric; for example, the chemical data will be used to back up or support any inferences on the sensory profiles of the ingredients. Certain flavor profiles may not be as apparent when mixed with other ingredients, which leads to weighing specifications differentially in the acceptability decision. Quality assurance processes are essential to achieve the balance of quality and profitability by making sure the food is safe and tastes good while identifying and remediating product quality issues before they hit the stores. Comprehensive quality assurance procedures implement human factors methodologies, and this report provides recommendations for the systemic application of quality assurance processes for quick-service restaurant services. This case study reviews the complex decision rubric and evaluates processes to ensure the right balance of cost, quality, and safety is achieved.
Keywords: decision making, food safety, organoleptics, product compliance, quality assurance
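As a hedged illustration of the four-level rubric above, the sketch below encodes its decision rules as a small function; the boolean inputs and their names are assumptions made for illustration, not the company's actual criteria:

```python
def compliance_level(meets_food_safety: bool,
                     meets_all_specs: bool,
                     quality_reduced: bool,
                     consumer_rejects: bool) -> str:
    """Classify a product against the four-level compliance rubric."""
    if not meets_food_safety or consumer_rejects:
        return "4 - Unacceptable"
    if meets_all_specs:
        return "1 - Ideal"
    if not quality_reduced:
        return "2 - Deviation from ideal, no impact on quality"
    return "3 - Acceptable (reduced quality, demand unaffected)"

# Example: safety requirements met, some specs missed, quality reduced.
print(compliance_level(True, False, True, False))  # -> 3 - Acceptable (...)
```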
Procedia PDF Downloads 188
299 Protocol for Dynamic Load Distributed Low Latency Web-Based Augmented Reality and Virtual Reality
Authors: Rohit T. P., Sahil Athrij, Sasi Gopalan
Abstract:
Currently, the content entertainment industry is dominated by mobile devices. As the trends slowly shift towards Augmented/Virtual Reality applications, the computational demands on these devices are increasing exponentially, and we are already reaching the limits of hardware optimizations. This paper proposes a software solution to this problem. By leveraging the capabilities of cloud computing, we can offload the work from mobile devices to dedicated rendering servers that are far more powerful, but this introduces the problem of latency. This paper introduces a protocol that can achieve a high-performance, low-latency Augmented/Virtual Reality experience. There are two parts to the protocol. 1) In-flight compression: The main cause of latency in the system is the time required to transmit the camera frame from client to server. The round trip time is directly proportional to the amount of data transmitted, so it can be reduced by compressing the frames before sending. Using standard compression algorithms like JPEG results in only a minor size reduction. Since the images to be compressed are consecutive camera frames, there will not be many changes between two consecutive images, so inter-frame compression is preferred. Inter-frame compression can be implemented efficiently using WebGL, but the implementation of WebGL limits the precision of floating point numbers to 16 bits on most devices. This can introduce noise to the image due to rounding errors, which will add up eventually. This can be solved using an improved inter-frame compression algorithm that detects changes between frames and reuses unchanged pixels from the previous frame, eliminating the need for floating point subtraction and thereby cutting down on noise. The change detection is also improved drastically by taking the weighted average difference of pixels instead of the absolute difference. The kernel weights for this comparison can be fine-tuned to match the type of image to be compressed. 2) Dynamic load distribution: Conventional cloud computing architectures work by offloading as much work as possible to the servers, but this approach can cause a hit on bandwidth and server costs. The most optimal solution is obtained when the device utilizes 100% of its resources and the rest is done by the server. The protocol balances the load between the server and the client by doing a fraction of the computing on the device, depending on the power of the device and network conditions. The protocol is responsible for dynamically partitioning the tasks. Special flags are used to communicate the workload fraction between the client and the server and are updated at a constant interval of time (or frames). The whole protocol is designed so that it can be client-agnostic: flags are available to the client for resetting the frame, indicating latency, switching mode, etc. The server can react to client-side changes on the fly and adapt accordingly by switching to different pipelines. The server is designed to effectively spread the load and thereby scale horizontally, which is achieved by isolating client connections into different processes.
Keywords: 2D kernelling, augmented reality, cloud computing, dynamic load distribution, immersive experience, mobile computing, motion tracking, protocols, real-time systems, web-based augmented reality application
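A minimal sketch of one plausible reading of the change-detection step described above, assuming grayscale frames as NumPy arrays; the 3x3 kernel weights and the threshold are illustrative assumptions to be tuned to the image type, as the abstract notes:

```python
import numpy as np
from scipy.ndimage import convolve

# Illustrative 3x3 kernel: weights the center pixel most, then its neighbors.
KERNEL = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=np.float32) / 16.0

def changed_mask(prev, curr, threshold=8.0):
    """Flag pixels whose weighted-average difference exceeds the threshold.

    Unchanged pixels can then be reused from the previous frame, so only
    the changed pixels need to be transmitted.
    """
    diff = np.abs(curr.astype(np.float32) - prev.astype(np.float32))
    weighted = convolve(diff, KERNEL, mode="nearest")
    return weighted > threshold

# Hypothetical pair of consecutive 480x640 grayscale camera frames.
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (480, 640), dtype=np.uint8)
curr = prev.astype(np.int16)
curr[100:150, 200:260] += 40            # a moving object changes one region

mask = changed_mask(prev, curr)
print(f"Pixels to retransmit: {mask.mean() * 100:.1f}%")
```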
Procedia PDF Downloads 72
298 Femicide: The Political and Social Blind Spot in the Legal and Welfare State of Germany
Authors: Kristina F. Wolff
Abstract:
Background: In the Federal Republic of Germany, violence against women is deeply embedded in society. Germany is, as of March 2020, the most populous member state of the European Union, with 83.2 million inhabitants, and although more than half of its inhabitants are women, gender equality was not enshrined in the Basic Law until 1957. Women have only been allowed to enter paid employment without their husband's consent since 1977, and marital rape has been prosecutable only since 1997. While the lack of equality between men and women is named in the preamble of the Istanbul Convention as the cause of gender-specific, structural, traditional violence against women, Germany continues to slip in the latest Gender Equality Index. According to the Police Crime Statistics (PCS), women are significantly more often victims of lethal violence emanating from men than vice versa. The PCS, which since 2015 has also collected gender-specific data on violent crimes, is kept by the Federal Criminal Police Office, but without taking into account the criteria relevant for targeted prevention, such as the perpetrator's/killer's history of violence, weapon, motivation, etc. Institutions such as EIGE and the World Health Organization have been asking Germany for years, in vain, for comparable data on violence against women in order to gain an overview or to develop cross-border synergies. The PCS are the only official data collection on violence against women, and all players involved depend on this data set, which is published only in November of the following year and is thus already completely outdated at the time of publication. In order to combat German femicides causally, purposefully and efficiently, evidence-based data was urgently needed. Methodology: Beginning in January 2019, a database was set up that now tracks more than 600 German femicides, broken down by more than 100 crime-related individual criteria, which in turn go far beyond the official PCS. These data are evaluated on the one hand through daily media research and on the other hand through case-specific inquiries at the respective public prosecutor's offices and courts nationwide. This quantitative long-term study covers domestic violence as well as a variety of different types of gender-specific lethal violence, including, for example, femicides committed by German citizens abroad. Additionally, alcohol/narcotic and/or drug abuse, infanticides, and the gender aspect in the judiciary are also considered. Results: Since November 2020, evidence-based data from a scientific survey have been available for the first time in Germany, supplementing the rudimentary picture of reality provided by the PCS with a number of relevant parameters. The most important goal of the study is to identify "red flags" that enable general preventive awareness, that serve increasingly precise hazard assessment in acute hazard situations, and from which concrete instructions for action can be derived. Already at a very early stage of the study, it could be shown that in more than half of all femicides with a sexual perpetrator/victim constellation there was an age difference of five years or more. Summary: Without reliable data and an understanding of the nature and extent, cause and effect, it is impossible to sustainably curb violence against girls and women, which increasingly often culminates in femicide. In Germany, valid data from a scientific survey have been available for the first time since November 2020, supplementing the rudimentary picture of reality of the official and, to date, sole crime statistics with several relevant parameters. The basic research provides insights into the geographic concentration, monthly peaks and modus operandi of male violent excesses. A significant increase in child homicides in the course of femicides, and/or child homicides as an instrument of violence against the mother, could be shown, as could an elevated danger for affected persons where the age difference is five years or more. In view of the steadily increasing wave of violence against women, these study results are an eminent contribution to the preventive containment of German femicides.
Keywords: femicide, violence against women, gender-specific data, rule of law, Istanbul Convention, gender equality, gender-based violence
Procedia PDF Downloads 89
297 Femoral Neck Anteversion and Neck-Shaft Angles: Determination and Their Clinical Implications in Fetuses of Different Gestational Ages
Authors: Vrinda Hari Ankolekar, Anne D. Souza, Mamatha Hosapatna
Abstract:
Introduction: Precise anatomical assessment of femoral neck anteversion (FNA) and the neck-shaft angle (NSA) is essential in diagnosing pathological conditions involving the hip joint and its ligaments. An FNA of greater than 20 degrees is considered excessive femoral anteversion, whereas a torsion angle of less than 10 degrees is considered femoral retroversion. Excessive femoral torsion is not uncommon and has been associated with certain neurologic and orthopedic conditions. The enlargement and maturation of the hip joint increase from the 20th week of gestation, and the NSA ranges from 135-140° at birth. Material and methods: 48 femurs were tagged according to gestational age (GA), and two photographs of each femur were taken using a Nikon digital camera. Each femur was placed on a horizontal hard desk; an end-on image of the upper end was taken for the estimation of the FNA, and a photograph in a perpendicular plane was taken to calculate the NSA. The images were transferred to a computer and stored in TIFF format. Microsoft Paint software was used to mark the points, and ImageJ software was used to calculate the angles digitally. 1. Calculation of FNA: The midpoints of the femoral head and the neck were marked, and a line was drawn joining these two points. The angle made by this line with the horizontal plane was measured as the FNA. 2. Calculation of NSA: The midpoints of the femoral head and the neck were marked, and a line was drawn joining these two points. A vertical line was drawn passing through the tip of the greater trochanter to the intercondylar notch. The angle formed by these lines was calculated as the NSA. Results: The paired t-test for inter-observer variability showed no significant difference between the values of the two observers (FNA: t = -1.06, p = 0.31; NSA: t = -0.09, p = 0.9). The FNA ranged from 17.08° to 33.97° on the right and 17.32° to 45.08° on the left. The NSA ranged from 139.33° to 124.91° on the right and 143.98° to 123.8° on the left. An unpaired t-test was applied to compare the mean angles between the second and third trimesters and did not show any statistical significance, indicating that the FNA and NSA of the femur did not vary significantly during the third trimester. The FNA and NSA were correlated with GA using Pearson's correlation: the FNA appeared to increase with GA (r = 0.5), but the increase was not statistically significant, and a decrease in the NSA was noted with GA (r = -0.3), which was also not statistically significant. Conclusion: The present study evaluates the FNA and NSA of the femur in fetuses and correlates their development with GA during the second and third trimesters. The FNA and NSA did not vary significantly during the third trimester.
Keywords: anteversion, coxa antetorsa, femoral torsion, femur neck shaft angle
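A minimal sketch of the two angle measurements described above, assuming 2D landmark coordinates digitized from the photographs; the coordinates below are hypothetical stand-ins:

```python
import numpy as np

def angle_to_horizontal(p1, p2):
    """Angle (degrees) between the line p1 -> p2 and the image horizontal."""
    dx, dy = np.subtract(p2, p1)
    return abs(np.degrees(np.arctan2(dy, dx)))

def angle_between(p1, p2, q1, q2):
    """Angle (degrees) between the lines p1 -> p2 and q1 -> q2."""
    u, v = np.subtract(p2, p1), np.subtract(q2, q1)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical digitized landmarks (pixel coordinates).
# End-on image of the upper end -> FNA:
head_mid_axial, neck_mid_axial = (120, 95), (160, 80)
# Perpendicular-plane image -> NSA:
neck_mid, head_mid = (150, 90), (190, 60)
trochanter_tip, condylar_notch = (150, 60), (150, 400)

print(f"FNA ~ {angle_to_horizontal(head_mid_axial, neck_mid_axial):.1f} deg")
print(f"NSA ~ {angle_between(neck_mid, head_mid, trochanter_tip, condylar_notch):.1f} deg")
```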
Procedia PDF Downloads 319
296 Inappropriate Effects Which the Use of Computer and Playing Video Games Have on Young People
Authors: Maja Ruzic-Baf, Mirjana Radetic-Paic
Abstract:
The use of computers by children has many positive aspects, including the development of memory, learning methods, problem-solving skills and the feeling of one's own competence and self-confidence. Playing online video games can encourage hanging out with peers having similar interests as well as communication; it develops coordination, spatial relations and presentation. On the other hand, the Internet enables quick access to different information and the exchange of experiences. How kids use computers, and what the negative effects of this can be, depends on various factors. ICT has improved and become easily accessible to everyone. In the past 12 years, so many video games have been made, even to the level that some of them are free to play. Young people, and even some adults, have simply started to forget about the real outside world, because in that other, digital world they have found something that makes them feel more worthy. This article presents the use of ICT, forms of behavior, and addiction to online video games.
Keywords: addiction to video games, behaviour, ICT, young people
Procedia PDF Downloads 545
295 The Importance of Value Added Services Provided by Science and Technology Parks to Boost Entrepreneurship Ecosystem in Turkey
Authors: Faruk Inaltekin, Imran Gurakan
Abstract:
This paper will aim to discuss the importance of value-added services provided by Science and Technology Parks for entrepreneurship development in Turkey. Entrepreneurship is a vital subject for all countries: it has not only fostered economic development but also promoted innovation at local and international levels. To foster a high-tech entrepreneurship ecosystem, the Technopark (Science and Technology Park/STP) concept was initiated with the establishment of Silicon Valley in the 1950s. The success and rise of Silicon Valley led to the spread of technopark activities, and developed economies have been setting up projects to plan and build STPs since the 1960s and 1970s. To promote the establishment of STPs in Turkey, the necessary legislation, the Technology Development Zones Law (No. 4691), was enacted by the Ministry of Science, Industry, and Technology in 2001 and revised in 2016 to provide more support. STPs' basic aim is to provide customers with high-quality office spaces and various "value-added services" such as business development, network connections, cooperation programs, investor/customer meetings and internationalization services. To this end, STPs should help startups deal with difficulties in the early stages and support mature companies' export activities in foreign markets. STPs should support the production, commercialization and, more significantly, internationalization of technology-intensive business and foster the growth of companies. Nowadays, among these value-added services, internationalization is a very popular subject worldwide: most STPs design clusters or accelerator programs in order to support their companies' foreign market penetration. If startups are not ready for international competition, STPs should help them get ready for the foreign market with training and mentoring sessions. These training and mentoring sessions should take a goal-based approach to working with companies. Each company has different needs and goals, and the definition of "success" therefore varies for each company. For this reason, it is very important to create customized value-added services to meet the needs of startups. After local support, STPs should also be able to support their startups in foreign markets; organizing a well-defined international accelerator program plays an important role in this mission. Turkey is strategically placed between key markets in Europe, Russia, Central Asia and the Middle East; its population is young and well educated, so both government agencies and the private sector endeavor to foster and encourage the entrepreneurship ecosystem with many supports. In sum, the task of technoparks, with these and similar value-added services, is very important for developing an entrepreneurship ecosystem. The priority of all value-added services is to identify the commercialization and growth obstacles faced by entrepreneurs and remove them with one-to-one customized services. Also, in order to have a healthy startup ecosystem and create sustainable entrepreneurship, stakeholders (technoparks, incubators, accelerators, investors, universities, governmental organizations, etc.) should fulfill their roles and/or duties and collaborate with each other; STPs play an important role as a bridge between these stakeholders and entrepreneurs. STPs should always benchmark and renew the services they offer to help start-ups survive, develop their business and benefit from these stakeholders.
Keywords: accelerator, cluster, entrepreneurship, startup, technopark, value added services
Procedia PDF Downloads 143
294 Role of Functional Divergence in Specific Inhibitor Design: Using γ-Glutamyltranspeptidase (GGT) as a Model Protein
Authors: Ved Vrat Verma, Rani Gupta, Manisha Goel
Abstract:
γ-glutamyltranspeptidase (GGT: EC 2.3.2.2) is an N-terminal nucleophile hydrolase conserved in all three domains of life. GGT plays a key role in glutathione metabolism, where it catalyzes the breakage of γ-glutamyl bonds and the transfer of the γ-glutamyl group to water (hydrolytic activity) or to amino acids or short peptides (transpeptidase activity). GGTs from bacteria, archaea, and eukaryotes (human, rat and mouse) are homologous proteins sharing >50% sequence similarity and a conserved four-layered αββα sandwich-like three-dimensional structural fold. These proteins, though structurally similar to each other, are quite diverse in their enzyme activity: some GGTs are better at hydrolysis reactions but poor in transpeptidase activity, whereas many others show the opposite behaviour. GGT is known to be involved in various diseases like asthma, Parkinson's disease, arthritis, and gastric cancer, and its inhibition prior to chemotherapy treatments has been shown to sensitize tumours to the treatment. Microbial GGT is also known to be a virulence factor, important for the colonization of bacteria in the host. However, all known inhibitors (mimics of its native substrate, glutamate) are highly toxic because they interfere with other enzyme pathways; only a few successful efforts have been reported previously in designing species-specific inhibitors. We aim to leverage the diversity seen in the GGT family (pathogen vs. eukaryote) for designing specific inhibitors. Thus, in the present study, we have used the DIVERGE software to identify sites in GGT proteins which are crucial for the functional and structural divergence of these proteins. Since type II divergence sites vary in a clade-specific manner, type II divergent sites were our focus of interest throughout the study. Type II divergent sites were identified for pathogen vs. eukaryote clusters, and the sites were marked on the clade-specific representative structures HpGGT (2QM6) and HmGGT (4ZCG) of the pathogen and eukaryote clades, respectively. The crucial divergent sites within a 15 Å radius of the binding cavity were highlighted, and in-silico mutations were performed on these sites to delineate their role in the mechanism of catalysis and protein folding. Further, amino acid network (AAN) analysis was performed with Cytoscape to delineate assortative mixing for the cavity divergent sites, which could strengthen our hypothesis. Additionally, molecular dynamics simulations were performed for wild-type and mutant complexes close to physiological conditions (pH 7.0, 0.1 M ionic strength and 1 atm pressure), and the role of the putative divergence sites and the structural integrity of the homologous proteins were analysed. The dynamics data were scrutinized in terms of RMSD, RMSF, non-native H-bonds and salt bridges. The RMSD and RMSF fluctuations of the protein complexes were compared, and the changes at the protein-ligand binding sites were highlighted. The outcomes of our study highlight some crucial divergent sites which could be used for novel inhibitor design in a species-specific manner. Since, for drug development, it is challenging to design a novel drug by targeting a protein with a similar eukaryotic counterpart, this study could set up an initial platform to overcome this challenge and help deduce more effective targets for novel drug discovery.
Keywords: γ-glutamyltranspeptidase, divergence, species-specific, drug design
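The abstract scrutinizes the dynamics data in terms of RMSD; as a hedged illustration, the sketch below computes the RMSD between a trajectory frame and a reference structure after optimal superposition with the Kabsch algorithm. The coordinates are random stand-ins for real trajectory data:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between coordinate sets P and Q (N x 3) after optimal superposition."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])          # avoid an improper rotation (reflection)
    R = U @ D @ Vt
    return np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1)))

# Random stand-ins for a reference structure and a perturbed trajectory frame.
rng = np.random.default_rng(1)
ref = rng.normal(size=(500, 3)) * 10.0                 # 500 "CA atoms"
frame = ref + rng.normal(scale=0.5, size=ref.shape)

print(f"RMSD: {kabsch_rmsd(frame, ref):.2f} A")
```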
Procedia PDF Downloads 268
293 Reduction of the Risk of Secondary Cancer Induction Using VMAT for Head and Neck Cancer
Authors: Jalil ur Rehman, Ramesh C. Tailor, Isa Khan, Jahanzeeb Ashraf, Muhammad Afzal, Geoffrey S. Ibbott
Abstract:
The purpose of this analysis is to estimate secondary cancer risks after VMAT compared to other modalities of head and neck radiotherapy (IMRT, 3DCRT). Computed tomography (CT) scans of the Radiological Physics Center (RPC) head and neck phantom were acquired with a CT scanner and exported via DICOM to the treatment planning system (TPS). Treatment planning was done using four arcs (182-178 and 180-184 degrees, clockwise and anticlockwise) for volumetric modulated arc therapy (VMAT); nine fields (200, 240, 280, 320, 0, 40, 80, 120 and 160 degrees), as commonly used at MD Anderson Cancer Center, Houston, for intensity modulated radiation therapy (IMRT); and four fields for three-dimensional conformal radiation therapy (3DCRT). A TrueBeam linear accelerator with 6 MV photon energy was used for dose delivery, and dose calculation was done with the CC convolution algorithm with a prescription dose of 6.6 Gy. Planning target volume (PTV) coverage, mean and maximal doses, DVHs, and the volumes of OARs receiving more than 2 Gy and 3.8 Gy were calculated and compared. Absolute point dose and planar dose were measured with thermoluminescent dosimeters (TLDs) and GafChromic EBT2 film, respectively. Quality assurance of VMAT and IMRT was performed using the ArcCHECK method with gamma index criteria of 3%/3 mm dose difference/distance to agreement (DD/DTA). PTV coverage was found to be 90.80%, 95.80% and 95.82% for 3DCRT, IMRT and VMAT, respectively. VMAT delivered the lowest maximal doses to the esophagus (2.3 Gy), brain (4.0 Gy) and thyroid (2.3 Gy) of all the studied techniques. In comparison, maximal doses for 3DCRT were found to be higher than for VMAT for all studied OARs, whereas IMRT delivered maximal doses 26%, 5% and 26% higher for the esophagus, normal brain and thyroid, respectively, compared to VMAT. It was noted that the esophagus volume receiving more than 2 Gy was 3.6% for VMAT, 23.6% for IMRT and up to 100% for 3DCRT. Good agreement was observed between measured doses and those calculated with the TPS. The average relative standard errors (RSE) of three deliveries within eight TLD capsule locations were 0.9%, 0.8% and 0.6% for 3DCRT, IMRT and VMAT, respectively. The gamma analysis for all plans met the ±5%/3 mm criteria (over 90% passed), and the results of QA were greater than 98%. The calculations of maximal doses and OAR volumes suggest that the estimated risk of secondary cancer induction after VMAT is considerably lower than after IMRT and 3DCRT.
Keywords: RPC, 3DCRT, IMRT, VMAT, EBT2 film, TLD
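As a hedged illustration of the gamma analysis mentioned above, the sketch below implements a simplified 1D global gamma computation with 3%/3 mm criteria; real plan QA (e.g., with ArcCHECK) evaluates 2D/3D dose distributions, and the dose profiles here are invented:

```python
import numpy as np

def gamma_pass_rate(dose_eval, dose_ref, spacing, dd=0.03, dta=3.0):
    """Simplified 1D global gamma analysis (dd: dose fraction, dta: mm)."""
    x = np.arange(len(dose_ref)) * spacing
    norm = dose_ref.max()                      # global normalization
    passes = []
    for xi, de in zip(x, dose_eval):
        g2 = ((x - xi) / dta) ** 2 + ((dose_ref - de) / (dd * norm)) ** 2
        passes.append(np.sqrt(g2.min()) <= 1.0)
    return 100.0 * np.mean(passes)

# Hypothetical planned vs. measured dose profiles at 1 mm spacing.
x = np.linspace(-30, 30, 61)
planned = 6.6 * np.exp(-(x / 20.0) ** 2)       # peak near the 6.6 Gy prescription
measured = planned * (1 + np.random.default_rng(2).normal(0, 0.01, x.shape))

print(f"Gamma pass rate (3%/3 mm): {gamma_pass_rate(measured, planned, 1.0):.1f}%")
```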
Procedia PDF Downloads 507
292 Maneuvering Modelling of a One-Degree-of-Freedom Articulated Vehicle: Modeling and Experimental Verification
Authors: Mauricio E. Cruz, Ilse Cervantes, Manuel J. Fabela
Abstract:
The evaluation of the maneuverability of road vehicles is generally carried out through specialized computer programs due to the advantages they offer compared to the experimental method. These programs are based on purely geometric considerations of the characteristics of the vehicles, such as main dimensions, the location of the axles, and points of articulation, without considering parameters such as weight magnitude and distribution, tire properties, etc. In this paper, we address the problem of the maneuverability of a semi-trailer truck navigating urban streets, maneuvering yards, and parking lots, using the Ackermann principle to propose a kinematic model through which, by geometric considerations, it is possible to determine the space necessary to maneuver safely. The model was experimentally validated by conducting maneuverability tests with an articulated vehicle. The measurements were made with a GPS that provides the position, trajectory, and speed of the vehicle; an inertial measurement unit (IMU) that measures the accelerations and angular speeds of the semi-trailer; and an instrumented steering wheel that measures its angle of rotation, angular velocity, and the torque applied to it. To obtain the steering angle of the tires, a parameterization of the complete travel of the steering wheel and its equivalent at the tires was carried out. For the tests, 3 different angles were selected, and 3 turns were made for each angle in both directions of rotation (left and right turn). The results showed that the proposed kinematic model achieved 95% accuracy for speeds below 5 km/h. The experiments revealed that tighter maneuvers significantly increased the space required and that the vehicle's maneuverability was limited by the size of the semi-trailer. The maneuverability was also tested as a function of the vehicle load, using 3 different load levels: light, medium, and heavy. It was found that the internal turning radii also increased with the load, probably due to changes in the tires' adhesion to the pavement, since heavier loads had larger wheel-road contact surfaces. The load was found to be an important factor affecting the precision of the model (up to 30%) and therefore should be considered. The model obtained is expected to be used to improve maneuverability through a robust control system.
Keywords: articulated vehicle, experimental validation, kinematic model, maneuverability, semi-trailer truck
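A minimal sketch of the Ackermann-based geometry underlying such a kinematic model, assuming a bicycle approximation of the tractor, a single articulation point, and steady-state circular motion; the dimensions are illustrative, not those of the test vehicle:

```python
import math

def turning_radii(wheelbase, hitch_to_axle, steer_deg):
    """Ackermann bicycle model: tractor rear-axle radius and trailer-axle radius."""
    delta = math.radians(steer_deg)
    r_tractor = wheelbase / math.tan(delta)          # tractor rear-axle radius
    # In steady-state circular motion the trailer axle tracks a smaller circle.
    r_trailer = math.sqrt(max(r_tractor**2 - hitch_to_axle**2, 0.0))
    return r_tractor, r_trailer

# Illustrative dimensions (m): tractor wheelbase 4.0, hitch-to-trailer-axle 8.0.
for steer in (10, 15, 20):
    r_t, r_s = turning_radii(4.0, 8.0, steer)
    print(f"steer {steer:2d} deg: tractor R = {r_t:5.1f} m, "
          f"trailer R = {r_s:5.1f} m, off-tracking = {r_t - r_s:4.1f} m")
```

The off-tracking value (the difference between the two radii) is what determines the extra swept width the semi-trailer needs in tight maneuvers.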
Procedia PDF Downloads 117
291 Decision Making on Smart Energy Grid Development for Availability and Security of Supply Achievement Using Reliability Merits
Authors: F. Iberraken, R. Medjoudj, D. Aissani
Abstract:
The development of the smart grid concept is built around two separate definitions, namely the European one, oriented towards sustainable development, and the American one, oriented towards reliability and security of supply. In this paper, we investigate reliability merits enabling decision-makers to provide a high quality of service. The analysis is based on system behavior, using interruption and failure modeling and forecasting on the one hand, and on the contribution of information and communication technologies (ICT) to mitigating catastrophic events such as blackouts on the other hand. It was found that this concept has been adopted by developing and emerging countries for short- and medium-term planning, followed by the sustainability concept for long-term planning. This work highlights reliability merits such as benefits, opportunities, costs, and risks (BOCR), considered as consistent units for measuring power customer satisfaction. From the decision-making point of view, we used the analytic hierarchy process (AHP) to achieve customer satisfaction, based on the reliability merits and the contribution of the various energy resources. Certainly, fossil and nuclear resources dominate energy production nowadays, but great advances have already been made towards cleaner ones. It was demonstrated that these resources are not only environmentally but also economically and socially sustainable. The paper is organized as follows: section one is devoted to the introduction, where an implicit review of smart grid development is given for the two main concepts (for the USA and European countries). The AHP method and the BOCR development of reliability merits against power customer satisfaction are presented in section two. The benefits were expressed by the high level of availability, the applicability of maintenance actions, and power quality. Opportunities were highlighted by the implementation of ICT in data transfer and processing, the mastering of peak demand control, the decentralization of production, and power system management under fault conditions. Costs were evaluated using cost-benefit analysis, including the investment expenditures in network security, as the grid becomes a target for hackers and terrorists, and the profits of operating as decentralized systems, with reduced energy not supplied thanks to the availability of storage units fed from renewable resources and to power-line communication (CPL) enabling the power dispatcher to manage load shedding optimally. For risks, we raised the question of citizens' willingness to contribute financially to the system and to the utility restructuring: what is the degree of their agreement with the guarantees proposed by the managers regarding information integrity, and, from a technical point of view, do they have sufficient information and knowledge to operate a smart home and a smart system? In section three, the AHP method is applied to achieve power customer satisfaction based on the main energy resources as alternatives, using knowledge from a country with great advances in energy transition. Results and discussions are given in section four. We conclude that the choice of a given resource depends on the attitude of the decision-maker (prudent, optimistic, or pessimistic) and that the status quo is neither sustainable nor satisfactory.Keywords: reliability, AHP, renewable energy resources, smart grids
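To make the AHP step concrete, here is a minimal sketch of deriving priority weights from a pairwise comparison matrix via the principal eigenvector, with Saaty's consistency check; the judgment matrix below is an invented example on the 1-9 scale, not the paper's data.

```python
import numpy as np

# Pairwise comparisons of four illustrative alternatives (e.g., energy resources).
A = np.array([
    [1,   3,   5,   2  ],
    [1/3, 1,   3,   1/2],
    [1/5, 1/3, 1,   1/4],
    [1/2, 2,   4,   1  ],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # principal eigenvalue index
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                             # normalized priority vector

n = A.shape[0]
lam_max = eigvals.real[k]
ci = (lam_max - n) / (n - 1)             # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index for n = 3..5
cr = ci / ri                             # consistency ratio; < 0.1 is acceptable
print("weights:", np.round(w, 3), " CR:", round(cr, 3))
```

In a BOCR setting, one such priority vector would be computed per merit (benefits, opportunities, costs, risks) and the results combined into an overall ranking of the alternatives.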
Procedia PDF Downloads 442290 Temperamental Determinants of Eye-Hand Coordination Formation in the Special Aerial Gymnastics Instruments (SAGI)
Authors: Zdzisław Kobos, Robert Jędrys, Zbigniew Wochyński
Abstract:
Motor activity and good health are sine qua non determinants of proper practice of a profession, especially aviation. Therefore, candidates for aviation are selected according to their psychomotor ability by specialist medical commissions, and they must also pass an examination of physical fitness. During studies at the air force academy, eye-hand coordination is formed in two stages: besides general physical education, future aircraft pilots must undergo specialist training on SAGI. Training includes the looping, the aerowheel, and the gyroscope. The aim of training on the above-listed apparatuses is to form eye-hand coordination for tasks in the air; such coordination is necessary to perform various figures in real flight. Therefore, during the education of future pilots, determinants of the effective formation of this important parameter of human body functioning are sought. Several studies in sport psychology indicate an important role of temperament as a factor determining human behavior during task performance and the acquisition of operating skills. The Polish psychologist Jan Strelau refers to the basic, relatively constant personality features which manifest themselves in the formal characteristics of human behavior. Temperament, initially determined by inborn physiological mechanisms, changes in the course of maturation and under environmental factors, and concerns the energetic level and temporal characteristics of reactions. Objectives: This study aimed at seeking a relationship between temperamental features and eye-hand coordination formation during training on SAGI. Material and Methods: A group of 30 pilotage students was examined in two situations. The first assessment of the eye-hand coordination level was carried out before the beginning of a 30-hour training on SAGI; the second assessment was carried out after training completion. Training lasted 2 hours once a week. Temperament was evaluated with the Formal Characteristics of Behavior − Temperament Inventory (FCB-TI) developed by Bogdan Zawadzki and Jan Strelau. Eye-hand coordination was assessed with a computer version of the Warsaw System of Psychological Tests. Results: It was found that the training on SAGI increased the level of eye-hand coordination in the examined students. Conclusions: A higher level of eye-hand coordination was obtained after completion of the training. Moreover, the relationship between eye-hand coordination level and selected temperamental features was statistically significant.Keywords: temperament, eye-hand coordination, pilot, SAGI
Procedia PDF Downloads 440289 Flood Early Warning and Management System
Authors: Yogesh Kumar Singh, T. S. Murugesh Prabhu, Upasana Dutta, Girishchandra Yendargaye, Rahul Yadav, Rohini Gopinath Kale, Binay Kumar, Manoj Khare
Abstract:
The Indian subcontinent is severely affected by floods that cause intense, irreversible devastation to crops and livelihoods. With increased incidences of floods and their related catastrophes, an early warning system for flood prediction and an efficient flood management system for the river basins of India are a must. Accurately modeled hydrological conditions and a web-based early warning system may significantly reduce the economic losses incurred due to floods and enable end users to issue advisories with better lead time. This study describes the design and development of an Early Warning System for Flood Prediction (EWS-FP) using advanced computational tools and methods, viz. high-performance computing (HPC), remote sensing, GIS technologies, and open-source tools, for the Mahanadi River Basin of India. The flood prediction is based on a robust 2D hydrodynamic model, which solves the shallow water equations using the finite volume method. Considering the complexity of the hydrological modeling and the size of the basins in India, there is always a tug of war between better forecast lead time and the optimal resolution at which the simulations are to be run. High-performance computing provides a good computational means to overcome this issue for the construction of national-level or basin-level flash flood warning systems with high resolution for local-level warning analysis and better lead time. High-performance computers with capacities on the order of teraflops and petaflops prove useful when running simulations over such large areas at optimum resolutions. In this study, a free and open-source, HPC-based 2D hydrodynamic model, with the capability to simulate rainfall runoff, river routing, and tidal forcing, is used. The model was tested for a part of the Mahanadi River Basin (Mahanadi Delta) with actual and predicted discharge, rainfall, and tide data. The simulation time was reduced from 8 hrs to 3 hrs by increasing the CPU nodes from 45 to 135, which shows good scalability and performance enhancement. The simulated flood inundation spread and stage were compared with SAR data and CWC observed gauge data, respectively. The system shows good accuracy and a lead time suitable for flood forecasting in near-real-time. To disseminate warnings to end users, a network-enabled solution was developed using open-source software. The system has query-based flood damage assessment modules with outputs in the form of spatial maps and statistical databases. The system effectively facilitates the management of post-disaster activities caused by floods, such as displaying spatial maps of the affected area and inundated roads, and maintains a steady flow of information at all levels, with different access rights depending upon the criticality of the information. It is designed to facilitate users in managing information related to flooding during critical flood seasons and analyzing the extent of the damage.Keywords: flood, modeling, HPC, FOSS
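For reference, the 2D shallow water equations that such finite-volume flood models discretize are, in conservative form (generic notation; the exact source terms of the model used here are not specified in the abstract):

```latex
\frac{\partial h}{\partial t}
  + \frac{\partial (hu)}{\partial x}
  + \frac{\partial (hv)}{\partial y} = S_h,
\qquad
\frac{\partial (hu)}{\partial t}
  + \frac{\partial}{\partial x}\!\left(hu^{2} + \tfrac{1}{2} g h^{2}\right)
  + \frac{\partial (huv)}{\partial y}
  = -gh\,\frac{\partial z_b}{\partial x} - \frac{\tau_{bx}}{\rho}
```

with an analogous y-momentum equation; here h is water depth, (u, v) the depth-averaged velocities, S_h the net mass source (e.g., rainfall minus infiltration), z_b the bed elevation, τ_bx the bed shear stress, g gravity, and ρ the water density. On the reported scaling, tripling the node count (45 to 135) cut the runtime from 8 h to 3 h, a speedup of about 2.7 and hence a parallel efficiency near 89%.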
Procedia PDF Downloads 89288 Estimating the Ladder Angle and the Camera Position From a 2D Photograph Based on Applications of Projective Geometry and Matrix Analysis
Authors: Inigo Beckett
Abstract:
In forensic investigations, it is often the case that the most potentially useful recorded evidence derives from coincidental imagery, recorded immediately before or during an incident, and that during the incident (e.g., a ‘failure’ or fire event) the evidence is changed or destroyed. To an image analysis expert involved in photogrammetric analysis for civil or criminal proceedings, traditional computer vision methods involving calibrated cameras are often not appropriate because image metadata cannot be relied upon. This paper presents an approach for resolving this problem, considering in particular, and by way of a case study, the angle of a simple ladder shown in a photograph. The UK Health and Safety Executive (HSE) guidance document published in 2014 (INDG455) advises that a leaning ladder should be erected at 75 degrees to the horizontal axis. Personal injury cases can arise in the construction industry because a ladder is too steep or too shallow, and ad-hoc photographs of such ladders in their incident position provide a basis for analysis of their angle. This paper presents a direct approach for ascertaining the position of the camera and the angle of the ladder simultaneously from the photograph(s), by way of a workflow that encompasses a novel application of projective geometry and matrix analysis. Mathematical analysis shows that, for a given pixel ratio of directly measured collinear points (i.e., features that lie on the same line segment) in the 2D digital photograph with respect to a given viewing point, we can constrain the 3D camera position to the surface of a sphere in the scene. Depending on what we know about the ladder, we can enforce another independent constraint on the possible camera positions, which enables us to narrow the possible positions even further. Experiments were conducted using synthetic and real-world data. The synthetic data modeled a ladder on a horizontally flat plane resting against a vertical wall. The real-world data were captured using an Apple iPhone 13 Pro and 3D laser scan survey data, whereby a ladder was placed at a known location and angle to the vertical axis. For each case, we calculated camera positions and ladder angles using this method and cross-compared them against their respective ‘true’ values.Keywords: image analysis, projective geometry, homography, photogrammetry, ladders, forensics, mathematical modeling, planar geometry, matrix analysis, collinear, cameras, photographs
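By way of background (generic projective geometry, not the paper's specific derivation): simple distance ratios of collinear points are not preserved under perspective projection, but the cross-ratio of four collinear points is, which is what makes pixel measurements along the imaged ladder line usable without camera calibration:

```latex
\operatorname{CR}(A,B;C,D)
  \;=\; \frac{\overline{AC}\cdot\overline{BD}}{\overline{BC}\cdot\overline{AD}}
  \;=\; \frac{\overline{A'C'}\cdot\overline{B'D'}}{\overline{B'C'}\cdot\overline{A'D'}}
```

where primed symbols denote the image points and the unprimed distances lie in the scene; since all points are collinear, pixel distances along the imaged line suffice. For reference, the HSE 75-degree guidance corresponds to the ‘1-in-4’ rule: a base-to-wall distance of one quarter of the working height gives arctan(4) ≈ 76.0 degrees to the horizontal.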
Procedia PDF Downloads 51287 Guidelines for the Management Process Development of Research Journals in Order to Develop Suan Sunandha Rajabhat University to International Standards
Authors: Araya Yordchim, Rosjana Chandhasa, Suwaree Yordchim
Abstract:
This research aims to study guidelines for the development of a management process for research journals in order to develop Suan Sunandha Rajabhat University to international standards. The research investigated affecting elements ranging from the format of the article, the evaluation form for research article quality, and the process of creating a scholarly journal, to the satisfaction level of those with knowledge and competency to conduct research, the problems that arose, and their solutions. Drawing upon a sample of 40 persons who had knowledge and competency in conducting research and creating scholarly journal articles at an international level, the data for this research were collected using questionnaires. The data were analyzed with computer software using statistics in the form of frequencies, percentages, means, standard deviations, and multiple regression analysis. The majority of participants were civil servants with a doctorate degree, followed by civil servants with a master's degree. The suitability of the article format was rated at a good level, as was the evaluation form for research article quality. Based on participants' viewpoints, the process of creating scholarly journals was at a good level, while the satisfaction of those who had knowledge and competency in conducting research was at a satisfactory level. The main problem encountered was difficulty in accessing the website; the solution was to develop a website with user-friendly accessibility, including setting up a Google Scholar profile so that citations can be counted and articles referenced in real time. Research article format influenced the level of satisfaction of those who had the knowledge and competency to conduct research, with statistical significance at the 0.01 level. The research article quality assessment form (preface section, research article writing section, preparation of research article manuscripts section, and the original article evaluation form for the author) affected the satisfaction of those with knowledge and competency to conduct research, with statistical significance at the 0.01 level. The process of establishing journals had an impact on the satisfaction of those with knowledge and ability to conduct research, with statistical significance at the 0.05 level.Keywords: guidelines, development of management, research journals, international standards
Procedia PDF Downloads 124286 Governance of Climate Adaptation Through Artificial Glacier Technology: Lessons Learnt from Leh (Ladakh, India) In North-West Himalaya
Authors: Ishita Singh
Abstract:
The social dimension of climate change is no longer peripheral to science, technology and innovation (STI). Indeed, STI is being mobilized to address small farmers’ vulnerability and adaptation to climate change. The experiences from the cold desert of Leh (Ladakh) in the North-West Himalaya illustrate the potential of STI to address the challenges of climate change and the needs of small farmers through the use of artificial glacier techniques. Small farmers have a unique water-harvesting technique to augment irrigation, called “Artificial Glaciers” - an intricate network of water channels and dams along the upper slope of a valley, located closer to villages and at lower altitudes than natural glaciers. These structures start to melt much earlier and supply supplemental irrigation, improving small farmers’ livelihoods. Therefore, the issues of vulnerability, adaptive capacity, and adaptation strategy need to be analyzed in a local context, in the communities and regions where people live. Leh (Ladakh) in the North-West Himalaya provides a case study for exploring the ways in which adaptation to climate change is taking place at a community scale using artificial glacier technology. With the above backdrop, an attempt has been made to analyze rural poor households' vulnerability and adaptation practices to climate change using this technology, thereby drawing lessons on vulnerability-livelihood interactions in the cold desert of Leh (Ladakh) in the North-West Himalaya, India. The study is based on primary data and information collected from 675 households in 27 villages of Leh (Ladakh). It reveals that 61.18% of the population derives livelihoods from agriculture and allied activities. With increased irrigation potential due to the use of artificial glaciers, food security has been assured for 77.56% of households, and health vulnerability has been reduced in 31% of households. Seasonal migration as a livelihood diversification mechanism has declined in nearly two-thirds of households, thereby improving livelihood strategies. The use of tactical adaptations by small farmers in response to persistent droughts, such as selling livestock, expanding agricultural land, and the use of relief cash and food, has declined to 20.44%, 24.74%, and 63% of households, respectively. However, these measures are unsustainable in the long term. The role of policymakers and societal stakeholders becomes important in this context. To address livelihood challenges, the role of technology is critical in a multidisciplinary approach involving multilateral collaboration among different stakeholders. The presence of social entrepreneurs and new actors on the adaptation scene is necessary to bring forth adaptation measures. Better linkage between science and technology policies, together with other policies, should be encouraged. Better health care, access to safe drinking water, better sanitary conditions, and improved standards of education and infrastructure are effective measures to enhance a community’s adaptive capacity. However, social transfers for supporting climate adaptive capacity require significant additional investment. Developing institutional mechanisms for specific adaptation interventions can be one of the most effective ways of implementing a plan to enhance adaptation and build resilience.Keywords: climate change, adaptation, livelihood, stakeholders
Procedia PDF Downloads 70285 Developing Optical Sensors with Application of Cancer Detection by Elastic Light Scattering Spectroscopy
Authors: May Fadheel Estephan, Richard Perks
Abstract:
Context: Cancer is a serious health concern that affects millions of people worldwide. Early detection and treatment are essential for improving patient outcomes; however, current methods for cancer detection have limitations, such as low sensitivity and specificity. Research Aim: The aim of this study was to develop an optical sensor for cancer detection using elastic light scattering spectroscopy (ELSS). ELSS is a noninvasive optical technique that can be used to characterize the size and concentration of particles in a solution. Methodology: An optical probe was fabricated with a 100-μm-diameter core and a 132-μm centre-to-centre separation. The probe was used to measure the ELSS spectra of polystyrene spheres with diameters of 2, 0.8, and 0.413 μm. The spectra were then analysed to determine the size and concentration of the spheres. Findings: The results showed that the optical probe was able to differentiate between the three sizes of polystyrene spheres and to detect polystyrene spheres at suspension concentrations as low as 0.01%. Theoretical Importance: The results of this study demonstrate the potential of ELSS for cancer detection. ELSS is a noninvasive technique that can be used to characterize the size and concentration of cells in a tissue sample; this information can be used to identify cancer cells and assess the stage of the disease. Data Collection: The data for this study were collected by measuring the ELSS spectra of polystyrene spheres of different diameters, using a spectrometer and a computer. Analysis Procedures: The ELSS spectra were analysed using a software program to determine the size and concentration of the spheres; the program used a mathematical algorithm to fit the spectra to a theoretical model. Question Addressed: The question addressed by this study was whether ELSS could be used to detect cancer cells. The results showed that ELSS could differentiate between cells of different sizes, suggesting that it could be used to detect cancer cells. Conclusion: The findings of this research show the utility of ELSS in the early identification of cancer. ELSS is a noninvasive method for characterizing the number and size of cells in a tissue sample, and this information can be employed to identify cancer cells and determine the disease's stage. Further research is needed to evaluate the clinical performance of ELSS for cancer detection.Keywords: elastic light scattering spectroscopy, polystyrene spheres in suspension, optical probe, fibre optics
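For orientation, two standard relations from elastic-scattering theory (generic background, not the authors' fitting model): the dimensionless size parameter that sets the scattering regime, and the Rayleigh-limit intensity scaling valid for particles much smaller than the wavelength:

```latex
x \;=\; \frac{\pi d}{\lambda},
\qquad
I_{\text{Rayleigh}}(\lambda) \;\propto\; \frac{d^{6}}{\lambda^{4}} \quad (x \ll 1)
```

At visible wavelengths (e.g., λ = 550 nm), the 0.413-2 μm spheres used here give x of roughly 2.4 to 11, placing them in the Mie regime, where the spectral shape of the elastically scattered light, rather than a smooth λ⁻⁴ falloff, encodes particle size, which is why fitting the measured spectra to a theoretical model can recover sphere diameter and concentration.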
Procedia PDF Downloads 80284 We Are the Earth That Defends Itself: An Exploration of Discursive Practices of Les Soulèvements De La Terre
Authors: Sophie Del Fa, Loup Ducol
Abstract:
This presentation will focus on the discursive practices of Les Soulèvements de la Terre (hereafter SdlT), a French environmentalist group mobilized against agribusiness. More specifically, we will use, as a case study, the violently repressed demonstration that took place in Sainte-Soline on March 25, 2023 (detailed below). The SdlT embodies the renewal of anti-capitalist and environmentalist struggles that began with Occupy Wall Street in 2011 and, in France, with the Nuit debout in 2016 and the yellow vests movement from 2018 to 2020. These struggles have three things in common: they are self-organized without official leaders, they rely mainly on occupations to reappropriate public places (squares, roundabouts, natural territories), and they are anti-capitalist. The SdlT was created in 2021 by activists coming from the Zone-to-Defend of Notre-Dame-des-Landes, a victorious ten-year-long occupation movement against an airport near Nantes, France (2009 to 2018). The SdlT is labeled neither as a formal association nor as a constituted group, but as an anti-capitalist network of local struggles at the crossroads of ecology and social issues. Indeed, although they target agro-industry, land grabbing, soil artificialization, and ecology without transition, the SdlT considers ecological and social questions interdependent. Moreover, they have an encompassing vision of ecology, understood as a concern for the living as a whole that erases the division between Nature and Culture. Their radicality is structured around three main elements: federative and decentralized dimensions, the rhetoric of living alliances, and creative militant strategies. The objective of this reflection is to understand how these three dimensions are articulated through the SdlT’s discursive practices. To explore these elements, we take as a case study one specific event: the demonstration against the ‘basins’ held in Sainte-Soline on March 25, 2023, on the construction site of new water storage infrastructure for agricultural irrigation in western France. This event represents a turning point for the SdlT. Indeed, the protest was violently repressed: 5,000 grenades were fired by the police, hundreds of people were injured, and one person was still in a coma at the time of writing. Moreover, following the Sainte-Soline events, the Minister of the Interior, Gérald Darmanin, threatened to dissolve the SdlT, adding fuel to the fire in an already tense social climate (with the ongoing strikes against the pension reform). We anchor our reflection in three types of data: 1) our own experiences (inspired by ethnography) of the Sainte-Soline demonstration; 2) a collection of more than 500,000 tweets with the #SainteSoline hashtag; and 3) a press review of texts and articles published after the Sainte-Soline demonstration. The exploration of these data from a turning point in the history of the SdlT will allow us to analyze how the three dimensions highlighted earlier (federative and decentralized dimensions, rhetoric of living alliances, and creative militant strategies) materialize through the discursive practices surrounding the Sainte-Soline event. This will shed light on how a new contemporary movement implements contemporary environmental struggles.Keywords: discursive practices, Sainte-Soline, ecology, radical ecology
Procedia PDF Downloads 70283 Distribution System Modelling: A Holistic Approach for Harmonic Studies
Authors: Stanislav Babaev, Vladimir Cuk, Sjef Cobben, Jan Desmet
Abstract:
The procedures for performing harmonic studies for medium-voltage distribution feeders have become relatively mature since the early 1980s. The efforts of electric power engineers and researchers were mainly focused on handling large harmonic non-linear loads connected sparsely at a few buses of medium-voltage feeders. In order to assess the impact of these loads on the voltage quality of the distribution system, specific modeling and simulation strategies were proposed. These methodologies could deliver reasonable estimation accuracy given the requirements of minimal computational effort and reduced complexity. To uphold these requirements, certain analysis assumptions were made, which became de facto standards for harmonic analysis guidelines. Among others, typical assumptions include balanced study conditions and a negligible impact of the impedance-frequency characteristics of various power system components. In the latter, skin and proximity effects are usually omitted, and resistance and reactance values are modeled based on theoretical equations. Further, the simplification of the modelling routine has led to the commonly accepted practice of neglecting phase angle diversity effects. This is mainly associated with the developed load models, which only in a handful of cases represent the complete harmonic behavior of a device or account for the harmonic interaction between grid harmonic voltages and harmonic currents. While these modelling practices were proven reasonably effective for medium-voltage levels, similar approaches have been adopted for low-voltage distribution systems. Given modern conditions, with the massive increase in the use of residential electronic devices, the recent and ongoing boom in electric vehicles, and the large-scale installation of distributed solar power, the harmonics in current low-voltage grids are characterized by a high degree of variability and demonstrate sufficient diversity to produce a certain level of cancellation effects. It is obvious that new modelling algorithms overcoming the previously made assumptions have to be adopted. In this work, a simulation approach aimed at dealing with some of these typical assumptions is proposed. A practical low-voltage feeder is modeled in PowerFactory. In order to demonstrate the importance of the diversity effect and harmonic interaction, previously developed measurement-based models of a photovoltaic inverter and a battery charger are used as loads. A Python-based script that supplies a varying background voltage distortion profile and the associated harmonic current response of the loads is used as the core of the unbalanced simulation. Furthermore, the impact of the uncertainty of the feeder frequency-impedance characteristics on total harmonic distortion levels is shown, along with scenarios involving linear resistive loads, which further alter the impedance of the system. The comparative analysis demonstrates substantial differences from cases in which all the assumptions are in place, and the results indicate that new modelling and simulation procedures need to be adopted for low-voltage distribution systems with high penetration of non-linear loads and renewable generation.Keywords: electric power system, harmonic distortion, power quality, public low-voltage network, harmonic modelling
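To illustrate the phase-angle diversity argument made above, here is a minimal sketch (with invented magnitudes and angles, not measurement data) of why the vector sum of individual harmonic currents falls short of the arithmetic sum assumed by simplified studies, together with the standard current THD definition:

```python
import numpy as np

# Aggregate 3rd-harmonic current of several loads: the phasor (vector) sum
# is generally smaller than the arithmetic sum of magnitudes, because the
# individual harmonic currents have diverse phase angles (cancellation).
rng = np.random.default_rng(7)
n_loads = 20
mag = rng.uniform(0.5, 1.5, n_loads)                 # harmonic magnitudes (A)
ang = rng.uniform(-np.pi / 2, np.pi / 2, n_loads)    # diverse phase angles

phasors = mag * np.exp(1j * ang)
vector_sum = abs(phasors.sum())
arithmetic_sum = mag.sum()
print(f"arithmetic sum: {arithmetic_sum:.1f} A")
print(f"vector sum:     {vector_sum:.1f} A "
      f"({100 * (1 - vector_sum / arithmetic_sum):.0f}% cancellation)")

# Current THD for a fundamental I1 and harmonic components Ih
I1 = 100.0
Ih = np.array([vector_sum, 4.0, 2.5])                # e.g., 3rd, 5th, 7th (A)
thd = np.sqrt((Ih**2).sum()) / I1
print(f"THD_I = {100 * thd:.1f}%")
```

Neglecting the phase angles (i.e., using the arithmetic sum) overstates the aggregate harmonic current and hence the THD, which is one of the simplifications the proposed simulation approach avoids.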
Procedia PDF Downloads 158282 A Case Study on How Biomedical Engineering (BME) Outreach Programmes Serve as An Alternative Educational Approach to Form and Develop the BME Community in Hong Kong
Authors: Sum Lau, Wing Chung Cleo Lau, Wing Yan Chu, Long Ching Ip, Wan Yin Lo, Jo Long Sam Yau, Ka Ho Hui, Sze Yi Mak
Abstract:
Biomedical engineering (BME) is an interdisciplinary subject in which knowledge from biology and medicine is applied to novel applications that solve clinical problems. This subject is crucial for cities such as Hong Kong, where the burden on the medical system is rising for reasons such as the ageing population. Hong Kong, which has been actively boosting technological advancement in recent years, sets BME, or biotechnology, as a major category, as reflected in the 2018-19 Budget, where biotechnology was one of the four pillars for development. Over the years, while resources in terms of money and space have been provided, a lack of talent has been expressed by both academia and industry. While exogenous factors, such as COVID, may have hindered talent from outside Hong Kong from coming, endogenous factors should also be considered. In particular, since there are already a few local universities offering BME programmes, their curricula and styles of education need to be reviewed to intensify the network of the BME community and support post-academic career development. It was observed that while undergraduate (UG) studies focus on knowledge teaching with some technical training, and postgraduate (PG) programmes concentrate on upstream research, the programmes are generally confined to the academic sector and lack connections to industry. In light of this, a “Biomedical Innovation and Outreach Programme 2022” (“B.I.O.2022”) was held to connect students and professors from academia with clinicians and engineers from industry, serving as a comparative approach to conventional education methods (UG and PG programmes at tertiary institutions). Over 100 participants, including undergraduates, postgraduates, secondary school students, researchers, engineers, and clinicians, took part in various outreach events, such as conferences and site visits, all held from June to July 2022. As a case study, this programme aimed to tackle the aforementioned problems under the theme of the “4Cs” (connection, communication, collaboration, and commercialisation). The effectiveness of the programme is investigated through its ability to serve as adult and continuing education and its effectiveness in bringing about social change to tackle current societal challenges, with a focus on the lack of talent engaging in biomedical engineering. In this study, B.I.O.2022 is found to complement traditional educational methods, particularly in terms of knowledge exchange between academia and industry. With enhanced communication between participants at different career stages, some students followed up to visit or even work with the professionals after the programme. Furthermore, connections between academia and industry could foster the generation of new knowledge, ultimately pointing to commercialisation, adding value to the BME industry while filling the gap in human resources. With the continuation of events like B.I.O.2022, there is a promising starting point for developing and strengthening the relationships of a BME community in Hong Kong, and the approach shows potential as an alternative way of adult education or learning with societal benefits.Keywords: biomedical engineering, adult education for social change, comparative methods and principles, lifelong learning, faced problems, promises, challenges and pitfalls
Procedia PDF Downloads 116281 Development of Generally Applicable Intravenous to Oral Antibiotic Switch Therapy Criteria
Authors: H. Akhloufi, M. Hulscher, J. M. Prins, I. H. Van Der Sijs, D. Melles, A. Verbon
Abstract:
Background: A timely switch from intravenous (iv) to oral antibiotic therapy has many advantages, such as a reduced incidence of iv-line-related infections, a decreased hospital length of stay, and less workload for healthcare professionals, with equivalent patient safety. Additionally, numerous studies have demonstrated significant cost decreases from a timely intravenous to oral antibiotic therapy switch, while maintaining efficacy and safety. However, considerable variation in iv to oral antibiotic switch therapy criteria has been described in the literature. Here, we report the development of a set of iv to oral switch criteria that are generally applicable in all hospitals. Material/methods: A RAND-modified Delphi procedure composed of 3 rounds was used. This Delphi procedure is a widely used structured process to develop consensus using multiple rounds of questionnaires within a qualified panel of selected experts. The international expert panel was multidisciplinary and composed of clinical microbiologists, infectious disease consultants, and clinical pharmacists. This panel of 19 experts appraised 6 major intravenous to oral antibiotic switch therapy criteria and operationalized these criteria using 41 measurable conditions extracted from the literature. The procedure to select a concise set of iv to oral switch criteria included 2 questionnaire rounds and a face-to-face meeting. Results: The procedure resulted in the selection of 16 measurable conditions, which operationalize 6 major intravenous to oral antibiotic switch therapy criteria. The following 6 major switch criteria were selected: (1) vital signs should be good, or improving if previously abnormal; (2) signs and symptoms related to the infection have to be resolved or improved; (3) the gastrointestinal tract has to be intact and functioning; (4) the oral route should not be compromised; (5) contra-indicated infections must be absent; and (6) an oral variant of the antibiotic with good bioavailability has to exist. Conclusions: This systematic stepwise method, which combined evidence and expert opinion, resulted in a feasible set of 6 major intravenous to oral antibiotic switch therapy criteria operationalized by 16 measurable conditions. This set of early antibiotic iv to oral switch criteria can be used in daily practice for all adult hospital patients. Future use in audits and as rules in computer-assisted decision support systems will lead to improvement of antimicrobial stewardship programs.Keywords: antibiotic resistance, antibiotic stewardship, intravenous to oral, switch therapy
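To illustrate how such criteria could become rules in a computer-assisted decision support system, a minimal sketch follows; the field names and boolean structure are hypothetical simplifications, not the panel's 16 operationalized conditions.

```python
from dataclasses import dataclass

# Hypothetical encoding of the six major switch criteria as a single
# decision-support rule; real systems would evaluate the panel's
# measurable conditions against structured patient-record data.
@dataclass
class PatientStatus:
    vitals_ok_or_improving: bool
    infection_resolving: bool
    gi_tract_functioning: bool
    oral_route_ok: bool                  # e.g., no vomiting, able to swallow
    no_contraindicated_infection: bool   # e.g., not endocarditis or meningitis
    oral_equivalent_available: bool      # oral form with good bioavailability

def eligible_for_iv_to_oral_switch(p: PatientStatus) -> bool:
    """All six major criteria must be satisfied for a switch advisory."""
    return all([
        p.vitals_ok_or_improving,
        p.infection_resolving,
        p.gi_tract_functioning,
        p.oral_route_ok,
        p.no_contraindicated_infection,
        p.oral_equivalent_available,
    ])

print(eligible_for_iv_to_oral_switch(
    PatientStatus(True, True, True, True, True, True)))  # -> True
```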
Procedia PDF Downloads 356280 A Study on the Effect of Design Factors of Slim Keyboard’s Tactile Feedback
Authors: Kai-Chieh Lin, Chih-Fu Wu, Hsiang Ling Hsu, Yung-Hsiang Tu, Chia-Chen Wu
Abstract:
With the rapid development of computer technology, the design of computers and keyboards is moving towards slimness. This change in mobile input devices directly influences users’ behavior. Although multi-touch applications allow text entry through a virtual keyboard, the performance, feedback, and comfort of the technology are inferior to a traditional keyboard, and while manufacturers have launched mobile touch keyboards and projection keyboards, their performance has not been satisfactory. Therefore, this study examined the design factors of slim pressure-sensitive keyboards. The factors were evaluated with an objective evaluation (accuracy and speed) and a subjective evaluation (operability, recognition, feedback, and difficulty) according to the shape (circle, rectangle, and L-shaped), thickness (flat, 3 mm, and 6 mm), and actuation force (35±10 g, 60±10 g, and 85±10 g) of the keys. Moreover, MANOVA and Taguchi methods (using signal-to-noise ratios) were applied to find the optimal level of each design factor. The research participants were divided into two groups by typing speed (30 words/minute). Considering the multitude of variables and levels, the experiments were implemented using a fractional factorial design, and a representative model of the research samples was established for input task testing. The findings showed that participants with low typing speed primarily relied on vision to recognize the keys, while those with high typing speed relied on tactile feedback, which was affected by the thickness and force of the keys. In the objective and subjective evaluations, the combination of keyboard design factors likely to yield the highest performance and satisfaction was identified as L-shaped, 3 mm, and 60±10 g. The learning curve was analyzed against a traditional standard keyboard to investigate the influence of user experience on keyboard operation; the results indicated that the optimal combination provided input performance inferior to a standard keyboard. The results could serve as a reference for the development of related products in industry and for comprehensive application to touch devices and input interfaces with which people interact.Keywords: input performance, mobile device, slim keyboard, tactile feedback
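As a pointer to the analysis method named above, here is a minimal sketch of Taguchi signal-to-noise (S/N) ratios, which are used to rank design-factor levels by robustness; the response values are invented accuracy scores, not the study's data.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi S/N for responses where larger is better (e.g., accuracy, speed)."""
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_is_better(y):
    """Taguchi S/N for responses where smaller is better (e.g., error rate)."""
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(y**2))

# Illustrative: accuracy (%) over three replicates at each key-force level;
# the level with the highest S/N would be selected as optimal.
for force, scores in {"35 g": [88, 90, 87],
                      "60 g": [95, 96, 94],
                      "85 g": [91, 89, 92]}.items():
    print(force, f"S/N = {sn_larger_is_better(scores):.2f} dB")
```

In a fractional factorial layout like the one described above, the S/N ratios are averaged per factor level across runs, and the level with the highest mean S/N for each factor (shape, thickness, force) forms the optimal combination.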
Procedia PDF Downloads 299