Search results for: computer modelling
300 Information Technology Capabilities and Organizational Performance: Mediating Role of Strategic Benefits of IT: A Comparison between China and Pakistan
Authors: Rehan Ullah
Abstract:
The primary purpose of the study is to observe the relationship that exists between organizational information technology (IT) capabilities and organizational performance in China and Pakistan. Nations like China and Pakistan utilize modern technical know-how to enhance their production endeavors. Therefore, a wide-ranging comparison of the manufacturing services between China and Pakistan was chosen for numerous reasons. One reason for carrying out this comparison is to determine how the IT of the two countries enhances the organizational competency of small and medium-sized manufacturing enterprises (SMEs). The study hypothesized that organizational IT capabilities (IT infrastructure, IT competence) have a positive influence on organizational performance and that the strategic benefits of IT have a mediating effect on the relationship between IT capability and organizational performance. To investigate the relationship between IT capabilities and organizational performance, surveys were sent to managers of small and medium-sized manufacturing organizations located in the southwestern region, Sichuan province of China, and to Pakistani companies located in Islamabad, Lahore, and Karachi. These cities were selected as typical representatives of each country. Organizational performance has been measured in terms of profitability, organizational success, growth, market share, and innovativeness. Out of 400 surveys distributed to different manufacturing organizations, 303 usable and valid responses were received and analyzed in this research. The data were examined using the SPSS and SmartPLS computer software. The results of the study are reported, including descriptive statistics for each variable. The outer model has been assessed with respect to content validity, discriminant validity, and convergent validity. The path coefficients among the constructs were also computed when analyzing the structural model using the bootstrapping technique.
The analysis of data from China and Pakistan yields consistent results for the two countries. The results show that IT infrastructure, IT competence, and the strategic benefits of IT are all correlated with the performance of the organizations. Moreover, the strategic benefits of IT were shown to mediate the relationship between IT capabilities and organizational performance. Concerning the role of IT in organizational performance, the author highlights its different aspects as well as its benefits for an organization. The overall study concludes with several implications for both managers and academicians. It also notes the limitations of the study and offers recommendations for future studies and practice.
Keywords: organizational performance, IT capabilities, IT infrastructure, IT competence, strategic benefits of IT, China, Pakistan
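The mediating role of the strategic benefits of IT was assessed with the bootstrapping technique; the logic of bootstrapping an indirect (mediated) effect can be sketched in a few lines. All data and coefficients below are simulated placeholders, not the study's survey data; only the sample size of 303 is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated placeholder data: X = IT capability, M = strategic benefits of IT
# (the mediator), Y = organizational performance.
n = 303                                          # valid responses, as in the study
X = rng.normal(size=n)
M = 0.5 * X + rng.normal(scale=0.8, size=n)
Y = 0.4 * M + 0.2 * X + rng.normal(scale=0.8, size=n)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                   # a-path: X -> M
    Z = np.column_stack([m, x, np.ones_like(x)])
    b = np.linalg.lstsq(Z, y, rcond=None)[0][0]  # b-path: M -> Y, controlling for X
    return a * b

# Bootstrap the indirect effect a*b and read off a 95% confidence interval
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                  # resample respondents with replacement
    boot.append(indirect_effect(X[idx], M[idx], Y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% CI for indirect effect: [{lo:.3f}, {hi:.3f}]")
# mediation is supported when the interval excludes zero
```

The same resampling idea underlies the path significance reported by SmartPLS; this sketch only illustrates the mechanics.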
Procedia PDF Downloads 94
299 In-Vitro Evaluation of the Long-Term Stability of PEDOT:PSS Coated Microelectrodes for Chronic Recording and Electrical Stimulation
Authors: A. Schander, T. Tessmann, H. Stemmann, S. Strokov, A. Kreiter, W. Lang
Abstract:
For the chronic application of neural prostheses and other brain-computer interfaces, long-term stable microelectrodes for electrical stimulation are essential. In recent years, many developments were made to investigate appropriate materials for these electrodes. One of these materials is the electrically conductive polymer poly(3,4-ethylenedioxythiophene) (PEDOT), which has lower impedance and higher charge injection capacity compared to noble metals like gold and platinum. However, the long-term stability of this polymer is still unclear. Thus, this paper reports on the in-vitro evaluation of the long-term stability of PEDOT-coated gold microelectrodes. For this purpose, a highly flexible electrocorticography (ECoG) electrode array, based on the polymer polyimide, is used. This array consists of circular gold electrodes with a diameter of 560 µm (0.25 mm²). In total, 25 electrodes of this array were coated simultaneously with the polymer PEDOT:PSS in a cleanroom environment using a galvanostatic electropolymerization process. After coating, the array was additionally sterilized using a steam sterilization process (121°C, 1 bar, 20.5 min) to simulate autoclaving prior to the implantation of such an electrode array. The long-term measurements were performed in phosphate-buffered saline solution (PBS, pH 7.4) at a constant body temperature of 37°C. For the in-vitro electrical stimulation, a one-channel bipolar current stimulator is used. The stimulation protocol consists of a bipolar current amplitude of 5 mA (cathodal phase first), a pulse duration of 100 µs per phase, a pulse pause of 50 µs, and a frequency of 1 kHz. A PEDOT:PSS-coated gold electrode with an area of 1 cm² serves as the counter electrode. The electrical stimulation is performed continuously, with a total of 86.4 million bipolar current pulses per day. The condition of the PEDOT-coated electrodes is monitored in between with electrical impedance spectroscopy measurements.
The results of this study demonstrate that the PEDOT-coated electrodes are stable for more than 3.6 billion bipolar current pulses. The unstimulated electrodes also currently show no degradation after a period of 5 months. These results indicate appropriate long-term stability of this electrode coating for chronic recording and electrical stimulation. The long-term measurements are still continuing in order to determine the life limit of this electrode coating.
Keywords: chronic recording, electrical stimulation, long-term stability, microelectrodes, PEDOT
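The reported pulse counts follow directly from this protocol; as a quick arithmetic check (a plain-Python sketch using only figures quoted above):

```python
# Continuous 1 kHz bipolar stimulation, as described in the protocol above.
freq_hz = 1_000
pulses_per_day = freq_hz * 24 * 3600
print(pulses_per_day)          # 86400000, i.e. the 86.4 million pulses per day

# Days of continuous stimulation behind the 3.6 billion pulse stability figure
days = 3.6e9 / pulses_per_day
print(round(days, 1))          # 41.7

# Charge per phase: 5 mA flowing for 100 us
charge_uC = 5e-3 * 100e-6 * 1e6
print(round(charge_uC, 3))     # 0.5 microcoulombs per phase
```

So the 3.6 billion pulse milestone corresponds to roughly six weeks of uninterrupted stimulation at the stated rate.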
Procedia PDF Downloads 585
298 Computer-Assisted Management of Building Climate and Microgrid with Model Predictive Control
Authors: Vinko Lešić, Mario Vašak, Anita Martinčević, Marko Gulin, Antonio Starčić, Hrvoje Novak
Abstract:
Accounting for 40% of total world energy consumption, building systems are developing into technically complex, large energy consumers suitable for the application of sophisticated power management approaches that largely increase energy efficiency and even make buildings active energy market participants. A centralized control system for building heating and cooling, managed by economically optimal model predictive control, shows promising results, with an estimated 30% increase in energy efficiency. The research focuses on the implementation of such a method in a case study performed on two floors of our faculty building, with corresponding wireless sensor data acquisition, remote heating/cooling units, and a central climate controller. Building walls are mathematically modeled with corresponding material types, surface shapes, and sizes. The models are then used to predict thermal characteristics and changes in different building zones. Exterior influences such as environmental conditions and weather forecasts, occupant behavior, and comfort demands are all taken into account when deriving the price-optimal climate control. Finally, a DC microgrid with photovoltaics, a wind turbine, a supercapacitor, batteries, and fuel cell stacks is added to make the building a unit capable of active participation in a price-varying energy market. The computational burden of applying model predictive control to such a complex system is relaxed through a hierarchical decomposition of the microgrid and climate control, where the former is designed as the higher hierarchical level with pre-calculated price-optimal power flow control, and the latter as the lower-level control responsible for ensuring thermal comfort and exploiting the optimal supply conditions enabled by microgrid energy flow management.
Such an approach is expected to enable the inclusion of more complex building subsystems in order to further increase energy efficiency.
Keywords: price-optimal building climate control, microgrid power flow optimisation, hierarchical model predictive control, energy efficient buildings, energy market participation
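The price-optimal idea, choosing heating inputs over a horizon so that comfort is kept while cost under a time-varying price is minimized, can be illustrated with a deliberately tiny sketch. The first-order zone model, comfort band, prices, and power levels below are invented for illustration and are far simpler than the building model used in the study:

```python
from itertools import product

# Toy first-order zone model (all numbers assumed): T[k+1] = a*T[k] + b*u[k] + (1-a)*T_out
a, b = 0.9, 0.5
T_out, T0 = 5.0, 19.0            # outdoor and initial zone temperature (deg C)
price = [0.3, 0.3, 0.1, 0.1]     # time-varying energy price over the horizon
comfort = (20.0, 23.0)           # comfort band the controller must respect
u_levels = [0.0, 3.0, 6.0]       # discrete heating power levels (kW)

best_cost, best_plan = float("inf"), None
for plan in product(u_levels, repeat=len(price)):   # exhaustive horizon search
    T, cost, feasible = T0, 0.0, True
    for u, p in zip(plan, price):
        T = a * T + b * u + (1 - a) * T_out         # simulate one step ahead
        cost += p * u
        if not (comfort[0] <= T <= comfort[1]):
            feasible = False
            break
    if feasible and cost < best_cost:
        best_cost, best_plan = cost, plan

print(best_plan, round(best_cost, 2))   # (6.0, 3.0, 3.0, 3.0) 3.3
```

The toy controller heats hard only when comfort forces it to and otherwise tracks the lower comfort bound; a real MPC solves a similar (much larger, convex) optimization at every sampling instant and applies only the first move.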
Procedia PDF Downloads 465
297 Introduction to Two Artificial Boundary Conditions for Transient Seepage Problems and Their Application in Geotechnical Engineering
Authors: Shuang Luo, Er-Xiang Song
Abstract:
Many problems in geotechnical engineering, such as foundation deformation, groundwater seepage, seismic wave propagation, and geothermal transfer problems, may involve analysis of the ground, which can be seen as extending to infinity. To that end, consideration has to be given to how to deal with the unbounded domain to be analyzed by numerical methods such as the finite element method (FEM), the finite difference method (FDM), or the finite volume method (FVM). A simple artificial boundary approach, derived from the analytical solutions for transient radial seepage problems, is introduced. It should be noted, however, that the analytical solutions used to derive the artificial boundary are particular solutions under certain boundary conditions, such as a constant hydraulic head at the origin or a constant pumping rate of the well. For unbounded domains with unsteady boundary conditions, a more sophisticated artificial boundary approach to deal with the infinity of the domain is presented. By applying Laplace transforms and introducing some specially defined auxiliary variables, the global artificial boundary conditions (ABCs) are simplified to local ones so that the computational efficiency is enhanced significantly. The two local ABCs introduced are implemented in a finite element computer program so that various seepage problems can be calculated. The two approaches are first verified by the computation of a one-dimensional radial flow problem and then tentatively applied to more general two-dimensional cylindrical problems and plane problems. Numerical calculations show that the local ABCs not only give good results for one-dimensional axisymmetric transient flow, but are also applicable to more general problems, such as axisymmetric two-dimensional cylindrical problems, and even more general planar two-dimensional flow problems for well doublets and well groups.
An important advantage of the latter local boundary is its applicability to seepage under rapidly changing unsteady boundary conditions; even then, the computational results on the truncated boundary are usually quite satisfactory. In this respect, it is superior to the former local boundary. Simulation of relatively long operational times demonstrates, to a certain extent, the numerical stability of the local boundary. The solutions of the two local ABCs are compared with each other and with those obtained by using a large element mesh, which proves their satisfactory performance and clear superiority over the large mesh model.
Keywords: transient seepage, unbounded domain, artificial boundary condition, numerical simulation
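To see why the truncated boundary matters, consider a minimal finite-difference sketch of one-dimensional axisymmetric transient seepage with a crude fixed-head truncation, the very condition the paper's local ABCs are designed to replace. All parameters are illustrative assumptions, not the paper's numerical examples:

```python
import numpy as np

# Naive illustration (assumed parameters, not the paper's formulation):
# explicit finite differences for 1-D axisymmetric transient seepage,
#   dh/dt = c * (d2h/dr2 + (1/r) dh/dr),
# with a constant head at the well and a crudely truncated outer boundary.
c = 1.0                              # diffusivity coefficient (assumed)
r = np.linspace(1.0, 50.0, 200)      # radial grid, truncated at r = 50
dr = r[1] - r[0]
dt = 0.2 * dr**2 / c                 # within the explicit stability limit
h = np.zeros_like(r)                 # initial hydraulic head
h_well = 1.0                         # constant head imposed at the well

for _ in range(5000):
    h[0] = h_well
    h_rr = (h[2:] - 2 * h[1:-1] + h[:-2]) / dr**2
    h_r = (h[2:] - h[:-2]) / (2 * dr)
    h[1:-1] += dt * c * (h_rr + h_r / r[1:-1])
    # h[-1] stays fixed at 0: the crude truncation an ABC would replace

print(round(float(h[1]), 3), round(float(h[100]), 3))
```

Accuracy near the truncation improves either by enlarging the mesh (expensive) or by replacing the fixed outer head with an absorbing artificial boundary condition, which is the approach taken in the paper.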
Procedia PDF Downloads 294
296 Prosodic Realization of Focus in the Public Speeches Delivered by Spanish Learners of English and English Native Speakers
Authors: Raúl Jiménez Vilches
Abstract:
Native (L1) speakers can prosodically mark one part of an utterance and make it more relevant as opposed to the rest of the constituents. Conversely, non-native (L2) speakers encounter problems when it comes to prosodically marking information structure in English. In fact, the L2 speaker's choice for the prosodic realization of focus is not so clear and often obscures the intended pragmatic meaning and the communicative value in general. This paper reports some of the findings obtained in an L2 prosodic training course for Spanish learners of English within the context of public speaking. More specifically, it analyses the effects of the course experiment in relation to the non-native production of the tonic syllable to mark focus and compares it with the public speeches delivered by native English speakers. The whole experimental training was executed throughout eighteen input sessions (1,440 minutes of total time), and all the sessions took place in the classroom. In particular, the first part of the course provided explicit instruction on the recognition and production of the tonic syllable and on how the tonic syllable is used to express focus. The non-native and native oral presentations were acoustically analyzed using the Praat speech analysis software (7,356 words in total). The investigation adopted mixed and embedded methodologies. Quantitative information is needed when acoustically measuring the phonetic realization of focus. Qualitative data such as questionnaires, interviews, and observations were also used to interpret the quantitative data. The embedded experiment design was implemented through the analysis of the public speeches before and after the intervention. Results indicate that, even after the L2 prosodic training course, Spanish learners of English still show some major inconsistencies in marking focus effectively.
Although there was occasional improvement regarding the choice of location and word classes, Spanish learners were, in general, far from achieving results similar to those obtained by the English native speakers for the two types of focus. The prosodic realization of focus seems to be one of the hardest areas of the English prosodic system for Spanish learners to master. A funded research project is in the process of moving the present classroom-based experiment to an online environment (a mobile app) and determining whether focus is used more effectively through CAPT (Computer-Assisted Pronunciation Training) tools.
Keywords: focus, prosody, public speaking, Spanish learners of English
Procedia PDF Downloads 99
295 Logistics and Supply Chain Management Using Smart Contracts on Blockchain
Authors: Armen Grigoryan, Milena Arakelyan
Abstract:
The idea of smart logistics is still quite a complicated one. It can be used to market products to a large number of customers or to acquire raw materials of the highest quality at the lowest cost in geographically dispersed areas. The use of smart contracts in logistics and supply chain management has the potential to revolutionize the way that goods are tracked, transported, and managed. Smart contracts are simply computer programs written in one of the blockchain programming languages (Solidity, Rust, Vyper) that are capable of self-execution once predetermined conditions are met. They can be used to automate and streamline many of the traditional manual processes currently used in logistics and supply chain management, including the tracking and movement of goods, the management of inventory, and the facilitation of payments and settlements between different parties in the supply chain. Currently, logistics is a core business area concerned with transporting products between parties. Still, the problem of this sector is that its scale may lead to delays and failures in the delivery of goods, as well as other issues. Moreover, large distributors require a large number of workers to meet all the needs of their stores. All this may contribute to long delays in order processing and increases the likelihood of losing orders. In an attempt to address this problem, companies have automated their procedures, contributing to a significant increase in the number of businesses and distributors in the logistics sector. Hence, blockchain technology and smart-contracted legal agreements seem to be suitable concepts for redesigning and optimizing collaborative business processes and supply chains. The main purpose of this paper is to examine the scope of blockchain technology and smart contracts in the field of logistics and supply chain management.
This study discusses the research question of how and to what extent smart contracts and blockchain technology can facilitate and improve the implementation of collaborative business structures for sustainable entrepreneurial activities in smart supply chains. The intention is to provide a comprehensive overview of the existing research on the use of smart contracts in logistics and supply chain management and to identify any gaps or limitations in the current knowledge on this topic. This review aims to provide a summary and evaluation of the key findings and themes that emerge from the research, as well as to suggest potential directions for future research on the use of smart contracts in logistics and supply chain management.
Keywords: smart contracts, smart logistics, smart supply chain management, blockchain and smart contracts in logistics, smart contracts for controlling supply chain management
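As a concrete illustration of self-execution once predetermined conditions are met, the escrow-style logic a delivery smart contract encodes can be sketched in plain Python (a hypothetical model, not on-chain Solidity code; all names and amounts are invented):

```python
from dataclasses import dataclass
from enum import Enum, auto

class State(Enum):
    CREATED = auto()
    IN_TRANSIT = auto()
    DELIVERED = auto()
    PAID = auto()

@dataclass
class DeliveryContract:
    buyer: str
    carrier: str
    amount: int                 # payment held in escrow
    state: State = State.CREATED

    def ship(self):
        assert self.state is State.CREATED
        self.state = State.IN_TRANSIT

    def confirm_delivery(self, signed_by: str):
        # predetermined condition: only the buyer's confirmation triggers settlement
        assert self.state is State.IN_TRANSIT and signed_by == self.buyer
        self.state = State.DELIVERED
        return self._settle()

    def _settle(self):
        # automatic payment release, with no manual intermediary step
        self.state = State.PAID
        return (self.carrier, self.amount)

c = DeliveryContract(buyer="Acme", carrier="FastFreight", amount=500)
c.ship()
print(c.confirm_delivery(signed_by="Acme"))   # ('FastFreight', 500)
```

On a blockchain, the state transitions and the settlement would be enforced by the network rather than by a single party's server, which is what removes the trusted intermediary from the flow.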
Procedia PDF Downloads 95
294 Insight2OSC: Using Electroencephalography (EEG) Rhythms from the Emotiv Insight for Musical Composition via Open Sound Control (OSC)
Authors: Constanza Levicán, Andrés Aparicio, Rodrigo F. Cádiz
Abstract:
The artistic usage of brain-computer interfaces (BCIs), initially intended for medical purposes, has increased in the past few years as they have become more affordable and available to the general population. One interesting question that arises from this practice is whether it is possible to compose or perform music by using only the brain as a musical instrument. In order to approach this question, we propose a BCI for musical composition based on the representation of certain mental states as the musician thinks about sounds. We developed software, called Insight2OSC, that allows the Emotiv Insight device to be used as a musical instrument by sending the EEG data to audio processing software such as MaxMSP through the OSC protocol. We provide two compositional applications bundled with the software, which we call Mapping your Mental State and Thinking On. The signals produced by the brain have different frequencies (or rhythms) depending on the level of activity, and they are classified as one of the following waves: delta (0.5-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz), gamma (30-50 Hz). These rhythms have been found to be related to some recognizable mental states. For example, the delta rhythm is predominant in deep sleep, while beta and gamma rhythms have higher amplitudes when the person is awake and very concentrated. Our first application (Mapping your Mental State) produces different sounds representing the mental state of the person: focused, active, relaxed, or in a state similar to deep sleep, based on the selection of the dominant rhythms provided by the EEG device. The second application relies on the physiology of the brain, which is divided into several lobes: frontal, temporal, parietal, and occipital.
The frontal lobe is related to abstract thinking and high-level functions, the parietal lobe processes stimuli from the body senses, the occipital lobe contains the primary visual cortex and processes visual stimuli, and the temporal lobe processes auditory information and is important for memory tasks. Consequently, our second application (Thinking On) processes the audio output depending on the user's brain activity as it activates a specific area of the brain that can be measured using the Insight device.
Keywords: BCI, music composition, Emotiv Insight, OSC
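The band logic described above (mapping dominant rhythms to mental states) boils down to comparing spectral power across the five frequency bands. A minimal sketch with a synthetic signal follows; the 128 Hz rate matches the Emotiv Insight, but the signal itself is simulated, not device data:

```python
import numpy as np

# Band edges as listed in the abstract (Hz)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 50)}

def dominant_rhythm(signal, fs):
    """Return the band holding the most spectral power."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band_power = {name: power[(freqs >= lo) & (freqs < hi)].sum()
                  for name, (lo, hi) in BANDS.items()}
    return max(band_power, key=band_power.get)

fs = 128                                  # Emotiv Insight sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
relaxed = np.sin(2 * np.pi * 10 * t)      # strong 10 Hz component
print(dominant_rhythm(relaxed, fs))       # alpha
```

In Mapping your Mental State, the classification step is performed by the device-side rhythm outputs; a result like the one above would then be mapped to a sound and forwarded over OSC.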
Procedia PDF Downloads 322
293 Modernization of Translation Studies Curriculum at Higher Education Level in Armenia
Authors: A. Vahanyan
Abstract:
The paper touches upon the problem of revision and modernization of the current curriculum on translation studies at Armenian Higher Education Institutions (HEIs). In a contemporary world where the quality and speed of services provided are most valued, certain higher education centers in Armenia do not demonstrate enough flexibility in terms of the revision and amendment of the courses taught. This issue is present in various curricula at the university level and in the Translation Studies curriculum in particular. Technological innovations that are of great help to translators have long since been smoothly integrated into the global translation industry. According to the European Master's in Translation (EMT) framework, translation service provision comprises linguistic, intercultural, information mining, thematic, and technological competencies. Therefore, to form the competencies mentioned above, the curriculum should be seriously restructured to meet modern education and job market requirements, and relevant courses should be proposed. New courses, in particular, should focus on the formation of technological competences. These suggestions have been made upon the author's research of the problem across various HEIs in Armenia. The updated curricula should include courses aimed at familiarization with various computer-assisted translation (CAT) tools (MemoQ, Trados, OmegaT, Wordfast, etc.) in the translation process and the creation of glossaries and termbases compatible with different platforms, which will ensure consistency in the translation of similar texts and speed up the translation process itself. Another aspect that may be strengthened via curriculum modification is the introduction of interdisciplinary and Project-Based Learning courses, which will develop information mining and thematic competences, which are of great importance as well.
Of course, the amendment of the existing curriculum with the mentioned courses will require corresponding faculty development via training, workshops, and seminars. Finally, the provision of extensive internships with translation agencies is strongly recommended, as it will ensure the synthesis of the theoretical background and the practical skills highly required for this specific area. Summing up, restructuring and modernization of the existing curricula on Translation Studies should focus on three major aspects, i.e., the introduction of new courses that meet global quality standards of education, professional development for faculty, and the integration of extensive internships supervised by experts in the field.
Keywords: competencies, curriculum, modernization, technical literacy, translation studies
Procedia PDF Downloads 131
292 Examination of How Smart Watches Influence the Market of Luxury Watches, with Particular Regard to the Buying-Reasons
Authors: Christopher Benedikt Jakob
Abstract:
In our current society, there is no need to look at a wristwatch to know the exact time. Smartphones, the clock in the car, or the computer clock inform us about the time too. For hundreds of years, luxury watches have held a fascination for human beings. Consumers buy watches that cost thousands of euros, although they could buy much cheaper watches that also fulfill the function of indicating the correct time. This shows that the functional value has only a minor meaning among the buying-reasons for luxury watches. For a few years, people have had an increasing demand to track data such as their walking distance per day or their sleep, for example. Smart watches enable consumers to obtain this information. There is a trend for people to optimise parts of their social life, and thus they get the impression that they are able to optimise themselves as human beings. With the help of smart watches, they are able to optimise parts of their productivity and realise their targets at the same time. These smart watches are also offered as luxury models, and the question is: how will customers of traditional luxury watches react? This study therefore intends to answer the question of why people are willing to spend an enormous amount of money on luxury watches. The self-expression model, the relationship basis model, the functional benefit representation model, and the means-end theory are chosen as an appropriate methodology for finding reasons why human beings purchase specific luxury watches and luxury smart watches. This evaluative approach further discusses these strategies concerning, for example, whether consumers buy luxury watches/smart watches to express the current self or the ideal self, and whether human beings make decisions based on expected results. The research critically evaluates how relationships are compared on the basis of their advantages.
Luxury brands offer socio-emotional advantages, such as the social function of identification, and the strong brand personality of luxury watches and luxury smart watches helps customers to structure and retrieve brand knowledge, which simplifies the process of decision-making. One of the goals is to identify whether customers know why they like specific luxury watches and dislike others, although they are produced in the same country and cost comparable prices. It is very obvious that the market for luxury watches, and especially for luxury smart watches, is changing much faster than it has in the past. Therefore, the research examines the market-changing parameters in detail.
Keywords: buying-behaviour, brand management, consumer, luxury watch, smart watch
Procedia PDF Downloads 210
291 Hybrid Fermentation System for Improvement of Ergosterol Biosynthesis
Authors: Alexandra Tucaliuc, Alexandra C. Blaga, Anca I. Galaction, Lenuta Kloetzer, Dan Cascaval
Abstract:
Ergosterol (ergosta-5,7,22-trien-3β-ol), also known as provitamin D2, is the precursor of vitamin D2 (ergocalciferol), because it is converted to this vitamin under UV radiation. The natural sources of ergosterol are mainly yeasts (Saccharomyces sp., Candida sp.), but it can also be found in fungi (Claviceps sp.) or plants (orchids). In yeast cells, ergosterol accumulates in membranes, especially in free form in the plasma membrane, but also as esters with fatty acids in membrane lipids. The chemical synthesis of ergosterol is not an efficient production method. In these circumstances, the most attractive alternative for producing ergosterol at larger scale remains aerobic fermentation using S. cerevisiae on glucose or by-products from agriculture or the food industry as substrates, in batch or fed-batch operating systems. The aim of this work is to analyze comparatively the influence of aeration efficiency on ergosterol production by S. cerevisiae in batch and fed-batch fermentations, considering different levels of mixing intensity, aeration rate, and n-dodecane concentration. The effects of the studied factors are quantitatively described by means of the mathematical correlations proposed for each of the two fermentation systems, valid both in the absence and in the presence of an oxygen-vector inside the broth. The experiments were carried out in a laboratory stirred bioreactor with computer-controlled and recorded parameters. n-Dodecane was used as the oxygen-vector, and the ergosterol content inside the yeast cells was considered at the fermentation time corresponding to the maximum ergosterol concentration: 9 h for the batch process and 20 h for the fed-batch one. Ergosterol biosynthesis is strongly dependent on the dissolved oxygen concentration. The hydrocarbon concentration exhibits a significant influence on ergosterol production, mainly by accelerating the oxygen transfer rate.
Regardless of n-dodecane addition, by maintaining the glucose concentration at a constant level in the fed-batch process, the amount of ergosterol accumulated in the yeast cells was almost tripled. In the presence of the hydrocarbon, the ergosterol concentration increased by over 50%. The oxygen-vector concentration corresponding to the maximum ergosterol level depends mainly on biomass concentration, due to its negative influence on broth viscosity and the interfacial phenomena of air bubble blockage through the adsorption of hydrocarbon droplet–yeast cell associations. Therefore, for the batch process the maximum ergosterol amount was reached at 5% vol. n-dodecane, while for the fed-batch process at 10% vol. hydrocarbon.
Keywords: bioreactors, ergosterol, fermentation, oxygen-vector
Procedia PDF Downloads 188
290 Instant Data-Driven Robotics Fabrication of Light-Transmitting Ceramics: A Responsive Computational Modeling Workflow
Authors: Shunyi Yang, Jingjing Yan, Siyu Dong, Xiangguo Cui
Abstract:
Current architectural façade design practices incorporate various daylighting and solar radiation analysis methods. These emphasize the impact of geometry on façade design. There is scope to extend this knowledge into methods that address material translucency, porosity, and form. Such approaches can also achieve these conditions through adaptive robotic manufacturing that exploits material dynamics within the design and alleviates fabrication waste from molds, ultimately accelerating the autonomous manufacturing system. Besides the analysis of environmental solar radiation in building façade design, there is also a vacant research area concerning how lighting effects can be precisely controlled by engaging instant, real-time data-driven robot control and manipulating material properties. Ceramics carry a wide range of transmittance and deformation potentials for robotic control, given research into their material properties. This paper presents a semi-autonomous system that engages real-time data-driven robotics control, hardware kit design, environmental building studies, human interaction, and exploratory research and experiments. Our objectives are to investigate the relationship between the physio-material properties of different clay bodies or ceramics and their transmittance; to explore a feedback system of instant lighting data in robotic fabrication to achieve precise lighting effects; and to design sufficient end effectors and robot behaviors for the different stages of deformation. We experiment with architectural clay, as a façade material that is potentially translucent at a certain stage and can respond to light. Studying the relationship between form, material properties, and porosity can help create different interior and exterior light effects and provide façade solutions for specific architectural functions.
The key idea is to maximize the utilization of in-progress robotic fabrication and ceramic materiality to create a highly integrated autonomous system for lighting façade design and manufacture.
Keywords: light transmittance, data-driven fabrication, computational design, computer vision, gamification for manufacturing
Procedia PDF Downloads 123
289 Adversarial Attacks and Defenses on Deep Neural Networks
Authors: Jonathan Sohn
Abstract:
Deep neural networks (DNNs) have shown state-of-the-art performance in many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks have been studied in the context of deep neural networks; these aim to alter the results of deep neural networks by slightly modifying the inputs. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we focus on studying adversarial attacks and defenses on DNNs for image classification. Two types of adversarial attacks are studied: the fast gradient sign method (FGSM) attack and the projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate the input images into different categories. An adversarial attack slightly alters the image to move it over the decision boundary, causing the DNN to misclassify the image. The FGSM attack obtains the gradient with respect to the image and updates the image once, based on the gradient, to cross the decision boundary. The PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also another type of attack called the targeted attack, which is designed to make the machine classify an image into a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training. Specifically, instead of training the neural network only on clean examples, we explicitly let the neural network learn from adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by the FGSM and PGD attacks, respectively.
If we utilize FGSM training as a defense method, the classification accuracy greatly improves from 39.50% to 92.31% for FGSM attacks and from 34.01% to 75.63% for PGD attacks. To further improve the classification accuracy under adversarial attacks, we can also use the stronger PGD training method. PGD training improves the accuracy by 2.7% under FGSM attacks and 18.4% under PGD attacks over FGSM training. It is worth mentioning that neither FGSM nor PGD training affects the accuracy on clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective way to defend against such attacks. PGD attacks and defenses are overall significantly more effective than FGSM methods.
Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning
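The one-step FGSM update and the iterative, projected PGD update described in the abstract can be sketched as follows on a toy logistic classifier (a stand-in for the paper's MNIST DNN; the weights, inputs, epsilon, and step sizes here are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """One-step FGSM: perturb x by eps in the direction of the sign
    of the loss gradient with respect to the input."""
    p = sigmoid(np.dot(w, x) + b)            # model confidence in class 1
    grad_x = (p - y) * w                      # d(cross-entropy)/dx for logistic model
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

def pgd_attack(x, y, w, b, eps, alpha, steps):
    """PGD: repeat small FGSM-like steps, projecting back into the
    eps-ball around the original image after each step."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(np.dot(w, x_adv) + b)
        x_adv = x_adv + alpha * np.sign((p - y) * w)
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project to eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # keep valid pixel range
    return x_adv

# Toy "classifier": weights chosen so x is confidently class 1.
w = np.array([2.0, -1.0, 0.5])
b = 0.1
x = np.array([0.9, 0.1, 0.8])
y = 1.0
x_fgsm = fgsm_attack(x, y, w, b, eps=0.1)
x_pgd = pgd_attack(x, y, w, b, eps=0.1, alpha=0.03, steps=10)
```

Both attacks reduce the model's confidence in the true class, with PGD at least matching FGSM because its repeated projected steps can reach the boundary of the eps-ball more precisely.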
Procedia PDF Downloads 194
288 Seroepidemiological Study of Toxoplasma gondii Infection in Women of Child-Bearing Age in Communities in Osun State, Nigeria
Authors: Olarinde Olaniran, Oluyomi A. Sowemimo
Abstract:
Toxoplasmosis is frequently misdiagnosed or underdiagnosed, and it is the third most common cause of hospitalization due to food-borne infection. Intra-uterine infection with Toxoplasma gondii due to active parasitaemia during pregnancy can cause severe and often fatal cerebral damage, abortion, and stillbirth of a fetus. The aim of the study was to investigate the prevalence of T. gondii infection in women of child-bearing age in selected communities of Osun State, with a view to determining the risk factors which predispose to T. gondii infection. Five (5) ml of blood was collected by venipuncture into a plain blood collection tube by a medical laboratory scientist. Serum samples were separated by centrifuging the blood samples at 3000 rpm for 5 min. The sera were transferred into Eppendorf tubes and stored at -20°C until analysis for the presence of IgG and IgM antibodies against T. gondii by a commercially available enzyme-linked immunosorbent assay (ELISA) kit (Demeditec Diagnostics GmbH, Germany), conducted according to the manufacturer’s instructions. The optical densities of the wells were measured by a photometer at a wavelength of 450 nm. Data collected were analysed using appropriate computer software. The overall seroprevalence of T. gondii among women of child-bearing age in the seven selected communities in Osun State was 76.3%. Within the 76.3% positive for T. gondii infection, 70.0% were positive for anti-T. gondii IgG, 32.3% were positive for IgM, and 26.7% for both IgG and IgM. The prevalence of T. gondii was lowest (58.9%) among women from Ile-Ife, a peri-urban community, and highest (100%) in women residing in Alajue, a rural community. The prevalence of infection was significantly higher (p < 0.001) among Islamic women (87.5%) than in Christian women (70.8%). The highest prevalence (86.3%) was recorded in women with primary education, while the lowest (61.2%) was recorded in women with tertiary education (p = 0.016).
The highest prevalence (79.7%) was recorded in women residing in rural areas, and the lowest (70.1%) in women residing in peri-urban areas (p = 0.025). The prevalence of T. gondii infection was highest (81.4%) in women with one miscarriage, while the prevalence was lowest in women with no miscarriages (75.9%). The age of the women (p = 0.042), Islamic religion (p = 0.001), the residence of the women (p = 0.001), and water source were all positively associated with T. gondii infection. The study concluded that there was a high seroprevalence of T. gondii among women of child-bearing age in the study area. Hence, there is a need for health education to create awareness of the disease and its transmission among women of reproductive age in general, and pregnant women in particular, to reduce the risk of T. gondii infection in pregnancy.
Keywords: seroepidemiology, Toxoplasma gondii, women, child-bearing age, communities, Ile-Ife, Nigeria
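Seroprevalence estimates such as the 76.3% reported above are usually accompanied by a confidence interval; a minimal sketch using the Wilson score interval (the sample size n = 300 is a placeholder assumption, since the abstract reports only percentages):

```python
import math

def wilson_ci(positives, n, z=1.96):
    """95% Wilson score confidence interval for a prevalence estimate."""
    p = positives / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Illustrative numbers: overall seroprevalence of 76.3% applied to a
# hypothetical sample of 300 women (the abstract does not give n).
n = 300
positives = round(0.763 * n)   # 229 seropositive women
lo, hi = wilson_ci(positives, n)
```

The Wilson interval behaves better than the naive normal approximation when the prevalence is close to 0% or 100%, which matters for sub-groups such as the Alajue community (100% positive).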
Procedia PDF Downloads 177
287 Implementing Online Blogging in Specific Context Using Process-Genre Writing Approach in Saudi EFL Writing Class to Improve Writing Learning and Teaching Quality
Authors: Sultan Samah A. Alenezi
Abstract:
Many EFL teachers are eager to look into the best ways to meet the needs of their students in EFL writing courses. Numerous studies suggest that online blogging may present a social interaction opportunity for EFL writing students. Additionally, it can foster peer collaboration and social support in the form of scaffolding, which, when viewed from the perspective of socio-cultural theory, can boost social support and foster the development of students' writing abilities. This idea is based on Vygotsky's theories, which emphasize how collaboration and social interaction facilitate effective learning. In Saudi Arabia, students are taught to write using conventional methods that are totally under the teacher's control; without any peer contact or cooperation, students are spoon-fed in a passive environment. This study incorporated the cognitive processes of the genre-process approach into the EFL writing classroom to facilitate the use of online blogging in EFL writing education. Thirty second-year undergraduate students from the Department of Languages and Translation at a Saudi college participated in the study. The study employed an action research project that blended qualitative and quantitative methodologies to understand Saudi students' perceptions and experiences of online blogging in an EFL process-genre writing classroom. It also looked at the advantages and challenges they faced when blogging. The data sources included a survey, interviews, and blog posts made by students. The intervention's outcomes showed that merging genre-process procedures with blogging was a successful tactic, and the Saudi students' perceptions of this method of online blogging for EFL writing were quite positive. The socio-cultural theory constructs that Vygotsky advocates, such as scaffolding, collaboration, and social interaction, were also improved by blogging.
These elements demonstrated the improvement in the students' writing, reading, social, and collaborative thinking skills, as well as their positive attitudes toward English-language writing. However, the students encountered a variety of problems that made blogging difficult for them, ranging from technological issues, such as sluggish internet connections, to learner inadequacies, such as a lack of computer know-how and ineffective time management.
Keywords: blogging, process-genre approach, Saudi learners, writing quality
Procedia PDF Downloads 120
286 Generative Pre-Trained Transformers (GPT-3) and Their Impact on Higher Education
Authors: Sheelagh Heugh, Michael Upton, Kriya Kalidas, Stephen Breen
Abstract:
This article aims to create awareness of the opportunities and issues that the artificial intelligence (AI) tool GPT-3 (Generative Pre-trained Transformer 3) brings to higher education. Technological disruptors have featured in higher education (HE) since Konrad Zuse developed the first functional programmable automatic digital computer. The flurry of technological advances, such as personal computers, smartphones, the world wide web, search engines, and artificial intelligence, has regularly caused disruption and discourse across the educational landscape around harnessing the change for good. Accepting that AI's influence is inevitable, we took a mixed-methods approach through participatory action research and evaluation. Joining HE communities, reviewing the literature, and conducting our own research around Chat GPT-3, we reviewed our institutional approach to changing our current practices and developing policy linked to assessments and the use of Chat GPT-3. We review the impact on HE of GPT-3, a high-powered natural language processing (NLP) system first seen in 2020. Historically, HE has flexed and adapted with each technological advancement, and the latest debates among educationalists focus on the issues around this version of AI, which creates natural human-language text from prompts and can also generate code and images. This paper explores how Chat GPT-3 affects the current educational landscape: we debate current views around plagiarism, research misconduct, and the credibility of assessment, and determine the tool's value in developing skills for the workplace and enhancing critical analysis skills. These questions led us to review our institutional policy and explore the effects on our current assessments and the development of new assessments. Conclusions: After exploring the pros and cons of Chat GPT-3, it is evident that this form of AI cannot be un-invented. Technology needs to be harnessed for positive outcomes in higher education.
We have observed the materials developed through AI and their potential effects on our development of future assessments and teaching methods. Materials developed through Chat GPT-3 can still aid student learning, but they have led us to redevelop our institutional policy around plagiarism and academic integrity.
Keywords: artificial intelligence, Chat GPT-3, intellectual property, plagiarism, research misconduct
Procedia PDF Downloads 89
285 Proposal of a Rectenna Built by Using Paper as a Dielectric Substrate for Electromagnetic Energy Harvesting
Authors: Ursula D. C. Resende, Yan G. Santos, Lucas M. de O. Andrade
Abstract:
The recent and fast development of the internet, wireless and telecommunication technologies, and low-power electronic devices has led to a considerable amount of electromagnetic energy being available in the environment and to the expansion of smart application technologies. These applications have been used in Internet of Things devices and 4G and 5G solutions. The main feature of this technology is the use of wireless sensors. Although these sensors are low-power loads, their use imposes huge challenges in terms of an efficient and reliable power supply that avoids the traditional battery. Radio-frequency-based energy harvesting technology is especially suitable for powering wireless sensors using a rectenna, since it can be completely integrated into the structure hosting the distributed sensors, reducing cost, maintenance, and environmental impact. A rectenna is a device composed of an antenna and a rectifier circuit. The antenna's function is to collect as much radio-frequency radiation as possible and transfer it to the rectifier, a nonlinear circuit that converts the very low input radio-frequency energy into a direct-current voltage. In this work, a set of rectennas mounted on a paper substrate, which can be used for the inner coating of buildings while simultaneously harvesting electromagnetic energy from the environment, is proposed. Each individual rectenna is composed of a 2.45 GHz patch antenna and a voltage-doubler rectifier circuit built on the same paper substrate. The antenna contains a rectangular radiator element and a microstrip transmission line that was designed and optimized using the CST (Computer Simulation Technology) software in order to obtain values of the S11 parameter below -10 dB at 2.45 GHz. In order to increase the amount of harvested power, eight individual rectennas incorporating metamaterial cells were connected in parallel, forming a system denominated the Electromagnetic Wall (EW).
In order to evaluate the EW performance, it was positioned at a variable distance from an internet router and fed a 27 kΩ resistive load. The results obtained showed that if more than one rectenna is associated in parallel, a power level sufficient to feed very-low-consumption sensors can be achieved. The 0.12 m² EW proposed in this work was able to harvest 0.6 mW from the environment. It was also observed that the use of metamaterial structures provides an expressive growth in the amount of electromagnetic energy harvested, which increased from 0.2 mW to 0.6 mW.
Keywords: electromagnetic energy harvesting, metamaterial, rectenna, rectifier circuit
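The scale of the harvested power can be sanity-checked with a free-space link budget; a minimal sketch, where the router transmit power, antenna gains, and distance are illustrative assumptions rather than the paper's measurements:

```python
import math

def friis_received_power(pt_w, gt, gr, freq_hz, dist_m):
    """Friis free-space link budget: Pr = Pt * Gt * Gr * (lambda / (4*pi*d))^2."""
    lam = 3e8 / freq_hz                      # wavelength in metres
    return pt_w * gt * gr * (lam / (4 * math.pi * dist_m)) ** 2

# Illustrative link: a 100 mW Wi-Fi router at 1 m with unity-gain antennas.
pr = friis_received_power(pt_w=0.1, gt=1.0, gr=1.0, freq_hz=2.45e9, dist_m=1.0)

# Areal power density actually reported for the wall in the abstract:
# 0.6 mW harvested over 0.12 m^2, i.e. 5 mW/m^2.
density_mw_per_m2 = 0.6 / 0.12
```

Under these assumptions a single receiving element intercepts on the order of 10 µW at 1 m, which is consistent with needing many paralleled rectennas (and rectifier efficiency losses) to reach useful sensor supply levels.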
Procedia PDF Downloads 166
284 GIS Technology for Environmentally Polluted Sites with Innovative Process to Improve the Quality and Assesses the Environmental Impact Assessment (EIA)
Authors: Hamad Almebayedh, Chuxia Lin, Yu wang
Abstract:
The environmental impact assessment (EIA) must be improved, assessed, and quality-checked for human and environmental health and safety. Soil contamination is expanding, and site and soil remediation activities are proceeding around the world. Put simply, quality soil characterization leads to a quality EIA: it illuminates the level and extent of contamination and reveals the unknowns, showing the way forward to remediating, quantifying, containing, minimizing, and eliminating the environmental damage. Spatial interpolation methods play a significant role in decision making, planning remediation strategies, environmental management, and risk assessment, as they provide essential elements of site characterization, which need to be fed into the EIA. The innovative 3D soil mapping and characterization technology presented in this research paper reveals unknown information and the extent of the contaminated soil in particular, and enhances soil characterization information in general, which improves the information provided in the EIA for specific sites. The foremost aims of this research paper are to present a novel 3D mapping technology for characterizing and estimating, in a high-quality and cost-effective manner, the distribution of key soil characteristics in contaminated sites, and to develop an innovative process/procedure of "assessment measures" for EIA quality and assessment. A contaminated-site field investigation was conducted with the innovative 3D mapping technology to characterize the composition of petroleum-hydrocarbon-contaminated soils in a decommissioned oilfield waste pit in Kuwait. The results show the depth and extent of the contamination, which have been entered into a developed assessment process and procedure for the EIA quality review checklist to enhance the EIA and drive remediation and risk assessment strategies.
We have concluded that, to minimize the possible adverse environmental impacts on the investigated site in Kuwait, a soil-capping approach may be sufficient and may represent a cost-effective management option, as the environmental risk from the contaminated soils is considered to be relatively low. This research paper adopts a multi-method approach involving a review of the existing literature related to the research area, case studies, and computer simulation.
Keywords: quality EIA, spatial interpolation, soil characterization, contaminated site
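Of the spatial interpolation methods mentioned in the abstract, inverse-distance weighting (IDW) is among the simplest; a minimal sketch, where the borehole coordinates and contaminant concentrations are hypothetical:

```python
import math

def idw(sample_points, query, power=2.0):
    """Inverse-distance-weighted estimate of a soil property at `query`
    from (x, y, value) sample points."""
    num = den = 0.0
    for x, y, v in sample_points:
        d = math.hypot(query[0] - x, query[1] - y)
        if d < 1e-12:
            return v                     # query coincides with a sample
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# Hypothetical total-petroleum-hydrocarbon concentrations (mg/kg)
# at four borehole locations (x, y in metres).
samples = [(0, 0, 1200.0), (10, 0, 300.0), (0, 10, 800.0), (10, 10, 150.0)]
estimate = idw(samples, (5, 5))
```

In practice, geostatistical methods such as kriging are often preferred over IDW because they also quantify estimation uncertainty, which feeds directly into the risk assessment step described above.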
Procedia PDF Downloads 88
283 A Unified Approach for Digital Forensics Analysis
Authors: Ali Alshumrani, Nathan Clarke, Bogdan Ghite, Stavros Shiaeles
Abstract:
Digital forensics has become an essential tool in the investigation of cyber and computer-assisted crime. Arguably, given the prevalence of technology and the digital footprints that result, it could have a significant role across almost all crimes. However, the variety of technology platforms (such as computers, mobiles, Closed-Circuit Television (CCTV), Internet of Things (IoT) devices, databases, drones, and cloud computing services), the heterogeneity and volume of data, forensic tool capability, and the investigative cost make investigations both technically challenging and prohibitively expensive. Forensic tools also tend to be siloed into specific technologies, e.g., File System Forensic Analysis Tools (FS-FAT) and Network Forensic Analysis Tools (N-FAT), and many data sources have little to no specialist forensic tooling. Increasingly, it also becomes essential to compare and correlate evidence across data sources, and to do so in an efficient and effective manner that enables an investigator to answer high-level questions of the data in a timely fashion without having to trawl through it and perform the correlation manually. This paper proposes a Unified Forensic Analysis Tool (U-FAT), which aims to establish a common language for electronic information and permit multi-source forensic analysis. Core to this approach is the identification and development of forensic analyses that automate complex data correlations, enabling investigators to investigate cases more efficiently. The paper presents a systematic analysis of major crime categories and identifies which forensic analyses could be used.
For example, in a child abduction, an investigation team might have evidence from a range of sources, including computing devices (mobile phone, PC), CCTV (potentially a large number of cameras), ISP records, and mobile network cell tower data, in addition to third-party databases such as the National Sex Offender registry and tax records, with the desire to auto-correlate across sources and visualize the results in a cognitively effective manner. U-FAT provides a holistic, flexible, and extensible approach to digital forensics in a technology-, application-, and data-agnostic manner, providing powerful and automated forensic analysis.
Keywords: digital forensics, evidence correlation, heterogeneous data, forensic tools
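A first step in the kind of automated cross-source correlation described above is aligning records on a common timeline; a minimal sketch, where the source names, record format, and 60-second window are illustrative assumptions rather than U-FAT's actual design:

```python
from datetime import datetime, timedelta

def correlate_by_time(events, window_s=60):
    """Group timestamped records from heterogeneous sources into clusters
    whose consecutive events fall within `window_s` seconds of each other."""
    events = sorted(events, key=lambda e: e["time"])
    clusters, current = [], []
    for ev in events:
        if current and (ev["time"] - current[-1]["time"]).total_seconds() > window_s:
            clusters.append(current)
            current = []
        current.append(ev)
    if current:
        clusters.append(current)
    return clusters

# Hypothetical records from three evidence sources.
t0 = datetime(2023, 5, 1, 14, 0, 0)
events = [
    {"source": "cctv", "time": t0},
    {"source": "cell_tower", "time": t0 + timedelta(seconds=30)},
    {"source": "isp_log", "time": t0 + timedelta(hours=2)},
]
clusters = correlate_by_time(events)
```

Real multi-source correlation must additionally handle clock skew between devices and normalize heterogeneous schemas into a common representation, which is exactly the "common language" the abstract argues for.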
Procedia PDF Downloads 196
282 Impact of Intelligent Transportation System on Planning, Operation and Safety of Urban Corridor
Authors: Sourabh Jain, S. S. Jain
Abstract:
Intelligent transportation systems (ITS) apply technology to develop user-friendly transportation systems that extend the safety and efficiency of urban transportation in developing countries. These systems involve vehicles, drivers, passengers, road operators, and managers of transport services, all interacting with each other and the surroundings to boost the security and capacity of road systems. The goal of urban corridor management using ITS in road transport is to achieve improvements in mobility, safety, and the productivity of the transportation system within the available facilities, through the integrated application of advanced monitoring, communications, computer, display, and control process technologies, both in the vehicle and on the road. Intelligent transportation systems are a product of the revolution in information and communications technologies that is the hallmark of the digital age. The basic ITS technology is oriented in three main directions: communications, information, and integration. Information acquisition (collection), processing, integration, and sorting are the basic activities of ITS. In this paper, an attempt has been made to interpret and evaluate the performance of the 27.4 km long study corridor, which has eight intersections and four flyovers and consists of six-lane and eight-lane divided road sections. Two categories of data were collected: traffic data (traffic volume, spot speed, delay) and road characteristics data (number of lanes, lane width, bus stops, mid-block sections, intersections, flyovers). The instruments used for collecting the data were a video camera, stopwatch, radar gun, and mobile GPS (GPS Tracker Lite). From the analysis, the performance interpretations incorporated were the identification of peak and off-peak hours, congestion, and level of service (LOS) at mid-block sections, and delay, followed by the plotting of speed contours.
The paper proposes urban corridor management strategies, based on sensors integrated into both vehicles and roads, that have to be efficiently executable, cost-effective, and familiar to road users. These strategies will be useful in reducing congestion, fuel consumption, and pollution, so as to provide comfort, safety, and efficiency to the users.
Keywords: ITS strategies, congestion, planning, mobility, safety
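Level of service (LOS), mentioned in the abstract, is commonly graded A-F from the volume-to-capacity ratio of a road section; a minimal sketch, where the grade band edges and the traffic volumes are illustrative rather than HCM-exact or the corridor's measured values:

```python
def level_of_service(volume_vph, capacity_vph):
    """Crude LOS grade from the volume-to-capacity (v/c) ratio;
    the band edges here are illustrative, not HCM-exact."""
    vc = volume_vph / capacity_vph
    bands = [(0.35, "A"), (0.54, "B"), (0.77, "C"),
             (0.93, "D"), (1.00, "E")]
    for upper, grade in bands:
        if vc <= upper:
            return grade
    return "F"          # demand exceeds capacity: forced flow

# Hypothetical mid-block counts (vehicles per hour) on a six-lane section.
peak = level_of_service(volume_vph=5200, capacity_vph=5400)      # near saturation
off_peak = level_of_service(volume_vph=1800, capacity_vph=5400)
```

Comparing such grades between peak and off-peak hours at each mid-block section is one way the congestion identification described above can be automated from the collected volume and speed data.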
Procedia PDF Downloads 179
281 Femoral Neck Anteversion and Neck-Shaft Angles: Determination and Their Clinical Implications in Fetuses of Different Gestational Ages
Authors: Vrinda Hari Ankolekar, Anne D. Souza, Mamatha Hosapatna
Abstract:
Introduction: Precise anatomical assessment of femoral neck anteversion (FNA) and the neck-shaft angle (NSA) is essential in diagnosing pathological conditions involving the hip joint and its ligaments. An FNA of greater than 20 degrees is considered excessive femoral anteversion, whereas a torsion angle of less than 10 degrees is considered femoral retroversion. Excessive femoral torsion is not uncommon and has been associated with certain neurologic and orthopedic conditions. The enlargement and maturation of the hip joint increase at the 20th week of gestation, and the NSA ranges from 135-140° at birth. Materials and methods: 48 femurs were tagged according to gestational age (GA), and two photographs of each femur were taken using a Nikon digital camera. Each femur was placed on a horizontal hard desk; an end-on image of the upper end was taken for the estimation of the FNA, and a photograph in a perpendicular plane was taken to calculate the NSA. The images were transferred to a computer and stored in TIFF format. Microsoft Paint software was used to mark the points, and ImageJ software was used to calculate the angles digitally. 1. Calculation of FNA: The midpoints of the femoral head and the neck were marked, and a line was drawn joining these two points. The angle made by this line with the horizontal plane was measured as the FNA. 2. Calculation of NSA: The midpoints of the femoral head and the neck were marked, and a line was drawn joining these two points. A vertical line was drawn passing through the tip of the greater trochanter to the intercondylar notch. The angle formed by these lines was calculated as the NSA. Results: The paired t-test for inter-observer variability showed no significant difference between the values of the two observers (FNA: t=-1.06 and p=0.31; NSA: t=-0.09 and p=0.9). The FNA ranged from 17.08° to 33.97° on the right and 17.32° to 45.08° on the left. The NSA ranged from 139.33° to 124.91° on the right and 143.98° to 123.8° on the left.
An unpaired t-test applied to compare the mean angles between the second and third trimesters showed no statistical significance, indicating that the FNA and NSA of the femur did not vary significantly during the third trimester. The FNA and NSA were correlated with GA using Pearson's correlation. The FNA appeared to increase with GA (r=0.5), but the increase was not statistically significant. A decrease in the NSA with GA was also noted (r=-0.3), which was also statistically not significant. Conclusion: The present study evaluates the FNA and NSA of the femur in fetuses and correlates their development with GA during the second and third trimesters. The FNA and NSA did not vary significantly during the third trimester.
Keywords: anteversion, coxa antetorsa, femoral torsion, femur neck shaft angle
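The digital angle measurement described above reduces to computing the angle between two lines defined by marked landmarks; a minimal sketch for the NSA (the pixel coordinates are hypothetical, not the study's data):

```python
import math

def angle_between(p_head, p_neck, p_trochanter, p_notch):
    """Angle, in degrees, between the head-neck axis (head midpoint to
    neck midpoint) and the shaft axis (greater trochanter tip to
    intercondylar notch), from digitised 2-D landmarks."""
    ax, ay = p_neck[0] - p_head[0], p_neck[1] - p_head[1]
    bx, by = p_notch[0] - p_trochanter[0], p_notch[1] - p_trochanter[1]
    dot = ax * bx + ay * by
    cross = ax * by - ay * bx
    return abs(math.degrees(math.atan2(cross, dot)))

# Hypothetical pixel coordinates marked on a photograph
# (image y-axis increases downward, as in most image software).
nsa = angle_between((100, 150), (160, 100), (170, 60), (175, 300))
```

With these illustrative landmarks the computed NSA falls near 129°, within the 123.8°-143.98° range the study reports; ImageJ's angle tool performs essentially this calculation from three or four clicked points.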
Procedia PDF Downloads 319
280 Inappropriate Effects Which the Use of Computer and Playing Video Games Have on Young People
Authors: Maja Ruzic-Baf, Mirjana Radetic-Paic
Abstract:
The use of computers by children has many positive aspects, including the development of memory, learning methods, problem-solving skills, and a feeling of one's own competence and self-confidence. Playing online video games can encourage spending time with peers having similar interests, as well as communication; it develops coordination, spatial relations, and presentation. On the other hand, the Internet enables quick access to different information and the exchange of experiences. How kids use computers, and what the negative effects of this can be, depends on various factors. ICT has improved and become easily accessible to everyone. In the past 12 years, so many video games have been made that some of them are even free to play. Young people, and even some adults, have simply started to forget about the real outside world, because in that other, digital world, they have found something that makes them feel more worthy. This article presents the use of ICT, forms of behavior, and addictions to online video games.
Keywords: addiction to video games, behaviour, ICT, young people
Procedia PDF Downloads 545
279 Reduction of the Risk of Secondary Cancer Induction Using VMAT for Head and Neck Cancer
Authors: Jalil ur Rehman, Ramesh C, Tailor, Isa Khan, Jahanzeeb Ashraf, Muhammad Afzal, Geofferry S. Ibbott
Abstract:
The purpose of this analysis is to estimate secondary cancer risks after VMAT compared to other modalities of head and neck radiotherapy (IMRT, 3DCRT). Computed tomography (CT) scans of the Radiological Physics Center (RPC) head and neck phantom were acquired with a CT scanner and exported via DICOM to the treatment planning system (TPS). Treatment planning was done using four arcs (182-178 and 180-184 degrees, clockwise and anticlockwise) for volumetric modulated arc therapy (VMAT); nine fields (200, 240, 280, 320, 0, 40, 80, 120, and 160 degrees), as commonly used at MD Anderson Cancer Center, Houston, for intensity modulated radiation therapy (IMRT); and four fields for three-dimensional conformal radiation therapy (3DCRT). A TrueBeam linear accelerator with 6 MV photon energy was used for dose delivery, and dose calculation was done with the CC convolution algorithm with a prescription dose of 6.6 Gy. Planning target volume (PTV) coverage, mean and maximal doses, DVHs, and the volumes of OARs receiving more than 2 Gy and 3.8 Gy were calculated and compared. Absolute point dose and planar dose were measured with thermoluminescent dosimeters (TLDs) and GafChromic EBT2 film, respectively. Quality assurance of VMAT and IMRT was performed using the ArcCHECK method with gamma index criteria of 3%/3 mm dose difference/distance to agreement (DD/DTA). PTV coverage was found to be 90.80%, 95.80%, and 95.82% for 3DCRT, IMRT, and VMAT, respectively. VMAT delivered the lowest maximal doses to the esophagus (2.3 Gy), brain (4.0 Gy), and thyroid (2.3 Gy) of all the studied techniques. In comparison, maximal doses for 3DCRT were higher than for VMAT for all studied OARs, whereas IMRT delivered maximal doses 26%, 5%, and 26% higher for the esophagus, normal brain, and thyroid, respectively, compared to VMAT. It was noted that the esophagus volume receiving more than 2 Gy was 3.6% for VMAT, 23.6% for IMRT, and up to 100% for 3DCRT. Good agreement was observed between measured doses and those calculated with the TPS.
The average relative standard errors (RSE) of three deliveries within eight TLD capsule locations were 0.9%, 0.8%, and 0.6% for 3DCRT, IMRT, and VMAT, respectively. The gamma analysis for all plans met the ±5%/3 mm criteria (over 90% passed), and the QA results were greater than 98%. The calculations for maximal doses and volumes of OARs suggest that the estimated risk of secondary cancer induction after VMAT is considerably lower than after IMRT and 3DCRT.
Keywords: RPC, 3DCRT, IMRT, VMAT, EBT2 film, TLD
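The gamma analysis used for plan QA above compares a measured dose distribution to the planned one point by point, combining a dose-difference tolerance with a distance-to-agreement tolerance; a simplified 1-D sketch at 3%/3 mm (the dose profiles are synthetic, not the study's measurements):

```python
import math

def gamma_index_1d(ref, meas, positions_mm, dd=0.03, dta_mm=3.0):
    """Simplified 1-D gamma analysis: for each reference point, take the
    minimum over measured points of sqrt(dose term^2 + distance term^2).
    A point passes when its gamma value is <= 1."""
    gammas = []
    for x_r, d_r in zip(positions_mm, ref):
        best = float("inf")
        for x_m, d_m in zip(positions_mm, meas):
            dose_term = ((d_m - d_r) / (dd * d_r)) ** 2
            dist_term = ((x_m - x_r) / dta_mm) ** 2
            best = min(best, math.sqrt(dose_term + dist_term))
        gammas.append(best)
    return gammas

# Synthetic profile: measured dose uniformly 1% high, well inside 3%/3 mm.
positions = [0.0, 1.0, 2.0, 3.0, 4.0]
ref_dose = [1.00, 1.02, 1.05, 1.02, 1.00]
meas_dose = [d * 1.01 for d in ref_dose]
gammas = gamma_index_1d(ref_dose, meas_dose, positions)
pass_rate = sum(g <= 1.0 for g in gammas) / len(gammas)
```

Clinical systems such as ArcCHECK perform this search in 2-D or 3-D over interpolated dose grids, but the pass criterion (fraction of points with gamma ≤ 1) is the same idea as this sketch.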
Procedia PDF Downloads 507
278 Validation of a Placebo Method with Potential for Blinding in Ultrasound-Guided Dry Needling
Authors: Johnson C. Y. Pang, Bo Peng, Kara K. L. Reeves, Allan C. L. Fud
Abstract:
Objective: Dry needling (DN) has long been used as a treatment for various musculoskeletal pain conditions. However, the evidence level of the studies has been low due to methodological limitations: lack of randomization and inappropriate blinding are potentially the main sources of bias. A method that can differentiate clinical results due to the targeted experimental procedure from its placebo effect is needed to enhance the validity of trials. Therefore, this study aimed to validate a placebo ultrasound (US)-guided DN method for patients with knee osteoarthritis (KOA). Design: This is a randomized controlled trial (RCT). Ninety subjects (25 males and 65 females) aged between 51 and 80 (61.26 ± 5.57) with radiological KOA were recruited and randomly assigned into three groups with a computer program. Group 1 (G1) received real US-guided DN, Group 2 (G2) received placebo US-guided DN, and Group 3 (G3) was the control group. Both G1 and G2 subjects received the same US-guided DN procedure, except that the US monitor was turned off in G2, blinding the G2 subjects to the incorporation of faux US guidance. This arrangement created the placebo effect intended to permit comparison of their results with those of subjects who received actual US-guided DN. Outcome measures, including the visual analog scale (VAS) and the Knee injury and Osteoarthritis Outcome Score (KOOS) subscales of pain, symptoms, and quality of life (QOL), were analyzed by repeated-measures analysis of covariance (ANCOVA) for time effects and group effects. The data regarding the perception of receiving real or placebo US-guided DN were analyzed by the chi-squared test. Missing data were to be analyzed with the intention-to-treat (ITT) approach if more than 5% of the data were missing. Results: The placebo US-guided DN (G2) subjects had the same perception of receiving real US guidance in the advancement of DN (p = 0.128).
G1 had significantly higher pain reduction (VAS and KOOS-pain) than G2 and G3 at 8 weeks (both p<0.05). There was no significant difference between G2 and G3 at 8 weeks (both p>0.05). Conclusion: The method with the US monitor turned off during the application of DN is credible for blinding the participants and allows researchers to incorporate faux US guidance. The validated placebo US-guided DN technique can aid investigations of the short-term pain-reduction effects of US-guided DN for patients with KOA. Acknowledgment: This work was supported by the Caritas Institute of Higher Education [grant number IDG200101].
Keywords: ultrasound-guided dry needling, dry needling, knee osteoarthritis, physiotherapy
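The blinding check described above hinges on a chi-squared test of perception counts between the two needling groups; a minimal sketch for a 2x2 table (the counts are hypothetical, not the trial's data):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical perception counts: rows = real vs placebo US group,
# columns = believed "real" vs believed "placebo" guidance.
chi2 = chi_square_2x2(24, 6, 21, 9)
```

With 1 degree of freedom, the 5% critical value is about 3.84; a statistic below it, as in this illustrative table, is consistent with successful blinding because the groups' perceptions do not differ detectably.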
Procedia PDF Downloads 119
277 Temperamental Determinants of Eye-Hand Coordination Formation in the Special Aerial Gymnastics Instruments (SAGI)
Authors: Zdzisław Kobos, Robert Jędrys, Zbigniew Wochyński
Abstract:
Motor activity and good health are sine qua non determinants of proper professional practice, especially in aviation. Therefore, candidates for aviation are selected according to their psychomotor abilities by specialist medical commissions. Moreover, they must pass an examination of physical fitness. During studies at the air force academy, eye-hand coordination is formed in two stages: besides general physical education, future aircraft pilots must undertake specialist training on the special aerial gymnastics instruments (SAGI). Training includes the looping, the aerowheel, and the gyroscope. The aim of training on the above-listed apparatuses is to form eye-hand coordination during tasks in the air. Such coordination is necessary to perform various figures in real flight. Therefore, during the education of future pilots, determinants of the effective functioning of this important parameter of the human body are sought. Several studies in sport psychology indicate an important role of temperament as a factor determining human behavior during task performance and the acquisition of operating skills. The Polish psychologist Jan Strelau refers to temperament as the basic, relatively constant personality features which manifest themselves in the formal characteristics of human behavior. Temperament, initially determined by inborn physiological mechanisms, changes in the course of maturation and under some environmental factors, and concerns the energetic level and temporal characteristics of reactions. Objectives: This study aimed at seeking a relationship between temperamental features and eye-hand coordination formation during training on SAGI. Material and methods: A group of 30 pilotage students was examined in two situations. The first assessment of eye-hand coordination level was carried out before the beginning of a 30-hour training program on SAGI, and the second assessment was carried out after training completion. Training lasted for 2 hours once a week.
Temperament was evaluated with the Formal Characteristics of Behavior − Temperament Inventory (FCB-TI) developed by Bogdan Zawadzki and Jan Strelau. Eye-hand coordination was assessed with a computer version of the Warsaw System of Psychological Tests. Results: It was found that the training on SAGI increased the level of eye-hand coordination in the examined students. Conclusions: A higher level of eye-hand coordination was obtained after completion of the training. Moreover, the relationship between eye-hand coordination level and selected temperamental features was statistically significant. Keywords: temperament, eye-hand coordination, pilot, SAGI
Procedia PDF Downloads 440
276 Estimating the Ladder Angle and the Camera Position From a 2D Photograph Based on Applications of Projective Geometry and Matrix Analysis
Authors: Inigo Beckett
Abstract:
In forensic investigations, it is often the case that the most potentially useful recorded evidence derives from coincidental imagery, recorded immediately before or during an incident, and that during the incident (e.g. a ‘failure’ or fire event) the evidence is changed or destroyed. For an image analysis expert involved in photogrammetric analysis for civil or criminal proceedings, traditional computer vision methods involving calibrated cameras are often not appropriate because image metadata cannot be relied upon. This paper presents an approach for resolving this problem, considering in particular, by way of a case study, the angle of a simple ladder shown in a photograph. The UK Health and Safety Executive (HSE) guidance document published in 2014 (INDG455) advises that a leaning ladder should be erected at 75 degrees to the horizontal. Personal injury cases can arise in the construction industry because a ladder is too steep or too shallow, and ad-hoc photographs of such ladders in their incident position provide a basis for analysis of their angle. This paper presents a direct approach for ascertaining the position of the camera and the angle of the ladder simultaneously from the photograph(s), by way of a workflow that encompasses a novel application of projective geometry and matrix analysis. Mathematical analysis shows that, for a given pixel ratio of directly measured collinear points (i.e. features that lie on the same line segment) in the 2D digital photograph with respect to a given viewing point, the 3D camera position can be constrained to the surface of a sphere in the scene. Depending on what is known about the ladder, another independent constraint can be enforced on the possible camera positions, narrowing them down even further. Experiments were conducted using synthetic and real-world data. The synthetic data modeled a ladder resting against a vertical wall on a horizontally flat plane.
The real-world data were captured using an Apple iPhone 13 Pro and 3D laser scan survey data, whereby a ladder was placed at a known location and angle to the vertical axis. For each case, we calculated camera positions and ladder angles using this method and cross-compared them against their respective ‘true’ values. Keywords: image analysis, projective geometry, homography, photogrammetry, ladders, forensics, mathematical modeling, planar geometry, matrix analysis, collinear, cameras, photographs
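The projective-geometry workflow above rests on the fact that ratios measured between collinear image points carry recoverable information about the 3D scene. A minimal sketch of the underlying invariant, the cross-ratio of four collinear points, which is preserved by any projective (pinhole-camera) mapping of a scene line onto the image. This is an illustration of the general principle, not the authors' implementation; the point coordinates are invented:

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points given by 1D coordinates
    along their common line: ((c-a)(d-b)) / ((c-b)(d-a))."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

def project(x, p, q, r, s):
    """A 1D projective map x -> (p*x + q) / (r*x + s), the form a
    pinhole camera induces on points along a single scene line."""
    return (p * x + q) / (r * x + s)

# Four marks along a ladder rail (scene coordinates, e.g. metres; invented).
scene = [0.0, 1.0, 3.0, 6.0]
# Their images under an arbitrary projective map (a stand-in for the camera).
image = [project(x, 2.0, 1.0, 1.0, 3.0) for x in scene]

cr_scene = cross_ratio(*scene)  # 1.25
cr_image = cross_ratio(*image)  # equal to cr_scene, up to rounding
```

Because the cross-ratio survives projection, a value measured in pixels constrains the family of cameras that could have produced the photograph, which is the kind of constraint the paper exploits.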
Procedia PDF Downloads 52
275 Innovations in the Implementation of Preventive Strategies and Measuring Their Effectiveness Towards the Prevention of Harmful Incidents to People with Mental Disabilities Who Receive Home and Community Based Services
Authors: Carlos V. Gonzalez
Abstract:
Background: Providers of in-home and community-based services strive to eliminate preventable harm to the people under their care as well as to the employees who support them. Traditional models of safety and protection from harm have assumed that the absence of incidents of harm is a good indicator of safe practices. However, this model creates an illusion of safety that is easily shaken by sudden and inadvertent harmful events. As an alternative, we have developed and implemented an evidence-based resilient model of safety known as C.O.P.E. (Caring, Observing, Predicting and Evaluating). Within this model, safety is defined not by the absence of harmful incidents, but by the presence of continuous monitoring, anticipation, learning, and rapid response to events that may lead to harm. Objective: The objective was to evaluate the effectiveness of the C.O.P.E. model for the reduction of harm to individuals with mental disabilities who receive home and community-based services. Methods: Over the course of 2 years, we counted the number of incidents of harm and near misses. We trained employees on strategies to eliminate incidents before they fully escalated, and to track different levels of patient status on a scale from 0 to 10. Additionally, we provided direct support professionals and supervisors with customized smartphone applications to track and notify the team of changes in that status every 30 minutes. Finally, the information we collected was saved in a private computer network that analyzes and graphs the outcome of each incident. Results and conclusions: The use of the C.O.P.E. model resulted in: A reduction in incidents of harm. A reduction in the use of restraints and other physical interventions. An increase in direct support professionals' ability to detect and respond to health problems. Improvement in employee alertness by decreasing sleeping on duty.
Improvement in caring and positive interaction between direct support professionals and the person supported. Development of a method to globally measure and assess the effectiveness of harm-prevention plans. Future applications of the C.O.P.E. model for the reduction of harm to people who receive home and community-based services are discussed. Keywords: harm, patients, resilience, safety, mental illness, disability
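The 0-to-10 status scale reported every 30 minutes lends itself to a simple threshold rule for deciding which readings warrant team notification. The sketch below is a hypothetical illustration of that mechanism; the field names, threshold value, and scale direction are assumptions, not the providers' actual software:

```python
from dataclasses import dataclass

ALERT_THRESHOLD = 7  # assumed: readings at or above this value trigger escalation

@dataclass
class StatusReading:
    person_id: str
    minute: int   # minutes since start of shift (readings arrive every 30 min)
    status: int   # 0 (calm/safe) .. 10 (imminent risk) -- assumed direction

def readings_to_escalate(readings):
    """Return the readings whose status crosses the alert threshold,
    so the team can respond before an incident fully escalates."""
    return [r for r in readings if r.status >= ALERT_THRESHOLD]

log = [
    StatusReading("p1", 0, 2),
    StatusReading("p1", 30, 5),
    StatusReading("p1", 60, 8),   # crosses the threshold
]
alerts = readings_to_escalate(log)
```

The essential design point is that the rule acts on trends reported continuously, not on incident reports filed after harm has already occurred, matching the model's emphasis on anticipation over absence of incidents.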
Procedia PDF Downloads 447
274 Guidelines for the Management Process Development of Research Journals in Order to Develop Suan Sunandha Rajabhat University to International Standards
Authors: Araya Yordchim, Rosjana Chandhasa, Suwaree Yordchim
Abstract:
This research aims to study guidelines on the development of a management process for research journals in order to develop Suan Sunandha Rajabhat University to international standards. It investigated elements ranging from the format of the article, the evaluation form for research article quality, and the process of creating a scholarly journal, to the satisfaction level of those with the knowledge and competency to conduct research, the problems that arose, and their solutions. Drawing upon a sample of 40 persons who had the knowledge and competency to conduct research and create scholarly journal articles at an international level, the data for this research were collected using questionnaires. Using computer software, the data were analyzed with statistics in the form of frequency, percentage, mean, standard deviation, and multiple regression analysis. The majority of participants were civil servants with a doctorate degree, followed by civil servants with a master's degree. The suitability of the article format was rated at a good level, as was the evaluation form for research article quality. Based on participants' viewpoints, the process of creating scholarly journals was at a good level, while the satisfaction of those with the knowledge and competency to conduct research was at a satisfactory level. The main problem encountered was difficulty in accessing the website; the solution was to develop a website with user-friendly accessibility, including setting up a Google Scholar profile so that reference counts and the articles being used for reference are available in real time. Research article format influenced the satisfaction of those with the knowledge and competency to conduct research, with statistical significance at the 0.01 level.
The research article quality assessment form (preface section, research article writing section, preparation of research article manuscripts section, and the original article evaluation form for the author) affected the satisfaction of those with the knowledge and competency to conduct research, with statistical significance at the 0.01 level. The process of establishing journals had an impact on the satisfaction of those with the knowledge and ability to conduct research, with statistical significance at the 0.05 level. Keywords: guidelines, development of management, research journals, international standards
Procedia PDF Downloads 124
273 Developing Optical Sensors with Application of Cancer Detection by Elastic Light Scattering Spectroscopy
Authors: May Fadheel Estephan, Richard Perks
Abstract:
Context: Cancer is a serious health concern that affects millions of people worldwide. Early detection and treatment are essential for improving patient outcomes. However, current methods for cancer detection have limitations, such as low sensitivity and specificity. Research Aim: The aim of this study was to develop an optical sensor for cancer detection using elastic light scattering spectroscopy (ELSS). ELSS is a noninvasive optical technique that can be used to characterize the size and concentration of particles in a solution. Methodology: An optical probe was fabricated with a 100-μm-diameter core and a 132-μm centre-to-centre separation. The probe was used to measure the ELSS spectra of polystyrene spheres with diameters of 2, 0.8, and 0.413 μm. The spectra were then analysed to determine the size and concentration of the spheres. Findings: The results showed that the optical probe was able to differentiate between the three different sizes of polystyrene spheres. The probe was also able to detect the presence of polystyrene spheres in suspension concentrations as low as 0.01%. Theoretical Importance: The results of this study demonstrate the potential of ELSS for cancer detection. ELSS is a noninvasive technique that can be used to characterize the size and concentration of cells in a tissue sample. This information can be used to identify cancer cells and assess the stage of the disease. Data Collection: The data for this study were collected by measuring the ELSS spectra of polystyrene spheres with different diameters. The spectra were collected using a spectrometer and a computer. Analysis Procedures: The ELSS spectra were analysed using a software program to determine the size and concentration of the spheres. The software program used a mathematical algorithm to fit the spectra to a theoretical model. Question Addressed: The question addressed by this study was whether ELSS could be used to detect cancer cells. 
The results of the study showed that ELSS could differentiate between particles of different sizes, suggesting that it could be used to detect cancer cells. Conclusion: The findings of this research show the utility of ELSS for the early identification of cancer. ELSS is a noninvasive method for characterizing the number and size of cells in a tissue sample, and this information can be employed to identify cancer cells and determine the disease's stage. Further research is needed to evaluate the clinical performance of ELSS for cancer detection. Keywords: elastic light scattering spectroscopy, polystyrene spheres in suspension, optical probe, fibre optics
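The analysis step described above, fitting a measured spectrum to a theoretical model to recover particle size, can be sketched as a grid search over candidate diameters that minimises the sum-of-squares residual. Note that `model_spectrum` below is a deliberately simplified, invented stand-in for the Mie-theory forward model the study's software would actually use; only the fitting structure is meant to be illustrative:

```python
import math

def model_spectrum(diameter_um, wavelengths_nm):
    # Toy stand-in for a Mie-theory forward model (hypothetical):
    # intensity oscillates with the size parameter x = pi * d / lambda.
    out = []
    for lam in wavelengths_nm:
        x = math.pi * diameter_um * 1000.0 / lam  # convert d to nm
        out.append(1.0 + math.cos(x) ** 2)
    return out

def fit_diameter(measured, wavelengths_nm, candidates_um):
    """Pick the candidate diameter whose predicted spectrum best
    matches the measurement in the least-squares sense."""
    def sse(d):
        pred = model_spectrum(d, wavelengths_nm)
        return sum((m - p) ** 2 for m, p in zip(measured, pred))
    return min(candidates_um, key=sse)

wavelengths = list(range(400, 701, 10))          # visible band, nm
measured = model_spectrum(0.8, wavelengths)      # synthetic "measurement"
candidates = [0.413, 0.8, 2.0]                   # the sphere sizes studied
best = fit_diameter(measured, wavelengths, candidates)
```

In practice the candidate set would be a fine grid (or a continuous optimiser) rather than three values, and the forward model would come from scattering theory, but the residual-minimisation structure is the same.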
Procedia PDF Downloads 82
272 “I” on the Web: Social Penetration Theory Revised
Authors: Dr. Dionysis Panos, Dpt. of Communication & Internet Studies, Cyprus University of Technology
Abstract:
The widespread use of New Media, and particularly Social Media, through fixed or mobile devices, has changed in a staggering way our perception of what is “intimate" and "safe" in interpersonal communication and social relationships, and what is not. The distribution of self- and identity-related information in communication now evolves under new and different conditions and contexts. Consequently, this new framework forces us to rethink processes and mechanisms, such as what "exposure" means in interpersonal communication contexts, how the distinction between the "private" and the "public" nature of information is negotiated online, and how the "audiences" we interact with are understood and constructed. Drawing from an interdisciplinary perspective that combines sociology, communication psychology, media theory, and new media and social networks research, as well as from the empirical findings of longitudinal comparative research, this work proposes an integrative model for comprehending the mechanisms of personal information management in interpersonal communication, applicable to both online (Computer-Mediated) and offline (Face-To-Face) communication. The presentation is based on conclusions drawn from a longitudinal qualitative research study with 458 new media users from 24 countries spanning almost a decade. The main conclusions include: (1) There is a clear, evidenced shift in users’ perception of the "security" and "familiarity" of the Web between the pre- and post-Web 2.0 eras; the role of Social Media in this shift was catalytic. (2) Basic Web 2.0 applications changed dramatically the nature of the Internet itself, transforming it from a place reserved for “elite users / technical knowledge keepers" into a place of "open sociability” for anyone. (3) Web 2.0 and Social Media brought about a significant change in the concept of the “audience” we address in interpersonal communication.
The previous "general and unknown audience" of personal home pages has been converted into an "individual and personal" audience chosen by the user according to various criteria. (4) The way we negotiate the 'private' and 'public' nature of personal information has changed in a fundamental way. (5) The distinctive features of the mediated environment of online communication, and the critical changes that have occurred since the advance of Web 2.0, lead to the need to reconsider and update the theoretical models and analysis tools we use to comprehend the mechanisms of interpersonal communication and personal information management. Therefore, a new model is proposed here for understanding the way interpersonal communication evolves, based on a revision of social penetration theory. Keywords: new media, interpersonal communication, social penetration theory, communication exposure, private information, public information
Procedia PDF Downloads 371
271 Development of Generally Applicable Intravenous to Oral Antibiotic Switch Therapy Criteria
Authors: H. Akhloufi, M. Hulscher, J. M. Prins, I. H. Van Der Sijs, D. Melles, A. Verbon
Abstract:
Background: A timely switch from intravenous (IV) to oral antibiotic therapy has many advantages, such as a reduced incidence of IV-line-related infections, a decreased hospital length of stay, and less workload for healthcare professionals, with equivalent patient safety. Additionally, numerous studies have demonstrated significant cost decreases from a timely IV to oral antibiotic therapy switch, while maintaining efficacy and safety. However, considerable variation in IV to oral antibiotic switch therapy criteria has been described in the literature. Here, we report the development of a set of IV to oral switch criteria that are generally applicable in all hospitals. Material/methods: A RAND-modified Delphi procedure composed of 3 rounds was used. This Delphi procedure is a widely used structured process to develop consensus using multiple rounds of questionnaires within a qualified panel of selected experts. The international expert panel was multidisciplinary, composed of clinical microbiologists, infectious disease consultants, and clinical pharmacists. This panel of 19 experts appraised 6 major IV to oral antibiotic switch therapy criteria and operationalized these criteria using 41 measurable conditions extracted from the literature. The procedure to select a concise set of IV to oral switch criteria included 2 questionnaire rounds and a face-to-face meeting. Results: The procedure resulted in the selection of 16 measurable conditions, which operationalize 6 major IV to oral antibiotic switch therapy criteria. The following 6 major switch therapy criteria were selected: (1) Vital signs should be good, or improving if abnormal. (2) Signs and symptoms related to the infection have to be resolved or improved. (3) The gastrointestinal tract has to be intact and functioning. (4) The oral route should not be compromised. (5) Absence of contraindicated infections.
(6) An oral variant of the antibiotic with good bioavailability has to exist. Conclusions: This systematic stepwise method, which combined evidence and expert opinion, resulted in a feasible set of 6 major IV to oral antibiotic switch therapy criteria, operationalized by 16 measurable conditions. This set of early antibiotic IV to oral switch criteria can be used in daily practice for all adult hospital patients. Future use in audits and as rules in computer-assisted decision support systems will lead to improvement of antimicrobial stewardship programs. Keywords: antibiotic resistance, antibiotic stewardship, intravenous to oral, switch therapy
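The abstract anticipates using the criteria as rules in computer-assisted decision support. A minimal sketch of how the 6 major criteria might be encoded as an all-must-pass rule check; the field names are illustrative assumptions, not the published measurable conditions:

```python
# The six major switch criteria as yes/no conditions (assumed names).
SWITCH_CRITERIA = (
    "vital_signs_good_or_improving",           # criterion 1
    "infection_signs_resolved_or_improved",    # criterion 2
    "gi_tract_intact_and_functioning",         # criterion 3
    "oral_route_not_compromised",              # criterion 4
    "no_contraindicated_infection",            # criterion 5
    "oral_variant_with_good_bioavailability",  # criterion 6
)

def eligible_for_oral_switch(assessment):
    """A patient qualifies only if every major criterion is met;
    any missing answer is treated conservatively as not met."""
    return all(assessment.get(c, False) for c in SWITCH_CRITERIA)

ready = {c: True for c in SWITCH_CRITERIA}
not_ready = dict(ready, gi_tract_intact_and_functioning=False)
```

In a real system each of these booleans would itself be derived from the 16 measurable conditions (e.g. temperature and blood pressure thresholds feeding the vital-signs criterion), but the conjunction-of-criteria structure would be the same.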
Procedia PDF Downloads 356