Search results for: computer aided instructional package
679 Analyzing Information Management in Science and Technology Institute Libraries in India
Authors: P. M. Naushad Ali
Abstract:
India’s strength in basic research is internationally recognized. Science and Technology research in India is performed by six distinct types of bodies: cooperative research associations, autonomous research councils, institutes under ministries, industrial R&D establishments, universities, and private institutions. Almost all of these institutions have a well-established library/information center to cater to the information needs of their users, namely scientists and technologists. Information Management (IM) comprises the disciplines concerned with the study and the effective and efficient management of information, resources, products, and services, as well as an understanding of the technologies involved and the people engaged in this activity. Libraries and information centers in India are also using modern technologies to manage their various activities and services and thus serve their users better. Science and Technology libraries in the country are usually better equipped, because investment in Science and Technology is much larger than in other fields; most of them are therefore equipped with modern IT-based tools for handling and managing library services. Science and Technology libraries thus have all the characteristics of a model organization in which computer application is most likely to succeed, yet the adoption of these IT-based management tools is not uniform across the libraries. The present study assesses the level of use of IT-based management tools for information management in Science and Technology libraries in India. Questionnaire, interview, observation, and document review techniques were used for data collection.
Finally, the author discusses the findings of the study and puts forward some suggestions to improve the quality of Science and Technology institute library services in India.
Keywords: information management, science and technology libraries, India, IT-based tools
Procedia PDF Downloads 394
678 Importance of Human Factors on Cybersecurity within Organizations: A Study of Attitudes and Behaviours
Authors: Elham Rajabian
Abstract:
The rise in cybersecurity incidents is a growing threat to most organisations in general, while the impact of each incident is unique to the organization concerned. Behavioural science needs to concentrate on employees’ behaviour in order to prepare key security mitigation options against cybersecurity incidents. There are noticeable differences among users of a computer system in terms of complying with security behaviours. These differences can be discussed under several headings, such as procrastination over tasks that must be done, the tendency to act without thinking, thinking ahead to the unexpected implications of present-day issues, and risk-taking in security policy compliance. In this article, we introduce high-profile cyber-attacks and their impact on weakening cyber resiliency in organizations. We also give attention to the human errors that influence network security, discussing them as psychological matters bearing on compliance with security policies. The organizational challenges are studied in the related work section in order to shape a sustainable cyber risk management approach. Insiders’ behaviours are examined as a cybersecurity gap, from which proper cyber resiliency is drawn in section 3. We survey best cybersecurity practices by discussing four CIS challenges in section 4, and provide a guideline and metrics for measuring cyber resilience in organizations in section 5. Finally, we give some recommendations for building a cybersecurity culture grounded in individual behaviours.
Keywords: cyber resilience, human factors, cybersecurity behavior, attitude, usability, security culture
Procedia PDF Downloads 97
677 Investigation of Cost Effective Double Layered Slab for γ-Ray Shielding
Authors: Kulwinder Singh Mann, Manmohan Singh Heer, Asha Rani
Abstract:
The safe storage of radioactive materials has become an important issue. Nuclear engineering necessitates the safe handling of radioactive materials emitting high-energy gamma-rays, and the hazards involved demand suitable shielded enclosures. With the growing use of nuclear energy to meet the increasing demand for power, there is a need to investigate the shielding behavior of cost-effective shielded enclosures (CESE) made from clay bricks (CB) and fire bricks (FB). Compared with lead bricks (the conventional shielding), CESE are the preferred choice in nuclear waste management. The objective of the present investigation is to evaluate the double-layered transmission exposure buildup factors (DLEBF) for gamma-rays for CESE in the energy range 0.5-3 MeV. For the necessary computations of shielding parameters, two computer programs (GRIC-toolkit and BUF-toolkit) have been designed, drawing on the extensive existing data on gamma-ray interaction parameters for all elements of the periodic table. It has been found that the two-layered slabs shield gamma-rays more effectively in the orientation CB followed by FB than in the reverse order. It is concluded that the arrangement FB followed by CB reduces the leakage of scattered gamma-rays from the radioactive source.
Keywords: buildup factor, clay bricks, fire bricks, nuclear waste management, radiation protective double layered slabs
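The quantity being evaluated can be sketched in miniature: for a narrow beam crossing stacked slabs, the transmitted intensity follows I = B · I₀ · exp(−Σ μᵢxᵢ), where B is the buildup factor accounting for scattered photons. The attenuation coefficients in the example are placeholder values for illustration, not measured data for clay or fire bricks.

```python
import math

def transmitted_intensity(i0, layers, buildup=1.0):
    """Transmitted gamma-ray intensity through stacked slabs.

    i0      -- incident intensity
    layers  -- list of (mu_per_cm, thickness_cm) pairs, source side first
    buildup -- buildup factor B accounting for scattered photons (B >= 1)
    """
    total = sum(mu * x for mu, x in layers)  # sum of mu_i * x_i over layers
    return buildup * i0 * math.exp(-total)

# Placeholder coefficients: NOT measured values for clay or fire bricks.
clay_then_fire = [(0.15, 10.0), (0.20, 10.0)]
attenuated = transmitted_intensity(1000.0, clay_then_fire, buildup=2.5)
```

Note that in this single-exponential model the layer order does not matter; the paper's finding that CB-then-FB shields better than the reverse enters precisely through the order-dependent buildup factor B, which their BUF-toolkit computes.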
Procedia PDF Downloads 408
676 The Factors for Developing Trainers in Auto Parts Manufacturing Factories at Amata Nakon Industrial Estate in Cholburi Province
Authors: Weerakarj Dokchan
Abstract:
The purpose of this research is to identify the factors for developing trainers in the auto parts manufacturing factories (AMF) in the Amata Nakon Industrial Estate, Cholburi. The study population included 148 operators who completed the questionnaires and 10 trainers who provided information in interviews. The research statistics consisted of percentage, mean, standard deviation, and step-wise multiple linear regression analysis. The analysis revealed that the development factors for trainers in AMF consisted of three main competencies comprising eight distinct sub-factors: 1) knowledge competency, consisting of four sub-factors (arrangement of critical thinking, organizational loyalty, working experience of the trainers, and analysis of behavior and work), which together predicted the success of the trainers at 55.60%; 2) skill competency, consisting of four sub-factors (arrangement of critical thinking, organizational loyalty, analysis of behavior and work, and development of emotional quotient), which predicted the success of the trainers in the skill aspect at 55.90%; and 3) attitude competency, consisting of four sub-factors (arrangement of critical thinking, intention of trainees, computer competency, and teaching psychology), which predicted the success of the trainers in the attitude aspect at 58.50%.
Keywords: development factors, trainer development, trainer competencies, auto parts manufacturing factory (AMF), Amata Nakon Industrial Estate, Cholburi
Procedia PDF Downloads 306
675 Cantilever Secant Pile Constructed in Sand: Numerical Comparative Study and Design Aids – Part II
Authors: Khaled R. Khater
Abstract:
All civil engineering projects include excavation work and therefore need retaining structures. Cantilever secant pile walls are an economical supporting system for excavation depths up to 5.0 m. The parameters controlling wall tip displacement are the focus of this paper, so two analysis techniques have been investigated and arbitrated: the conventional method and finite element analysis. Accordingly, two computer programs have been used, an Excel sheet and Plaxis-2D. Two soil models have been used throughout this study: the Mohr-Coulomb model and the isotropic hardening model. Two soil densities have been considered, i.e., loose and dense sand. Ten wall rigidities have been analyzed, covering the range from perfectly flexible to completely rigid walls. Three excavation depths, i.e., 3.0 m, 4.0 m, and 5.0 m, were tested to cover the practical range of secant piles. This work offers useful guidance on secant piles to assist designers and specification committees, and finite element analysis with isotropic hardening is recommended as the fair judge when two designs conflict. A rational procedure using empirical equations has been suggested to upgrade the conventional method for predicting the wall tip displacement ‘δ’, and a reasonable limitation of ‘δ’ as a function of the excavation depth ‘h’ has been suggested. It has also been found that, beyond a certain penetration depth, any further increase does not reduce the wall tip displacement, i.e., it amounts to over-design and is uneconomical.
Keywords: design aids, numerical analysis, secant pile, wall tip displacement
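The conventional method mentioned above reduces, in its simplest form, to a limit-equilibrium moment balance about the wall tip. The sketch below assumes dry cohesionless soil, simplified Rankine active/passive pressures, and no factor of safety; it is an illustration of the idea, not the paper's procedure (real designs increase the computed depth and check displacement separately).

```python
import math

def required_penetration(h, phi_deg=30.0, tol=1e-6):
    """Penetration depth d of a cantilever wall in sand from moment balance.

    Taking moments about the wall tip, the driving (active) moment
    gamma*Ka*(h+d)^3/6 must equal the resisting (passive) moment
    gamma*Kp*d^3/6; gamma/6 cancels, leaving Kp*d^3 = Ka*(h+d)^3,
    solved here by bisection.
    """
    ka = math.tan(math.radians(45.0 - phi_deg / 2.0)) ** 2  # Rankine active
    kp = math.tan(math.radians(45.0 + phi_deg / 2.0)) ** 2  # Rankine passive
    f = lambda d: kp * d ** 3 - ka * (h + d) ** 3
    lo, hi = 0.0, 10.0 * h  # f(lo) < 0, f(hi) > 0 brackets the root
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For phi = 30 degrees (Ka/Kp = 1/9), the closed form d = h·r/(1−r) with r = (Ka/Kp)^(1/3) gives d ≈ 0.926·h, which the bisection reproduces.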
Procedia PDF Downloads 191
674 A Framework for Chinese Domain-Specific Distant Supervised Named Entity Recognition
Abstract:
Knowledge graphs have become a new form of knowledge representation. However, there is no consensus on a plausible definition of entities and relationships in domain-specific knowledge graphs, and, owing to several limitations and deficiencies, existing approaches to recognizing domain-specific entities and relationships are far from perfect. In particular, named entity recognition in Chinese domains is a critical task for natural language processing applications, but a bottleneck for Chinese named entity recognition in new domains is the lack of annotated data. To address this challenge, a domain-specific distant supervised named entity recognition framework is proposed. The framework is divided into two stages: first, a distant supervised corpus is generated by an entity linking model based on a graph attention neural network; second, the generated corpus is used to train the distant supervised named entity recognition model, which then extracts named entities. The linking model is verified on the CCKS2019 entity linking corpus, where its F1 value is 2% higher than that of the benchmark method. A re-pre-trained BERT language model is added to the benchmark method, and the results show that it is better suited to distant supervised named entity recognition tasks. Finally, the framework is applied to the computer domain, and the results show that it can extract domain named entities.
Keywords: distant named entity recognition, entity linking, knowledge graph, graph attention neural network
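The corpus-generation stage can be illustrated in miniature: distant supervision auto-labels raw text by matching it against an existing knowledge resource instead of relying on hand annotation. The longest-match gazetteer lookup below is a deliberately simple stand-in for the paper's graph-attention entity linking model, which additionally disambiguates matches using context; the tokens and entity types are invented for the example.

```python
def distant_labels(tokens, gazetteer):
    """Auto-label a token sequence with BIO tags by longest dictionary match.

    tokens    -- list of word tokens
    gazetteer -- dict mapping entity surface forms to entity types
    """
    labels = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        matched = False
        for j in range(len(tokens), i, -1):  # try the longest span first
            span = " ".join(tokens[i:j])
            if span in gazetteer:
                etype = gazetteer[span]
                labels[i] = "B-" + etype
                for k in range(i + 1, j):
                    labels[k] = "I-" + etype
                i = j
                matched = True
                break
        if not matched:
            i += 1
    return labels
```

The resulting silver-standard tags would then serve as training data for the NER model, which is exactly the role the generated corpus plays in the framework.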
Procedia PDF Downloads 95
673 Monocular Depth Estimation Benchmarking with Thermal Dataset
Authors: Ali Akyar, Osman Serdar Gedik
Abstract:
Depth estimation is a challenging computer vision task that involves estimating the distance between objects in a scene and the camera: it predicts how far each pixel in the 2D image is from the capturing point. Several important Monocular Depth Estimation (MDE) studies are based on Vision Transformers (ViT), and we benchmark three major ones. The first aims to build a simple and powerful foundation model that deals with any image under any condition. The second proposes a method that mixes multiple datasets during training together with a robust training objective. The third combines generalization performance with state-of-the-art results on specific datasets. Although there are studies on thermal images too, we chose to benchmark these three non-thermal, state-of-the-art studies on a hybrid image dataset captured with Multi-Spectral Dynamic Imaging (MSX) technology. MSX produces detailed thermal images by bringing the thermal and visual spectrums together: our dataset images are not as blurry and poorly detailed as ordinary thermal images, but neither are they taken under the ideal lighting conditions of RGB images. We compared the three methods under test on our thermal dataset, which had not been done before. Additionally, we propose an image enhancement deep learning model for thermal data that helps extract the features required for monocular depth estimation. The experimental results demonstrate that, after applying our proposed model, the performance of the three methods under test increased significantly for thermal image depth prediction.
Keywords: monocular depth estimation, thermal dataset, benchmarking, vision transformers
Procedia PDF Downloads 34
672 Helicopter Exhaust Gases Cooler in Terms of Computational Fluid Dynamics (CFD) Analysis
Authors: Mateusz Paszko, Ksenia Siadkowska
Abstract:
Due to their low-altitude and relatively low-speed flight, helicopters are easy targets for modern combat assets, e.g., infrared-guided missiles, and current techniques aim to increase the combat effectiveness of military helicopters. Protecting a helicopter in flight from early detection, tracking, and finally destruction can be achieved in many ways. One of them is cooling the hot exhaust gases emitted from the engines to the atmosphere in special heat exchangers. Nowadays, this process is realized in ejective coolers, where strong heat and momentum exchange takes place between the hot exhaust gases and cold air drawn from the atmosphere. The flow of air, exhaust gases, and the mixture of the two, together with the heat transfer between the cold air and hot exhaust gases, is governed by differential equations: mass transport (flow continuity), ejection of cold air by the expanding exhaust gases, conservation of momentum and energy, and the physical relationship equations. Calculating these processes in an ejective cooler by means of classical mathematical analysis is extremely hard or even impossible, so it is necessary to apply a numerical approach with modern numerical computer programs. The paper discusses the general usability of Computational Fluid Dynamics (CFD) in the process of designing an ejective exhaust gas cooler cooperating with a helicopter turbine engine. In this work, the CFD calculations have been performed for an ejective-based cooler cooperating with the PA W3 helicopter’s engines.
Keywords: aviation, CFD analysis, ejective-cooler, helicopter techniques
Procedia PDF Downloads 334
671 Mitigation of Interference in Satellite Communications Systems via a Cross-Layer Coding Technique
Authors: Mario A. Blanco, Nicholas Burkhardt
Abstract:
An important problem in satellite communication systems operating in the Ka and EHF frequency bands is the overall degradation in the link performance of mobile terminals due to various impairments of the link/channel, such as fading, blockage of the link to the satellite (especially in urban environments), and intentional as well as other types of interference. In this paper, we focus primarily on the interference problem, and we develop a very efficient and cost-effective solution based on the use of fountain codes. We first introduce a satellite communications (SATCOM) terminal uplink interference channel model of the type classically used against communication systems employing spread-spectrum waveforms. We then consider the use of fountain codes, with a focus on Raptor codes, as our main technique for mitigating the degradation in link/receiver performance due to the interference signal. The performance of the receiver is obtained in terms of the average bit and message error rates as functions of the bit energy-to-noise density ratio, Eb/N0, and other parameters of interest, via a combination of analysis and computer simulation. We show that fountain codes are extremely effective in overcoming the effects of intentional interference on the performance of the receiver and the associated communication links, and that the technique extends to other types of SATCOM channel degradation, such as channel fading, shadowing, and hard blockage of the uplink signal.
Keywords: SATCOM, interference mitigation, fountain codes, turbo codes, cross-layer
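The rateless idea behind fountain codes can be illustrated with a toy LT-style encoder and peeling decoder: each coded symbol is the XOR of a random subset of source blocks, and the receiver recovers the blocks from any sufficiently large set of symbols. This is a sketch only; the uniform degree distribution below stands in for the robust-soliton and pre-coded constructions that actual Raptor codes use, and block values are plain integers rather than packets.

```python
import random

def lt_encode(blocks, n_symbols, seed=7):
    """Produce n_symbols fountain-coded symbols from k source blocks.

    Each symbol is (index_set, XOR of the blocks at those indices).
    """
    rng = random.Random(seed)
    k = len(blocks)
    symbols = []
    for _ in range(n_symbols):
        degree = rng.randint(1, k)  # toy uniform degree, not robust soliton
        idx = frozenset(rng.sample(range(k), degree))
        value = 0
        for i in idx:
            value ^= blocks[i]
        symbols.append((idx, value))
    return symbols

def lt_decode(symbols, k):
    """Peeling decoder: resolve degree-1 symbols, substitute, repeat."""
    decoded = {}
    work = [[set(idx), val] for idx, val in symbols]
    changed = True
    while changed and len(decoded) < k:
        changed = False
        for sym in work:
            idx, val = sym
            for i in [i for i in idx if i in decoded]:
                val ^= decoded[i]   # strip already-known blocks
                idx.discard(i)
            sym[1] = val
            if len(idx) == 1:
                i = next(iter(idx))
                if i not in decoded:
                    decoded[i] = val
                    changed = True
    return [decoded.get(i) for i in range(k)]
```

The ratelessness is what makes the scheme attractive under interference: the transmitter can keep emitting fresh symbols until the receiver has collected enough clean ones, with no feedback about which symbols were lost.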
Procedia PDF Downloads 362
670 Design and Simulation of Low Cost Boost-Half-Bridge Microinverter with Grid Connection
Authors: P. Bhavya, P. R. Jayasree
Abstract:
This paper presents a low-cost, transformer-isolated boost half-bridge micro-inverter for a single-phase grid-connected PV system. Since the output voltage of a single PV panel is as low as 20-50 V, a high-voltage-gain inverter is required to connect the panel to the single-phase grid. The micro-inverter has two stages: an isolated DC-DC converter stage and an inverter stage with a DC link. To achieve MPPT and to step up the PV voltage to the DC link voltage, a transformer-isolated boost half-bridge DC-DC converter is used. To deliver a synchronised sinusoidal current with unity power factor to the grid, a pulse-width-modulated full-bridge inverter with an LCL filter is used. A variable-step-size Maximum Power Point Tracking (MPPT) method is adopted so that both fast tracking and high MPPT efficiency are obtained. AC voltage meeting the grid requirement is obtained at the output of the inverter, and a high power factor (>0.99) is achieved at both heavy and light loads. This paper gives the results of a computer simulation of the grid-connected solar PV system using MATLAB/Simulink and the SimPowerSystems toolbox.
Keywords: boost-half-bridge, micro-inverter, maximum power point tracking, grid connection, MATLAB/Simulink
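The variable-step MPPT idea can be sketched as a perturb-and-observe loop whose perturbation size scales with the measured |dP/dV|: large steps far from the maximum power point for fast tracking, small steps near it for low oscillation. The quadratic power curve and all tuning constants below are illustrative stand-ins, not the panel model or the exact step law of the paper.

```python
def pv_power(v):
    """Toy PV power curve with its maximum power point at v = 30 V."""
    return max(0.0, 100.0 - 0.2 * (v - 30.0) ** 2)

def variable_step_po_mppt(v_start, iters=200, scale=0.5, max_step=2.0, min_step=0.01):
    """Perturb-and-observe MPPT with a step proportional to |dP/dV|."""
    v = v_start
    p = pv_power(v)
    step, direction = 1.0, 1.0
    for _ in range(iters):
        v_next = v + direction * step
        p_next = pv_power(v_next)
        dp = p_next - p
        if dp < 0:
            direction = -direction          # power fell: perturb the other way
        slope = abs(dp) / step              # estimate of |dP/dV|
        step = min(max_step, max(min_step, scale * slope))
        v, p = v_next, p_next
    return v
```

Because the slope of the P-V curve vanishes at the maximum power point, the step shrinks toward `min_step` exactly where a fixed-step tracker would keep oscillating, which is the fast-tracking/high-efficiency trade-off the abstract refers to.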
Procedia PDF Downloads 341
669 Accentuation Moods of Blaming Utterances in Egyptian Arabic: A Pragmatic Study of Prosodic Focus
Authors: Reda A. H. Mahmoud
Abstract:
This paper investigates the pragmatic meaning of prosodic focus through four accentuation moods of blaming utterances in Egyptian Arabic. Prosodic focus yields different pragmatic meanings when the speaker utters the same blaming expression in different emotional moods: angry, mocking, frustrated, and informative. The main objective of this study is to interpret the meanings of these four accentuation moods in relation to their illocutionary forces and perlocutionary effects, the integrated features of prosodic focus (e.g., tone movement distributions, pitch accents, lengthening of vowels, deaccentuation of certain syllables/words, and tempo), and the consonance between these prosodic features and certain lexico-grammatical components in communicating the speaker’s intentions. The data on blaming utterances were collected via elicitation and pre-recorded material, and the utterances were selected on the criteria of lexical and prosodic regularity, then processed and verified with three computer programs: Praat, Speech Analyzer, and Spectrogram Freeware. A dual pragmatic approach is established to interpret expressive blaming utterances and their lexico-grammatical distributions in terms of intonational focus structure units. The pragmatic component of this approach explains the variable psychological attitudes expressed through blaming and their effects, whereas the analysis of prosodic focus structure describes the intonational contours of blaming utterances and other prosodic features. The study concludes that every accentuation mood has a distinct prosodic configuration which influences the listener’s interpretation of the pragmatic meanings of blaming utterances.
Keywords: pragmatics, pragmatic interpretation, prosody, prosodic focus
Procedia PDF Downloads 154
668 Hand Gesture Interface for PC Control and SMS Notification Using MEMS Sensors
Authors: Keerthana E., Lohithya S., Harshavardhini K. S., Saranya G., Suganthi S.
Abstract:
In an epoch of expanding human-machine interaction, the development of innovative interfaces that bridge the gap between physical gestures and digital control has gained significant momentum. This study introduces a distinct solution that leverages a combination of MEMS (Micro-Electro-Mechanical Systems) sensors, an Arduino Mega microcontroller, and a PC to create a hand gesture interface for PC control and SMS notification. The core of the system is an ADXL335 MEMS accelerometer sensor integrated with an Arduino Mega, which communicates with a PC via a USB cable. The ADXL335 provides real-time acceleration data, which is processed by the Arduino to detect specific hand gestures. These gestures, such as left, right, up, down, or custom patterns, are interpreted by the Arduino, and corresponding actions are triggered. In the context of SMS notifications, when a gesture indicative of a new SMS is recognized, the Arduino relays this information to the PC through the serial connection. The PC application, designed to monitor the Arduino's serial port, displays these SMS notifications in the serial monitor. This study offers an engaging and interactive means of interfacing with a PC by translating hand gestures into meaningful actions, opening up opportunities for intuitive computer control. Furthermore, the integration of SMS notifications adds a practical dimension to the system, notifying users of incoming messages as they interact with their computers. The use of MEMS sensors, Arduino, and serial communication serves as a promising foundation for expanding the capabilities of gesture-based control systems.
Keywords: hand gestures, multiple cables, serial communication, sms notification
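The gesture-detection step can be sketched as a simple threshold classifier over calibrated accelerometer tilt. The axis conventions and the 0.5 g threshold are assumptions for illustration; a real ADXL335 setup would first convert raw ADC counts to g on the Arduino and debounce the readings before emitting a gesture over the serial link.

```python
def classify_gesture(x, y, threshold=0.5):
    """Map a two-axis tilt reading (in g) to a gesture label.

    Returns 'rest' inside the dead zone, otherwise the direction of
    the dominant axis: left/right for x, up/down for y.
    """
    if abs(x) < threshold and abs(y) < threshold:
        return "rest"
    if abs(x) >= abs(y):
        return "right" if x > 0 else "left"
    return "up" if y > 0 else "down"
```

On the PC side, the same labels arriving over the serial port would be dispatched to the corresponding control actions or to the SMS-notification display.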
Procedia PDF Downloads 71
667 Visual Speech Perception of Arabic Emphatics
Authors: Maha Saliba Foster
Abstract:
Speech perception has been recognized as a bi-sensory process involving the auditory and visual channels. Compared to the auditory modality, the contribution of the visual signal to speech perception is not well understood. Studying how the visual modality affects speech recognition can have pedagogical implications for second language learning, as well as clinical applications in speech therapy. The current investigation explores the potential effect of visual speech cues on the perception of Arabic emphatics (AEs). The corpus consists of 36 minimal pairs, each containing two contrasting consonants, an AE versus a non-emphatic (NE). Videos of four Lebanese speakers were edited to give perceivers partial views of facial regions: lips only, lips-cheeks, lips-chin, lips-cheeks-chin, and lips-cheeks-chin-neck. In the absence of any auditory information, relying solely on visual speech, perceivers were above chance at correctly identifying AEs and NEs across vowel contexts. Moreover, the models were able to predict the probability of perceivers’ accuracy in identifying some of the COIs produced by certain speakers, and the results showed an overlap between the measurements selected by the computer and those selected by human perceivers. The lack of a significant face effect on the perception of AEs seems to point to the lips, present in all of the videos, as the most important and often sufficient facial feature for emphasis recognition. Future investigations will aim at refining the analyses of the visual cues used by perceivers by applying Principal Component Analysis and including the time evolution of facial feature measurements.
Keywords: Arabic emphatics, machine learning, speech perception, visual speech perception
Procedia PDF Downloads 307
666 Relationship between Right Brain and Left Brain Dominance and Intonation Learning
Authors: Mohammad Hadi Mahmoodi, Soroor Zekrati
Abstract:
The aim of this study was to investigate the relationship between hemispheric dominance and the intonation learning of Iranian EFL students. To this end, the sample comprised 52 female students from three levels (beginner, elementary, and intermediate) at the Paradise Institute and 18 male university students at Bu-Ali Sina University. To help students learn to apply intonation correctly in their everyday speech, the study adopted an interactive approach and provided students with a visual aid: using the 'Speech Analyzer' software, they could see the intonation pattern on a computer screen. The software was also used to record subjects’ voices and compare them with the original intonation pattern. The Edinburgh Handedness Questionnaire (EHD), which ranges from -100 for strong left-handedness to +100 for strong right-handedness, was used to establish the hemispheric dominance of each student. The result of an independent-samples t-test indicated that girls learned the intonation patterns better than boys, and that right-brained students significantly outperformed left-brained ones. Using one-way ANOVA, a significant difference between the three proficiency levels was also found. The post hoc Scheffé test showed that the difference lay between the intermediate and elementary levels and between the intermediate and beginner levels, with no significant difference between the elementary and beginner levels. The findings might provide researchers with helpful implications and useful directions for future investigation into the relationship between mind and second language learning.
Keywords: intonation, hemispheric dominance, visual aid, language learning, second language learning
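The handedness score behind the −100 to +100 range mentioned above is a simple ratio over the questionnaire items. A minimal sketch, assuming the usual laterality-quotient formula LQ = 100·(R − L)/(R + L) over right- and left-hand item responses (the abstract states the score range but not the formula):

```python
def laterality_quotient(right, left):
    """Handedness laterality quotient from right- and left-hand response counts.

    Returns a value in [-100, 100]: -100 = strongly left-handed,
    +100 = strongly right-handed, 0 = no preference.
    """
    if right + left == 0:
        raise ValueError("no responses recorded")
    return 100.0 * (right - left) / (right + left)
```

A cutoff on this quotient (e.g., its sign) is what lets a study split participants into right-brain- and left-brain-dominant groups for comparisons like the t-test reported here.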
Procedia PDF Downloads 519
665 Tomato-Weed Classification by RetinaNet One-Step Neural Network
Authors: Dionisio Andujar, Juan lópez-Correa, Hugo Moreno, Angela Ri
Abstract:
The increased number of weeds in tomato crops greatly lowers yields. Weed identification by means of machine learning is important for carrying out site-specific control, and the latest advances in computer vision are a powerful tool for facing the problem. The analysis of RGB (red, green, blue) images through artificial neural networks has developed rapidly in the past few years, providing new methods for weed classification. The development of algorithms for crop and weed species classification aims at a real-time classification system using object detection algorithms based on convolutional neural networks. The study site was located in commercial corn fields. The classification system has been tested, and the procedure can detect and classify weed seedlings in tomato fields. The input to the neural network was a set of 10,000 RGB images with a natural infestation of Cyperus rotundus L., Echinochloa crus-galli L., Setaria italica L., Portulaca oleracea L., and Solanum nigrum L. The validation process was done with a random selection of RGB images containing the aforementioned species. The mean average precision (mAP) was established as the metric for object detection, and the results showed agreements higher than 95%. The system will provide the input for an online spraying system; thus, this work plays an important role in site-specific weed management by reducing herbicide use in a single step.
Keywords: deep learning, object detection, cnn, tomato, weeds
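The mAP metric used for validation rests on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal sketch of that building block follows, with greedy one-to-one matching at a fixed IoU threshold; a full mAP computation would additionally rank predictions by confidence and average precision over recall levels and classes.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_at_iou(predictions, ground_truth, threshold=0.5):
    """Fraction of predicted boxes greedily matched to a ground-truth box."""
    matched = set()
    tp = 0
    for pred in predictions:
        for gi, gt in enumerate(ground_truth):
            if gi not in matched and iou(pred, gt) >= threshold:
                matched.add(gi)
                tp += 1
                break
    return tp / len(predictions) if predictions else 0.0
```

In a detector like RetinaNet, each prediction also carries a class label and a confidence score, so matching is done per species class, which is how per-species agreement figures such as the >95% reported here are obtained.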
Procedia PDF Downloads 106
664 Progress in Combining Image Captioning and Visual Question Answering Tasks
Authors: Prathiksha Kamath, Pratibha Jamkhandi, Prateek Ghanti, Priyanshu Gupta, M. Lakshmi Neelima
Abstract:
Combining image captioning and Visual Question Answering (VQA) tasks has emerged as a new and exciting research area. The image captioning task involves generating a textual description that summarizes the content of an image; VQA aims to answer a natural language question about the image. Both tasks span computer vision and natural language processing (NLP): they require a deep understanding of the content of the image and of the semantic relationships within it, together with the ability to generate a response in natural language. Both have grown remarkably with the rapid advancement of deep learning. In this paper, we present a comprehensive review of recent progress in combining image captioning and VQA. We first discuss the two tasks individually and then the various ways in which they can be integrated. We also analyze the challenges associated with these tasks and ways to overcome them, and finally discuss the datasets and evaluation metrics used. The paper concludes with the need to generate captions based on context, captions able to answer the most likely questions about the image, so as to aid the VQA task. Overall, this review highlights the significant progress made in combining image captioning and VQA, as well as the ongoing challenges and opportunities for further research in this rapidly evolving field, which has the potential to improve the performance of real-world applications such as autonomous vehicles, robotics, and image search.
Keywords: image captioning, visual question answering, deep learning, natural language processing
Procedia PDF Downloads 74
663 A Deep Learning Approach to Online Social Network Account Compromisation
Authors: Edward K. Boahen, Brunel E. Bouya-Moko, Changda Wang
Abstract:
The major threat to online social network (OSN) users is account compromisation. Spammers now spread malicious messages by exploiting the trust relationship established between account owners and their friends. The challenge for service providers in detecting a compromised account is validating the trusted relationships established between the account owners, their friends, and the spammers. Another challenge is the increased human interaction required for feature selection. Available research on supervised machine learning has limitations with feature selection and with accounts that cannot be profiled, such as application programming interface (API) accounts. This paper therefore discusses the various behaviours of OSN users and the current approaches to detecting a compromised OSN account, emphasizing their limitations and challenges. We propose a deep learning approach that addresses and resolves the constraints faced by previous schemes. We detail our proposed optimized nonsymmetric deep auto-encoder (OPT_NDAE) for unsupervised feature learning, which reduces the level of human interaction required in the selection and extraction of features. We evaluated our proposed classifier using the NSL-KDD and KDDCUP'99 datasets in a GUI-enabled Weka application. The results indicate that our proposed approach outperforms most traditional schemes in compromised OSN account detection, with an accuracy rate of 99.86%.
Keywords: computer security, network security, online social network, account compromisation
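The unsupervised feature-learning step can be illustrated with a drastically reduced stand-in for the paper's OPT_NDAE: a single-hidden-layer autoencoder trained by plain gradient descent, whose bottleneck activations serve as learned features for a downstream classifier. Layer sizes, learning rate, and the synthetic data are all illustrative assumptions.

```python
import numpy as np

def train_autoencoder(X, hidden=2, lr=0.1, epochs=500, seed=0):
    """Single-hidden-layer autoencoder trained by gradient descent on MSE.

    Returns an `encode` function (the learned feature extractor) and the
    per-epoch reconstruction losses.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.1, (hidden, d)); b2 = np.zeros(d)
    losses = []
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # nonlinear encoder
        Y = H @ W2 + b2                     # linear decoder
        err = Y - X
        losses.append(float(np.mean(err ** 2)))
        g = 2.0 * err / err.size            # dLoss/dY for the MSE loss
        gW2, gb2 = H.T @ g, g.sum(axis=0)
        gH = g @ W2.T
        gZ = gH * (1.0 - H ** 2)            # back through tanh
        gW1, gb1 = X.T @ gZ, gZ.sum(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return (lambda A: np.tanh(A @ W1 + b1)), losses
```

The "nonsymmetric" aspect of the paper's architecture refers to encoder and decoder stacks of different depths; here both sides are a single layer purely to keep the sketch short.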
Procedia PDF Downloads 119
662 Biogeography Based CO2 and Cost Optimization of RC Cantilever Retaining Walls
Authors: Ibrahim Aydogdu, Alper Akin
Abstract:
In this study, the cost and the CO2 emission of RC retaining wall designs are minimized using the Biogeography Based Optimization (BBO) algorithm. This has been achieved by developing computer programs that employ the BBO algorithm to minimize the cost and the CO2 emission of RC retaining walls. The objective functions of the optimization problem are defined as the cost, the CO2 emission, and a weighted aggregate of the cost and CO2 functions of the RC retaining walls. In the formulation of the optimum design problem, the height and thickness of the stem, the length of the toe projection, the thickness of the stem at base level, the length and thickness of the base, the depth and thickness of the key, the distance from the toe to the key, and the number and diameter of the reinforcement bars are treated as design variables. Flexural and shear strength constraints and minimum/maximum limitations for the reinforcement bar areas are derived from the American Concrete Institute (ACI 318-14) design code, and the development length conditions for suitable detailing of reinforcement are treated as constraints. The obtained optimum designs must satisfy the factors of safety against the failure modes (overturning, sliding, and bearing), strength, serviceability, and other required limitations to attain practically acceptable shapes. To demonstrate the efficiency and robustness of the presented BBO algorithm, an optimum design example for retaining walls is presented and the results are compared to results previously reported in the literature.
Keywords: biogeography, meta-heuristic search, optimization, retaining wall
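The core of Biogeography Based Optimization can be sketched as ranked "habitats" (candidate solutions) exchanging solution variables through immigration/emigration rates, plus random mutation and elitism. The sketch below minimizes a toy sphere function; the paper applies the same loop to wall cost/CO2 objectives under ACI 318-14 constraints, none of which are modeled here, and the rate model and parameters are illustrative simplifications.

```python
import random

def bbo_minimize(f, bounds, n_habitats=20, generations=100, p_mut=0.05, seed=1):
    """Minimize f over box bounds with a simplified BBO loop.

    bounds -- list of (lo, hi) per design variable.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_habitats)]
    for _ in range(generations):
        pop.sort(key=f)                        # best habitat first
        elite = pop[0][:]
        n = n_habitats
        for i in range(n):
            lam = (i + 1) / n                  # immigration rate: high for poor habitats
            for d in range(dim):
                if rng.random() < lam:
                    # emigration source: biased toward good (low-index) habitats
                    j = min(rng.randrange(n), rng.randrange(n))
                    pop[i][d] = pop[j][d]
                if rng.random() < p_mut:       # random mutation within bounds
                    lo, hi = bounds[d]
                    pop[i][d] = rng.uniform(lo, hi)
        pop.sort(key=f)
        pop[-1] = elite                        # elitism: never lose the best design
    return min(pop, key=f)
```

In the retaining-wall setting, each habitat would hold the geometric and reinforcement design variables listed in the abstract, with constraint violations handled through penalties added to the cost/CO2 objective.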
Procedia PDF Downloads 401
661 Improving Comfort and Energy Mastery: Application of a Method Based on Indicators Morpho-Energetic
Authors: Khadidja Rahmani, Nahla Bouaziz
Abstract:
Climate change and the ongoing economic crisis are at the origin of many issues and problems related, directly or indirectly, to the domains of energy and environment. Since urban space is the core element and the key to solving the current problem, particular attention is devoted to it in this study, for the opportunities it provides that can be invested to somewhat attenuate this disastrous and worrying situation, especially in the face of the requirements of sustainable development. Indeed, the purpose of this work is to develop a method which will allow us to guide designers towards projects with a certain degree of thermo-aeraulic comfort while requiring minimum energy consumption. In this context, architects, urban planners and energy engineers have to collaborate jointly to establish a method based on indicators for the improvement of urban environmental quality (thermo-aeraulic comfort), correlated with a reduction in the energy demand of the entities that make up this environment, in areas with a sub-humid climate. In order to test the feasibility of the method developed in this work and to validate it, we carried out a series of computer-based simulations. This research allows us to evaluate the impact of using the indicators in the design of urban sets, on both the economic and the ecological planes. Using this method, we show that an urban design whose energy aspects are carefully considered can contribute significantly to the preservation of the environment and the reduction of energy consumption.
Keywords: comfort, energy consumption, energy mastery, morpho-energetic indicators, simulation, sub-humid climate, urban sets
Procedia PDF Downloads 276
660 Business and Psychological Principles Integrated into Automated Capital Investment Systems through Mathematical Algorithms
Authors: Cristian Pauna
Abstract:
With the year 2020 only a few steps away, investing in financial markets is a common activity nowadays. In the electronic trading environment, automated investment software has become a major part of the business intelligence system of any modern financial company. Today, investment decisions are assisted and/or made automatically by computers using mathematical algorithms. The complexity of these algorithms requires computer assistance in the investment process. This paper presents several investment strategies that can be automated with algorithmic trading for the Deutscher Aktienindex (DAX30). It was found that, based on several price-action mathematical models used for high-frequency trading, some investment strategies can be optimized and improved for automated investment with good results. This paper presents the way these investment decisions are automated, and automated signals are built using all of these strategies. Three major types of investment strategies were found in this study, distinguished by the target length and by the exit strategy used. The exit decisions are also automated, and the paper presents the specifics of each investment type. A comparative study is also included in order to reveal the differences between the strategies. Based on these results, the profit and the capital exposure are compared and analyzed in order to qualify the investment methodologies presented and to compare them with any other investment system. In conclusion, some major investment strategies are revealed and compared in order to be considered for inclusion in any automated investment system.
Keywords: algorithmic trading, automated investment systems, limit conditions, trading principles, trading strategies
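As a hedged sketch of what one such automated entry/exit rule can look like, the snippet below backtests a simple price-action entry (a breakout above a rolling high) with the exit automated by a profit target and a stop-loss limit condition. The rule, parameters and price series are illustrative assumptions, not the paper's DAX30 models.

```python
# Toy automated trading rule: breakout entry, target/stop exit.
# All numbers are invented sample data for illustration only.

def backtest(prices, lookback=3, target=2.0, stop=1.0):
    """Return the list of completed trades as (entry, exit) price pairs."""
    trades, entry = [], None
    for i in range(lookback, len(prices)):
        p = prices[i]
        if entry is None:
            if p > max(prices[i - lookback:i]):           # breakout entry
                entry = p
        elif p >= entry + target or p <= entry - stop:    # automated exit
            trades.append((entry, p))
            entry = None
    return trades

series = [10, 9.8, 9.9, 10.2, 10.4, 10.1, 9.0, 9.1, 9.3, 9.6, 11.8]
for e, x in backtest(series):
    print(f"entry={e} exit={x} pnl={x - e:+.1f}")
```

The three strategy types discussed in the paper would differ precisely in how `target`, `stop` and the holding horizon are chosen.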
Procedia PDF Downloads 194
659 The Effect of Curcumin on Cryopreserved Bovine Semen
Authors: Eva Tvrdá, Marek Halenár, Hana Greifová, Alica Mackovich, Faridullah Hashim, Norbert Lukáč
Abstract:
Oxidative stress associated with semen cryopreservation may result in lipid peroxidation (LPO), DNA damage and apoptosis, leading to decreased sperm motility and fertilization ability. Curcumin (CUR), a natural phenol isolated from Curcuma longa Linn., has been proposed as a supplement for more effective semen cryopreservation because of its antioxidant properties. This study evaluated the effects of CUR on selected oxidative stress parameters in cryopreserved bovine semen. Twenty bovine ejaculates were split into two aliquots and diluted with a commercial semen extender containing CUR (50 μmol/L) or no supplement (control), cooled to 4 °C, frozen and kept in liquid nitrogen. Frozen straws were thawed in a water bath for the subsequent experiments. Computer-assisted semen analysis was used to evaluate spermatozoa motility, and reactive oxygen species (ROS) generation was quantified by luminometry. Superoxide generation was evaluated with the NBT test, and LPO was assessed via the TBARS assay. CUR supplementation significantly (P<0.001) increased spermatozoa motility and provided significantly higher protection against the ROS (P<0.001) or superoxide (P<0.01) overgeneration caused by semen freezing and thawing. Furthermore, CUR administration resulted in significantly (P<0.01) lower LPO in the experimental semen samples. In conclusion, CUR exhibits significant ROS-scavenging activities which may prevent oxidative insults to cryopreserved spermatozoa and thus may enhance the post-thaw functional activity of male gametes.
Keywords: bulls, cryopreservation, curcumin, lipid peroxidation, reactive oxygen species, spermatozoa
Procedia PDF Downloads 269
658 Investigation on Scattered Dose Rate and Exposure Parameters during Diagnostic Examination Done with an Overcouch X-Ray Tube in Nigerian Teaching Hospital
Authors: Gbenga Martins, Christopher J. Olowookere, Lateef Bamidele, Kehinde O. Olatunji
Abstract:
The aims of this research are to measure the scattered dose rate during X-ray examinations in an X-ray room, to compare the scattered dose rate with the exposure parameters according to the body region examined, and to examine X-ray examinations done with an overcouch tube. The research was carried out using Gamma Scout software installed on a laptop computer to record the radiation counts, pulse rate, and dose rate. The measurements were made by placing the detector at 90° to the incident X-ray beam. A proforma was used for the collection of patients' data such as age, sex, examination type, and initial diagnosis. Data such as focus-to-skin distance (FSD), body mass index (BMI), body thickness of the patients, and the beam output (kVp) were collected at Obafemi Awolowo University, Ile-Ife, Western Nigeria. A total of 136 patients was considered in this research. The dose rate ranged between 14.21 and 86.78 µSv/h for the plain abdominal region, 2.86 and 85.70 µSv/h for the lumbosacral region, 1.3 and 3.6 µSv/yr for the pelvis region, 2.71 and 28.88 µSv/yr for the leg region, and 3.06 and 29.98 µSv/yr for the hand region. The results of this study were compared with those of studies carried out in other countries. The findings indicate that the exposure parameters selected for each diagnostic examination contributed to the dose rate recorded. These results therefore call for a quality assurance programme (QAP) in diagnostic X-ray units in Nigerian hospitals.
Keywords: X-radiation, exposure parameters, dose rate, pulse rate, number of counts, tube current, tube potential, diagnostic examination, scattered radiation
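A minimal sketch of the post-processing step implied above: grouping logged dose-rate readings by examined body region and reporting the minimum-maximum range, as quoted in the results. The readings, region names and units below are made-up sample values, not the study's data.

```python
# Group (region, dose_rate) readings and report per-region min/max ranges.
# Sample values are invented for illustration.

def dose_rate_ranges(readings):
    """readings: iterable of (region, dose_rate) -> {region: (min, max)}"""
    ranges = {}
    for region, rate in readings:
        lo, hi = ranges.get(region, (rate, rate))
        ranges[region] = (min(lo, rate), max(hi, rate))
    return ranges

log = [("abdomen", 14.21), ("abdomen", 86.78), ("abdomen", 40.0),
       ("pelvis", 1.3), ("pelvis", 3.6)]
for region, (lo, hi) in dose_rate_ranges(log).items():
    print(f"{region}: {lo} to {hi} uSv/h")
```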
Procedia PDF Downloads 117
657 Easymodel: Web-based Bioinformatics Software for Protein Modeling Based on Modeller
Authors: Alireza Dantism
Abstract:
Presently, determining the function of a protein sequence is one of the most common problems in biology. Usually, this problem can be approached by studying the three-dimensional structure of the protein. In the absence of an experimental protein structure, comparative modeling often provides a useful three-dimensional model of the protein, derived from at least one known protein structure. Comparative modeling predicts the three-dimensional structure of a given protein sequence (the target) mainly based on its alignment with one or more proteins of known structure (the templates). Comparative modeling consists of five main steps: 1. finding similarity between the target sequence and at least one known template structure; 2. alignment of the target sequence and the template(s); 3. building a model based on the alignment with the selected template(s); 4. prediction of model errors; 5. optimization of the built model. There are many computer programs and web servers that automate the comparative modeling process. One of the most important advantages of these servers is that they make comparative modeling available to both experts and non-experts, who can easily do their own modeling without any need for programming knowledge; some experts, however, prefer to do their modeling manually with programming, because this allows them to maximize the accuracy of the modeling. In this study, a web-based tool called EasyModel has been designed to predict the tertiary structure of proteins using the PHP and Python programming languages.
According to the user's inputs, EasyModel receives the unknown sequence of interest (the target), a file for a protein (the template) that shares a percentage of similarity with the target sequence, and so on; it then predicts the tertiary structure of the unknown sequence and presents the results in the form of graphs and constructed protein files.
Keywords: structural bioinformatics, protein tertiary structure prediction, modeling, comparative modeling, modeller
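As a toy illustration of the first comparative modeling step (template search by sequence similarity), the sketch below ranks candidate templates by percent identity over an ungapped, position-by-position comparison. Real pipelines, including Modeller-based servers, use proper alignment algorithms; the sequences and template names here are invented.

```python
# Rank candidate template structures by crude percent identity to a target.
# An ungapped comparison is a simplification of real template search.

def percent_identity(target, template):
    n = min(len(target), len(template))
    matches = sum(1 for a, b in zip(target, template) if a == b)
    return 100.0 * matches / n if n else 0.0

def pick_template(target, templates):
    """templates: {name: sequence} -> (best_name, identity)"""
    best = max(templates, key=lambda name: percent_identity(target, templates[name]))
    return best, percent_identity(target, templates[best])

target = "MKTAYIAKQR"                      # hypothetical target sequence
templates = {"1abc": "MKTAYLAKQR",         # hypothetical template sequences
             "2xyz": "MATTYIAQQR"}
print(pick_template(target, templates))
```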
Procedia PDF Downloads 98
656 New Employee on-Boarding Program: Effective Tool for Reducing the Prevalence of Workplace Injuries/Accidents
Authors: U. Ugochukwu, J. Lee, P. Conley
Abstract:
According to a recent survey by the UT Southwestern Workplace Safety Committee, the three most common on-the-job injuries reported by workers at the medical center are musculoskeletal injuries, slip-and-fall injuries and repetitive motion injuries. Last year alone, of the 650 documented workplace injuries and accidents, 45% occurred among employees in their first two years of employment. The UT Southwestern New Employee On-Boarding program was created and modeled after OSHA's model, which consists of: determining if training is needed, identifying training needs, identifying goals and objectives, developing learning activities, conducting the training, evaluating program effectiveness, and improving the program. The hospital's management best practices were recreated to limit and control workplace injuries and accidents. Regular training sessions and workshops on workplace safety and compliance were initiated for new employees. Various computer workstations were evaluated, and recommendations were made to reduce musculoskeletal disorders. Post-exposure protocols and worker protection programs were remodeled for the infectious agents and chemicals used in the hospital, and medical surveillance programs were updated for every emerging threat to ensure compliance with US policy, regulatory, and standard-setting organizations. If ignorance of specific job hazards and of proper work practices is to blame for this higher injury rate, then training will help to provide a solution. Use of this program in training activities is just one of many ways UT Southwestern complied with the OSHA standards that relate to training while enhancing the safety and health of its employees.
Keywords: ergonomics, hazard, on-boarding, surveillance, workplace
Procedia PDF Downloads 330
655 Disease Level Assessment in Wheat Plots Using a Residual Deep Learning Algorithm
Authors: Felipe A. Guth, Shane Ward, Kevin McDonnell
Abstract:
The assessment of disease levels in crop fields is an important and time-consuming task that generally relies on the expert knowledge of trained individuals. Image classification for agriculture problems has historically been based on classical machine learning strategies that make use of hand-engineered features on top of a classification algorithm. This approach tends not to produce results with high accuracy and generalization across the classes classified by the system when the nature of the elements has significant variability. The advent of deep convolutional neural networks has revolutionized the field of machine learning, especially in computer vision tasks. These networks have great capacity for learning and have been applied successfully to image classification and object detection tasks in recent years. The objective of this work was to propose a new method, based on deep convolutional neural networks, for the task of disease-level monitoring. Common RGB images of winter wheat were obtained during a growing season. Five categories of disease-level presence were defined, in collaboration with agronomists, for the algorithm to classify. Disease-level assessments performed by experts provided ground-truth data for the disease scores of the same winter wheat plots where the RGB images were acquired. The system had an overall accuracy of 84% on the discrimination of the disease-level classes.
Keywords: crop disease assessment, deep learning, precision agriculture, residual neural networks
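A conceptual sketch of the residual (skip) connection that gives residual networks their name: the block learns only a correction F(x), and the input is added back, y = F(x) + x, which eases the training of very deep networks. The toy weights and vector below are invented; this is not the trained disease-level classifier.

```python
# Minimal residual block on plain Python lists: y = relu(W.x + b) + x.
# Toy values only; a real residual network stacks many such blocks.

def relu(v):
    return [max(0.0, x) for x in v]

def linear(v, weights, bias):
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, bias)]

def residual_block(v, weights, bias):
    # The identity shortcut adds the input back onto the learned correction.
    return [f + x for f, x in zip(relu(linear(v, weights, bias)), v)]

x = [1.0, 2.0]
W = [[0.1, 0.0], [0.0, 0.1]]
b = [0.0, 0.0]
print(residual_block(x, W, b))
```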
Procedia PDF Downloads 334
654 mm-Wave Wearable Edge Computing Module Hosted by Printed Ridge Gap Waveguide Structures: A Physical Layer Study
Authors: Matthew Kostawich, Mohammed Elmorsy, Mohamed Sayed Sifat, Shoukry Shams, Mahmoud Elsaadany
Abstract:
6G communication systems represent the nominal future extension of current wireless technology, whose impact will extend to touch upon all human activities, including medical, security, and entertainment applications. As a result, human needs are allocated among the highest-priority aspects of the system design and requirements. 6G communication is expected to replace all current video conferencing with interactive virtual-reality meetings involving high-data-rate transmission merged with massive distributed computing resources. In addition, the current expansion of IoT applications must be met with significant network changes to provide a reasonable Quality of Service (QoS). This directly implies a high demand for Human-Computer Interaction (HCI) through mobile computing modules in future wireless communication systems. This article proposes the utilization of a Printed Ridge Gap Waveguide (PRGW) to host the wearable nodes. To the best of our knowledge, we propose for the first time a physical-layer analysis within the context of a complete architecture. A thorough study is provided on the impact of distortion of the guiding structure on the overall system performance. The proposed structure shows low latency and low losses, highlighting its compatibility with future applications.
Keywords: ridge gap waveguide, edge computing module, 6G, multimedia IoT applications
Procedia PDF Downloads 74
653 Work in the Industry of the Future-Investigations of Human-Machine Interactions
Authors: S. Schröder, P. Ennen, T. Langer, S. Müller, M. Shehadeh, M. Haberstroh, F. Hees
Abstract:
For a little over a year, Festo AG & Co. KG, Festo Didactic SE, robomotion GmbH, researchers of the Cybernetics-Lab IMA/ZLW and IfU, and the Human-Computer Interaction Center at RWTH Aachen University have been working together on the focal point of assembly competences to realize different scenarios in the field of human-machine interaction (HMI). In the framework of the ARIZ project, questions concerning the future of production within the fourth industrial revolution are addressed. There are many perspectives of human-robot collaboration that constitute Industry 4.0 at the individual, organizational and enterprise levels, and these will be addressed in ARIZ. The aim of the ARIZ project is to link AI approaches to assembly problems and to implement them as prototypes in demonstrators. To do so, island-based and flow-based production scenarios will be simulated and realized as prototypes. These prototypes will serve as applications of flexible robotics as well as of AI-based planning and control of the production process. Using the demonstrators, human interaction strategies will be examined with an information system on the one hand and a robotic system on the other. During the tests, prototypes of workspaces that illustrate prospective forms of production work will be presented. The human being will remain a central element in future production and will increasingly be in charge of managerial tasks. Questions thus arise within the overall perspective, primarily concerning the role of humans within these technological revolutions, their ability to act and to design, and the acceptance of such systems. Roles such as 'trainer' of intelligent systems may become a possibility in such assembly scenarios.
Keywords: human-machine interaction, information technology, island based production, assembly competences
Procedia PDF Downloads 208
652 Sustainability of Ecotourism Related Activities in the Town of Yercaud: A Modeling Study
Authors: Manoj Gupta Charan Pushparaj
Abstract:
Tourism-related activities are becoming more popular by the day, and tourism has become an integral part of everyone's life. Ecotourism initiatives have grown enormously in the past decade, and the concept of ecotourism has been shown to bring great benefits in terms of environment conservation and improving the livelihood of local people. However, whether ecotourism can sustainably improve the livelihood of the local population in the remote future is a topic of active debate. A primary challenge in this regard is the enormous cost of limiting the impact of tourism-related activities on the environment. Here we employ a systems-modeling approach using computer simulations to determine whether ecotourism activities in the small hill town of Yercaud (Tamil Nadu, India) can be sustained over the years in improving the livelihood of the local population. Increasing damage to the natural environment as a result of tourism-related activities has plagued the pristine hill station of Yercaud. Though ecotourism efforts can help conserve the environment and enrich the local population, questions remain as to whether this can be sustained in the distant future. The vital state variables in the model are the existing tourism foundation (labor, services available to tourists, etc.) in the town of Yercaud and its natural environment (water, flora and fauna). Another state variable is the textile industry that drives the local economy. Our results help to determine whether environment conservation efforts in Yercaud are sustainable and also offer suggestions to make them sustainable over the course of several years.
Keywords: ecotourism, simulations, modeling, Yercaud
Procedia PDF Downloads 275
651 Using Building Information Modelling to Mitigate Risks Associated with Health and Safety in the Construction and Maintenance of Infrastructure Assets
Authors: Mohammed Muzafar, Darshan Ruikar
Abstract:
BIM, an acronym for Building Information Modelling, relates to the practice of creating a computer-generated model which is capable of displaying the planning, design, construction and operation of a structure. The resulting simulation is a data-rich, object-oriented, intelligent and parametric digital representation of the facility, from which views and data appropriate to various users' needs can be extracted and analysed to generate information that can be used to make decisions and to improve the process of delivering the facility. BIM also refers to a shift in culture that will influence the way the built environment and infrastructure operate and how they are delivered. One of the main issues of concern in the UK construction industry at present is its record on health and safety (H&S). It is, therefore, important that new technologies such as BIM are developed to help improve the quality of health and safety. Historically, the H&S record of the UK construction industry is relatively poor compared to that of the manufacturing industries. BIM and the digital environment it operates within now allow us to use design and construction data in a more intelligent way, allowing data generated by the design process to be re-purposed to improve efficiencies in other areas of a project. This evolutionary step in design is creating exciting opportunities not only for the designers themselves but for every stakeholder in any given project: from designers, engineers and contractors through to H&S managers, BIM is accelerating a cultural change. The paper introduces the concept behind a research project that mitigates the H&S risks associated with the construction, operation and maintenance of assets through the adoption of BIM.
Keywords: building information modeling, BIM levels, health, safety, integration
Procedia PDF Downloads 255
650 Enhancer: An Effective Transformer Architecture for Single Image Super Resolution
Authors: Pitigalage Chamath Chandira Peiris
Abstract:
A widely researched domain in the field of image processing in recent times has been single-image super-resolution, which tries to restore a high-resolution image from a single low-resolution image. Many single-image super-resolution methods have been developed, utilizing both traditional and deep learning methodologies, as well as a variety of others. Deep-learning-based super-resolution methods in particular have received significant interest. At present, the most advanced image restoration approaches are based on convolutional neural networks; nevertheless, only a few efforts have used Transformers, which have demonstrated excellent performance on high-level vision tasks. The effectiveness of CNN-based algorithms in image super-resolution has been impressive; however, these methods cannot completely capture the non-local features of the data. Enhancer is a simple yet powerful Transformer-based approach for enhancing the resolution of images. In this study, a method for single-image super-resolution was developed utilizing an efficient and effective transformer design. The proposed architecture makes use of a locally enhanced window transformer block to alleviate the enormous computational load associated with non-overlapping window-based self-attention, and it incorporates depth-wise convolution in the feed-forward network to enhance its ability to capture local context. The study is assessed by comparing the results obtained on popular datasets with those obtained by other techniques in the domain.
Keywords: single image super resolution, computer vision, vision transformers, image restoration
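As a hedged, minimal sketch of the window-based self-attention idea named above, the snippet below computes scaled dot-product attention independently inside each small, non-overlapping window of a 1-D token sequence. The 2-D image windows, learned projections and the locally enhanced convolution of the actual architecture are omitted; the token values are invented.

```python
import math

# Windowed self-attention on plain lists: attention is computed only
# within each window, which caps its quadratic cost at the window size.

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(tokens):
    """Self-attention where each token serves as query, key and value."""
    d = len(tokens[0])
    out = []
    for q in tokens:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in tokens]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, tokens))
                    for i in range(d)])
    return out

def window_attention(tokens, window=2):
    out = []
    for start in range(0, len(tokens), window):   # non-overlapping windows
        out.extend(attention(tokens[start:start + window]))
    return out

seq = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]
print(window_attention(seq, window=2))
```

Each output token is a convex combination of the tokens in its own window only, which is exactly why cross-window information flow has to be reintroduced by other components of the block.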
Procedia PDF Downloads 106