Search results for: precision application
7197 Environmental Controls on the Distribution of Intertidal Foraminifers in Sabkha Al-Kharrar, Saudi Arabia: Implications for Sea-Level Changes
Authors: Talha A. Al-Dubai, Rashad A. Bantan, Ramadan H. Abu-Zied, Brian G. Jones, Aaid G. Al-Zubieri
Abstract:
Contemporary foraminiferal sediment samples were collected from the intertidal sabkha of Al-Kharrar Lagoon, Saudi Arabia, to study the vertical distribution of Foraminifera and, based on a modern training set, their potential as a predictor of former sea-level changes in the area. Based on hierarchical cluster analysis, the intertidal sabkha is divided into three vertical zones (A, B and C) represented by three foraminiferal assemblages, where agglutinated species occupy Zone A and calcareous species occupy the other two zones. In Zone A (high intertidal), Agglutinella compressa, Clavulina angularis and C. multicamerata are the dominant species, with a minor presence of Peneroplis planatus, Coscinospira hemprichii, Sorites orbiculus, Quinqueloculina lamarckiana, Q. seminula, Ammonia convexa and A. tepida. In contrast, in Zone B (middle intertidal) the most abundant species are P. planatus, C. hemprichii, S. orbiculus, Q. lamarckiana, Q. seminula and Q. laevigata, while Zone C (low intertidal) is characterised by C. hemprichii, Q. costata, S. orbiculus, P. planatus, A. convexa, A. tepida, Spiroloculina communis and S. costigera. A transfer function for sea-level reconstruction was developed using a modern dataset of 75 contemporary sediment samples and 99 species collected from several transects across the sabkha. The model provided an error of 0.12 m, suggesting that intertidal foraminifers can reconstruct past sea-level changes with high precision in Al-Kharrar Lagoon and thus support the prediction of future changes in the area.
Keywords: Lagoonal foraminifers, intertidal sabkha, vertical zonation, transfer function, sea level
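The abstract does not state the form of the transfer function; a common choice in foraminiferal sea-level studies is weighted-averaging (WA) regression, in which each species gets an elevation optimum as the abundance-weighted mean of the elevations where it occurs, and a sample's elevation is estimated as the abundance-weighted mean of its species optima. A minimal sketch under that assumption (all species counts and elevations below are illustrative, not the paper's data):

```python
def wa_optima(samples):
    """Abundance-weighted elevation optimum for each species.

    samples: list of (elevation_m, {species: abundance}) pairs.
    """
    num, den = {}, {}
    for elevation, counts in samples:
        for sp, ab in counts.items():
            num[sp] = num.get(sp, 0.0) + ab * elevation
            den[sp] = den.get(sp, 0.0) + ab
    return {sp: num[sp] / den[sp] for sp in num}

def wa_estimate(counts, optima):
    """Estimate a sample's elevation from its species abundances."""
    total = sum(ab for sp, ab in counts.items() if sp in optima)
    return sum(ab * optima[sp] for sp, ab in counts.items()
               if sp in optima) / total

# Illustrative modern training set: (elevation in m, species abundances).
training = [
    (1.2, {"A. compressa": 40, "C. angularis": 30}),
    (0.6, {"P. planatus": 35, "Q. seminula": 25}),
    (0.1, {"C. hemprichii": 50, "S. orbiculus": 20}),
]
optima = wa_optima(training)
fossil = {"P. planatus": 20, "C. hemprichii": 10}
print(round(wa_estimate(fossil, optima), 3))  # → 0.433
```

The reconstruction error quoted in the abstract (0.12 m) would come from cross-validating such a model against the modern training set.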
Procedia PDF Downloads 173
7196 Determination of a Novel Artificial Sweetener Advantame in Food by Liquid Chromatography Tandem Mass Spectrometry
Authors: Fangyan Li, Lin Min Lee, Hui Zhu Peh, Shoet Harn Chan
Abstract:
Advantame, a derivative of aspartame, is the latest addition to a family of low-caloric and highly potent dipeptide sweeteners which includes aspartame, neotame and alitame. The use of advantame as a high-intensity sweetener in food was first accepted by Food Standards Australia New Zealand in 2011 and subsequently by US and EU food authorities in 2014, with the results from toxicity and exposure studies showing that advantame poses no safety concern to the public at regulated levels. To our knowledge, there is currently barely any detailed information on the analytical method for advantame in food matrices, except for one report published in Japanese, stating a high performance liquid chromatography (HPLC) and liquid chromatography/mass spectrometry (LC-MS) method with a detection limit at the ppm level. However, the use of acid in sample preparation and instrumental analysis in that report raised doubt over the reliability of the method, as there is indication that the stability of advantame is compromised under acidic conditions. Besides, the method may not be suitable for analyzing food matrices containing advantame at low ppm or sub-ppm levels. In this presentation, a simple, specific and sensitive method for the determination of advantame in food is described. The method involves extraction with water and clean-up via solid phase extraction (SPE), followed by detection using liquid chromatography tandem mass spectrometry (LC-MS/MS) in negative electrospray ionization mode. No acid was used in the entire procedure. Single-laboratory validation of the method was performed in terms of linearity, precision and accuracy. A low detection limit at the ppb level was achieved. Satisfactory recoveries were obtained using spiked samples at three different concentration levels. This validated method can be used in the routine inspection of advantame levels in food.
Keywords: advantame, food, LC-MS/MS, sweetener
Procedia PDF Downloads 479
7195 Physical Characterization of a Watershed for Correlation with Parameters of Thomas Hydrological Model and Its Application in Iber Hidrodinamic Model
Authors: Carlos Caro, Ernest Blade, Nestor Rojas
Abstract:
This study determined the relationship between basic geotechnical parameters and the parameters of the Thomas hydrological model for the water balance of rural watersheds, as a methodological calibration application applicable to distributed models such as the IBER model, a distributed model for the numerical simulation of unsteady free-surface flow. Soil samples were collected at 25 points across 15 sub-basins of the Rio Piedras (Boy.) watershed and characterised geotechnically by laboratory tests. The Thomas model describes the physical characteristics of the input area with only four parameters (a, b, c, d). Establishing a measurable relationship between the geotechnical parameters and these four hydrological parameters helps to determine subsurface, groundwater and surface flow more readily. In this way, the study aims to constrain the initial model parameters on the basis of the geotechnical characterisation. In hydrogeological models of rural watersheds, calibration is an important step in the characterisation of the study area, and it can require significant computational cost and time, especially if the initial parameter values before calibration are far from the geotechnical reality. A better choice of these initial values optimises the process: the geotechnical characterisation of the materials in the area provides a sound starting range of variation for the calibration parameters.
Keywords: distributed hydrology, hydrological and geotechnical characterization, Iber model
Procedia PDF Downloads 525
7194 A Deep Learning Approach to Detect Complete Safety Equipment for Construction Workers Based on YOLOv7
Authors: Shariful Islam, Sharun Akter Khushbu, S. M. Shaqib, Shahriar Sultan Ramit
Abstract:
In the construction sector, ensuring worker safety is of the utmost significance. In this study, a deep learning-based technique is presented for identifying safety gear worn by construction workers, such as helmets, goggles, jackets, gloves, and footwear. The suggested method precisely locates these safety items using the YOLOv7 (You Only Look Once) object detection algorithm. The dataset utilized in this work is a custom dataset of labeled images split into training, testing and validation sets. Each image has bounding-box labels that indicate where the safety equipment is located within the image. The model is trained through an iterative approach to identify and categorize the safety equipment based on this labeled dataset. Our trained model performed admirably, with good precision, recall, and F1-score for safety equipment recognition, and its evaluation produced encouraging results with a mAP@0.5 score of 87.7%. The model performs effectively, making it possible to quickly identify safety-equipment violations on building sites. A thorough evaluation of the outcomes reveals the model's advantages and points out potential areas for development. By offering an automatic and trustworthy method for safety equipment detection, this research contributes to the fields of computer vision and workplace safety. The proposed deep learning-based approach will increase safety compliance and reduce the risk of accidents in the construction industry.
Keywords: deep learning, safety equipment detection, YOLOv7, computer vision, workplace safety
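The precision, recall, and F1-score reported above are standard detection metrics computed from true positives, false positives, and false negatives. A minimal sketch for one class (the counts below are illustrative, not taken from the paper):

```python
def precision_recall_f1(tp, fp, fn):
    """Detection metrics from true/false positive and false negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative counts for a single class such as "helmet":
p, r, f1 = precision_recall_f1(tp=88, fp=12, fn=10)
print(round(p, 3), round(r, 3), round(f1, 3))
```

The mAP@0.5 figure additionally averages the precision over recall levels and over classes, counting a detection as a true positive when its box overlaps the ground truth with IoU of at least 0.5.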
Procedia PDF Downloads 70
7193 Design and Implementation of Control System in Underwater Glider of Ganeshblue
Authors: Imam Taufiqurrahman, Anugrah Adiwilaga, Egi Hidayat, Bambang Riyanto Trilaksono
Abstract:
The Autonomous Underwater Vehicle (AUV) glider is one of the newer classes of underwater vehicles and one of the autonomous underwater vehicles being developed in Indonesia. Gliding is achieved by controlling the buoyancy and attitude of the vehicle using movers within the vehicle. The glider motion mechanism is expected to reduce the energy demand of autonomous underwater vehicles and so increase their cruising range while performing missions. The control system on the vehicle consists of three parts: the pitch attitude controller, the buoyancy engine controller and the yaw controller. Buoyancy and pitch control refer sequentially to a finite state machine, with pitch angle and diving depth as inputs, to obtain a gliding cycle, while yaw control is done through the rudder for the needs of the guidance system. This research focuses on the design and implementation of the control system of the AUV glider based on anti-windup PID. The control system is implemented on an ARM TS-7250-V2 device together with a mathematical model of the vehicle in MATLAB using the hardware-in-the-loop simulation (HILS) method. The TS-7250-V2 was chosen because it complies with industry standards and combines high computing capability with minimal power consumption. The results show that the control system in the HILS process can form a glide cycle with the desired depth and operating angle. In the implementation using half-control and full-control modes, the experiments show that full-control mode tracks the reference more precisely, while half-control mode is considered more efficient in carrying out the mission.
Keywords: control system, PID, underwater glider, marine robotics
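The abstract does not give the controller equations; one common anti-windup scheme is conditional integration, where the integrator is frozen whenever the actuator output is saturated. A minimal discrete-time sketch of that idea (all gains, limits, and the sample time are illustrative, not the paper's tuning):

```python
class PIDAntiWindup:
    """Discrete-time PID with conditional-integration anti-windup:
    the integrator is frozen while the output is saturated, so the
    integral term cannot wind up beyond the actuator limits."""

    def __init__(self, kp, ki, kd, out_min, out_max, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Tentative output using the current integrator state.
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        if self.out_min < u < self.out_max:
            self.integral += error * self.dt  # integrate only when unsaturated
        return min(max(u, self.out_min), self.out_max)

# Illustrative: a large pitch-angle error saturates the actuator,
# and the integrator stays frozen instead of winding up.
pid = PIDAntiWindup(kp=2.0, ki=0.5, kd=0.1, out_min=-1.0, out_max=1.0, dt=0.1)
print(pid.update(setpoint=1.0, measurement=0.0))  # clamped to 1.0
print(pid.integral)                               # still 0.0
```

In a HILS setup like the one described, `update` would run on the ARM board each sample period, with the measurement supplied by the MATLAB vehicle model.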
Procedia PDF Downloads 376
7192 Design of Robust and Intelligent Controller for Active Removal of Space Debris
Authors: Shabadini Sampath, Jinglang Feng
Abstract:
With huge kinetic energy, space debris poses a major threat to astronauts' space activities and to spacecraft in orbit if a collision happens. Active removal of space debris is required to avoid frequent collisions; otherwise, the amount of space debris will increase uncontrollably, threatening the safety of the entire space system. The safe and reliable removal of large-scale space debris, however, has been a huge challenge to date. While capturing and deorbiting space debris, the space manipulator has to achieve high control precision, yet uncertainties and unknown disturbances make coordinating its control difficult. To address this challenge, this paper focuses on developing a robust and intelligent control algorithm that controls joint movement and restricts it to the sliding manifold by reducing uncertainties. A neural network adaptive sliding mode controller (NNASMC) is applied with the objective of finding a control law such that the joint motions of the space manipulator follow the given trajectory. Computed torque control (CTC), an effective motion control strategy, is used in this paper to compute the space manipulator arm torque needed to generate the required motion. Based on the Lyapunov stability theorem, the proposed intelligent controller NNASMC together with CTC guarantees the robustness and global asymptotic stability of the closed-loop control system. Finally, the controllers used in the paper are modeled and simulated using MATLAB Simulink, and the results are presented to prove the effectiveness of the proposed approach.
Keywords: GNC, active removal of space debris, AI controllers, MATLAB Simulink
Procedia PDF Downloads 134
7191 Algorithms for Run-Time Task Mapping in NoC-Based Heterogeneous MPSoCs
Authors: M. K. Benhaoua, A. K. Singh, A. E. Benyamina, P. Boulet
Abstract:
Mapping the parallelized tasks of applications onto NoC-based heterogeneous MPSoCs can be done either at design time (static) or at run time (dynamic). Static mapping strategies find the best placement of tasks at design time and hence are not suitable for dynamic workloads, being incapable of run-time resource management. The number of tasks or applications executing on an MPSoC platform can exceed the available resources, requiring efficient run-time mapping strategies to meet these constraints. This paper describes a new spiral dynamic task mapping heuristic for mapping applications onto NoC-based heterogeneous MPSoCs, based on a packing strategy and a routing algorithm that are also proposed in this paper. The heuristic tries to map the tasks of an application within a clustered region to reduce the communication overhead between communicating tasks: tasks that are most related to each other are mapped in a spiral manner, and the best possible path load that minimizes the communication overhead is sought. In this context, we have built a simulation environment for experimental evaluation, mapping applications with varying numbers of tasks onto an 8x8 NoC-based heterogeneous MPSoC platform. We demonstrate that the new mapping heuristic with the proposed modified Dijkstra routing algorithm reduces the total execution time and energy consumption of applications when compared to state-of-the-art run-time mapping heuristics reported in the literature.
Keywords: multiprocessor system on chip, MPSoC, network on chip, NoC, heterogeneous architectures, run-time mapping heuristics, routing algorithm
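The "modified" aspect of the Dijkstra routing is not described in the abstract; as a baseline, the load-minimizing route search it extends can be sketched as a standard Dijkstra search over per-link loads (the 2x2 mesh topology and link loads below are illustrative, not the paper's 8x8 platform):

```python
import heapq

def dijkstra_route(links, src, dst):
    """Least-cost route on a NoC graph whose directed links carry a
    load (cost); returns (total path load, list of routers visited).

    links: {router: {neighbour: link_load}}
    """
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    seen = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in seen:
            continue
        seen.add(node)
        if node == dst:
            break
        for nb, load in links.get(node, {}).items():
            nd = d + load
            if nd < dist.get(nb, float("inf")):
                dist[nb] = nd
                prev[nb] = node
                heapq.heappush(heap, (nd, nb))
    # Reconstruct the path by walking the predecessor chain back to src.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

# Illustrative 2x2 mesh keyed by router coordinates:
mesh = {
    (0, 0): {(0, 1): 1.0, (1, 0): 4.0},
    (0, 1): {(1, 1): 1.0},
    (1, 0): {(1, 1): 1.0},
    (1, 1): {},
}
print(dijkstra_route(mesh, (0, 0), (1, 1)))  # avoids the loaded (1, 0) link
```

A run-time mapper would rerun such a search as link loads change, which is where the paper's modification presumably comes in.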
Procedia PDF Downloads 490
7190 Microwave Dielectric Properties and Microstructures of Nd(Ti₀.₅W₀.₅)O₄ Ceramics for Application in Wireless Gas Sensors
Authors: Yih-Chien Chen, Yue-Xuan Du, Min-Zhe Weng
Abstract:
Carbon monoxide is produced by incomplete combustion. It is toxic even at concentrations below 100 ppm, and since it is colorless and odorless, it is difficult to detect. CO sensors have been developed using a variety of physical mechanisms, including semiconductor oxides, solid electrolytes, and organic semiconductors. Many works have focused on semiconducting sensors with sensitive layers such as ZnO, TiO₂, and NiO, which offer high gas sensitivity; however, these sensors operate at high temperatures, which increases their power consumption. The dielectric resonator (DR), on the other hand, is attractive for gas detection due to its large surface area and sensitivity to the external environment. Materials to be employed in such sensing devices must have a high quality factor. Numerous studies have explored the fergusonite-type structure and related ceramic systems, and extensive research into RENbO₄ ceramics has examined their potential application in resonators, filters, and antennas in modern communication systems operated at microwave frequencies. Here, Nd(Ti₀.₅W₀.₅)O₄ ceramics were synthesized using the conventional solid-state mixed-oxide method. Dielectric constants (εᵣ) of 15.4-19.4 and quality factors (Q×f) of 3,600-11,100 GHz were obtained at sintering temperatures in the range 1425-1525°C for 4 h; the microwave dielectric properties of the Nd(Ti₀.₅W₀.₅)O₄ ceramics were found to vary with the sintering temperature. For a further understanding of these properties, the ceramics were analyzed by densification measurements, X-ray diffraction (XRD), and microstructural observations.
Keywords: dielectric constant, dielectric resonators, sensors, quality factor
Procedia PDF Downloads 261
7189 Statistical Assessment of Models for Determination of Soil–Water Characteristic Curves of Sand Soils
Authors: S. J. Matlan, M. Mukhlisin, M. R. Taha
Abstract:
Characterization of the engineering behavior of unsaturated soil depends on the soil-water characteristic curve (SWCC), a graphical representation of the relationship between water content (or degree of saturation) and soil suction. A reasonable description of the SWCC is thus important for the accurate prediction of unsaturated soil parameters. The measurement procedures for determining the SWCC, however, are difficult, expensive, and time-consuming. During the past few decades, researchers have focused on developing empirical equations for predicting the SWCC, and a large number of empirical models have been suggested. One of the most crucial questions is how precisely existing equations can represent the SWCC. As different models have different ranges of capability, it is essential to evaluate the precision of the SWCC models for each particular soil type. Better estimation of the SWCC is expected via a thorough statistical analysis of its distribution within a particular soil class. With this in view, a statistical analysis was conducted to evaluate the reliability of SWCC prediction models against laboratory measurements. Optimization techniques were used to obtain the best fit of the model parameters for four forms of SWCC equation, using laboratory data for relatively coarse-textured (i.e., sandy) soil. The four most prominent SWCC models were evaluated and computed for each sample. The results show that the Brooks and Corey model is the most consistent in describing the SWCC for sandy soil, and its predictions were also compatible with the samples evaluated in this study, ranging from low to high soil water content.
Keywords: soil-water characteristic curve (SWCC), statistical analysis, unsaturated soil, geotechnical engineering
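The four SWCC equations compared are not reproduced in the abstract; the Brooks and Corey model found most consistent for sand is commonly written as effective saturation Se = (ψb/ψ)^λ for suctions ψ above the air-entry value ψb, and Se = 1 below it. A minimal sketch with illustrative parameter values (not the fitted values from the paper):

```python
def brooks_corey(psi, psi_b, lam, theta_r, theta_s):
    """Brooks-Corey SWCC: volumetric water content at suction psi.

    psi_b: air-entry (bubbling) pressure, lam: pore-size index,
    theta_r / theta_s: residual / saturated volumetric water content.
    """
    if psi <= psi_b:
        return theta_s                # fully saturated below air entry
    se = (psi_b / psi) ** lam         # effective saturation
    return theta_r + se * (theta_s - theta_r)

# Illustrative parameters for a sandy soil (suction in kPa):
for psi in (1.0, 5.0, 20.0, 100.0):
    print(psi, round(brooks_corey(psi, psi_b=2.0, lam=0.8,
                                  theta_r=0.05, theta_s=0.40), 3))
```

Fitting the study's four parameters (ψb, λ, θr, θs) to laboratory suction/water-content pairs is then a standard least-squares optimization over this function.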
Procedia PDF Downloads 340
7188 A Deep Learning Approach to Online Social Network Account Compromisation
Authors: Edward K. Boahen, Brunel E. Bouya-Moko, Changda Wang
Abstract:
The major threat to online social network (OSN) users is account compromisation. Spammers now spread malicious messages by exploiting the trust relationship established between account owners and their friends. The challenge for service providers in detecting a compromised account is validating the trust relationships established between the account owners, their friends, and the spammers. Another challenge is the increasing human interaction required for feature selection. Available research on supervised machine learning has limitations in feature selection and with accounts that cannot be profiled, such as application programming interface (API) accounts. This paper therefore discusses the various behaviours of OSN users and the current approaches to detecting a compromised OSN account, emphasizing their limitations and challenges. We propose a deep learning approach that addresses and resolves the constraints faced by previous schemes. We detail our proposed optimized nonsymmetric deep auto-encoder (OPT_NDAE) for unsupervised feature learning, which reduces the level of human interaction required in the selection and extraction of features. We evaluated our proposed classifier using the NSL-KDD and KDDCUP'99 datasets in a graphical-user-interface-enabled Weka application. The results indicate that our proposed approach outperforms most traditional schemes in OSN compromised-account detection, with an accuracy rate of 99.86%.
Keywords: computer security, network security, online social network, account compromisation
Procedia PDF Downloads 121
7187 Social Imagination and History Teaching: Critical Thinking's Possibilities in the Australian Curriculum
Authors: Howard Prosser
Abstract:
This paper examines how critical thinking is framed, especially for primary-school students, in the recently established Australian Curriculum: History. Critical thinking is one of the curriculum's 'general capabilities', and history provides numerous opportunities for its application in everyday life. The so-called 'history wars' that took place just prior to the curriculum's introduction in 2014 sought to bring to light the limits of a singular historical narrative and reveal that which had been repressed. Consequently, the Australian history curriculum reflects this shifting mindset. Teachers are presented with opportunities to treat history in the classroom as a repository of social possibility, especially related to democratic potential, beyond hackneyed and jingoistic tales of Australian nationhood. Yet such opportunities are not explicit within the document and are up against pre-existing pedagogic practices. Drawing on political thinker Cornelius Castoriadis's rendering of the 'social-historical' and 'paideia', as well as his mobilisation of psychoanalysis, the study outlines how the curriculum's critical-thinking component opens up possibilities for students and teachers to revise assumptions about how history is understood. This ontological shift is ultimately creative: the teachers' imaginations connect with the students' imaginations, and vice versa, through the analysis that is at the heart of historical thinking. The implications of this social imagination add to current discussions about historical consciousness among scholars like Peter Seixas. But, importantly, it has practical application in the primary-school classroom, where history becomes a creative act, like play, that is indeterminate and social rather than fixed and individual.
Keywords: Australia, Castoriadis, critical thinking, history, imagination
Procedia PDF Downloads 307
7186 A Survey of Skin Cancer Detection and Classification from Skin Lesion Images Using Deep Learning
Authors: Joseph George, Anne Kotteswara Roa
Abstract:
Skin disease is one of the most common kinds of health issue faced by people nowadays. Skin cancer (SC) is one of them, and its detection relies on skin biopsy outputs and the expertise of doctors, but this consumes time and can yield inaccurate results. At an early stage, skin cancer detection is a challenging task, and the disease easily spreads to the whole body, increasing the mortality rate; yet skin cancer is curable when detected early. The critical task in classifying skin cancer correctly and accurately is identification and classification based on disease features such as shape, size, color, and symmetry. Many skin diseases share similar characteristics, which makes it challenging to select important features from skin cancer dataset images. Hence, diagnostic accuracy can be improved by an automated skin cancer detection and classification framework, which also mitigates the scarcity of human experts. Recently, deep learning techniques such as the convolutional neural network (CNN), deep belief network (DBN), artificial neural network (ANN), recurrent neural network (RNN), and long short-term memory (LSTM) have been widely used for the identification and classification of skin cancers. This survey reviews these DL techniques for skin cancer identification and classification. Performance metrics such as precision, recall, accuracy, sensitivity, specificity, and F-measure are used to evaluate the effectiveness of SC identification using DL techniques. With these DL techniques, classification accuracy increases while computational complexity and time consumption are mitigated.
Keywords: skin cancer, deep learning, performance measures, accuracy, datasets
Procedia PDF Downloads 134
7185 Providing Reliability, Availability and Scalability Support for Quick Assist Technology Cryptography on the Cloud
Authors: Songwu Shen, Garrett Drysdale, Veerendranath Mannepalli, Qihua Dai, Yuan Wang, Yuli Chen, David Qian, Utkarsh Kakaiya
Abstract:
Hardware accelerators have been a promising solution for reducing the cost of cloud data centers. This paper investigates the QoS enhancement of accelerating an important datacenter workload: the web server (or proxy), which faces high computational load from secure sockets layer (SSL) or transport layer security (TLS) processing in the cloud environment. Our study reveals that for accelerator maintenance cases, such as driver/firmware upgrades or a hardware reset due to a hardware hang, we can still provide cryptography services by switching to software during the maintenance phase and switching back to the accelerator afterwards. The switch is seamless to server applications such as Nginx running inside a VM on top of the server. To achieve this high-availability goal, we propose a comprehensive fallback solution based on Intel® QuickAssist Technology (QAT). The approach introduces an architecture involving collaboration between the physical function (PF) and virtual function (VF), and among the VF, OpenSSL, and the web application Nginx. The evaluation shows that our solution can provide high reliability, availability, and scalability (RAS) of the hardware cryptography service in a 7x24x365 manner in the cloud environment.
Keywords: accelerator, cryptography service, RAS, secure sockets layer/transport layer security, SSL/TLS, virtualization fallback architecture
Procedia PDF Downloads 160
7184 Spatial Object-Oriented Template Matching Algorithm Using Normalized Cross-Correlation Criterion for Tracking Aerial Image Scene
Authors: Jigg Pelayo, Ricardo Villar
Abstract:
Building on the development of aerial laser scanning in the Philippine geospatial industry, research on remote sensing and machine vision technology has become a trend. Object detection via template matching is one of its applications, characterized as fast and real-time. This paper presents an application of a robust pattern matching algorithm based on the normalized cross-correlation (NCC) criterion function within object-based image analysis (OBIA), utilizing high-resolution aerial imagery and low-density LiDAR data. The height information from laser scanning provides an effective partitioning order, improving the hierarchical class feature pattern and allowing unnecessary calculations to be skipped. Since detection is executed on an object-oriented platform, mathematical morphology and multi-level filter algorithms were established to effectively avoid the influence of noise, small distortions and fluctuating image saturation that affect the rate of feature recognition. Furthermore, the scheme was evaluated to assess its performance in different situations and to inspect the computational complexity of the algorithms. Its effectiveness is demonstrated in areas of Misamis Oriental province, achieving an overall accuracy above 91%. The results also portray the potential and efficiency of the implemented algorithm under different lighting conditions.
Keywords: algorithm, LiDAR, object recognition, OBIA
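The NCC criterion function itself is compact: subtract the mean from template and window, take the dot product, and normalize by the two standard deviations, so the score is invariant to brightness and contrast shifts. A minimal one-dimensional sketch (the paper applies the two-dimensional form to image windows; pixel values below are illustrative):

```python
def ncc(template, window):
    """Normalized cross-correlation between a template and an equally
    sized image window (flat sequences of pixel values); 1.0 means a
    perfect match up to brightness/contrast changes, -1.0 an inverted one."""
    n = len(template)
    mt = sum(template) / n
    mw = sum(window) / n
    num = sum((t - mt) * (w - mw) for t, w in zip(template, window))
    dt = sum((t - mt) ** 2 for t in template) ** 0.5
    dw = sum((w - mw) ** 2 for w in window) ** 0.5
    return num / (dt * dw)

# A window that is the template shifted uniformly in brightness still
# scores a perfect match:
template = [10, 20, 30, 20, 10]
print(round(ncc(template, [t + 100 for t in template]), 6))  # → 1.0
```

In template matching, this score is evaluated at every candidate window position and the maxima above a threshold are reported as detections; the OBIA partitioning described above reduces how many positions must be evaluated.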
Procedia PDF Downloads 248
7183 Development and Validation of Selective Methods for Estimation of Valaciclovir in Pharmaceutical Dosage Form
Authors: Eman M. Morgan, Hayam M. Lotfy, Yasmin M. Fayez, Mohamed Abdelkawy, Engy Shokry
Abstract:
Two simple, selective, economic, safe, accurate, precise and environmentally friendly methods were developed and validated for the quantitative determination of valaciclovir (VAL) in the presence of its related substances R1 (acyclovir) and R2 (guanine), in bulk powder and in the commercial pharmaceutical product containing the drug. Method A is a colorimetric method in which VAL selectively reacts with ferric hydroxamate and the developed color is measured at 490 nm over a concentration range of 0.4-2 mg/mL, with a percentage recovery of 100.05 ± 0.58 and a correlation coefficient of 0.9999. Method B is a reversed-phase ultra-performance liquid chromatography (UPLC) technique, which is superior to high-performance liquid chromatography with respect to speed, resolution, solvent consumption, time, and cost of analysis. Efficient separation was achieved on an Agilent Zorbax CN column using ammonium acetate (0.1%) and acetonitrile as the mobile phase in a linear gradient program. The elution time for the separation was less than 5 min, and ultraviolet detection was carried out at 256 nm over a concentration range of 2-50 μg/mL, with a mean percentage recovery of 100.11 ± 0.55 and a correlation coefficient of 0.9999. The proposed methods were fully validated as per International Conference on Harmonization specifications and effectively applied to the analysis of valaciclovir in pure form and in tablet dosage form. Statistical comparison of the results obtained by the proposed and official or reported methods revealed no significant difference in the performance of these methods regarding accuracy and precision, respectively.
Keywords: hydroxamic acid, related substances, UPLC, valaciclovir
Procedia PDF Downloads 248
7182 Barriers and Opportunities for Implementing Electronic Prescription Software in Public Libyan Hospitals
Authors: Abdelbaset M. Elghriani, Abdelsalam M. Maatuk, Isam Denna, Amira Abdulla Werfalli
Abstract:
Electronic prescription (e-prescribing) software benefits patients and physicians by preventing handwriting errors and producing accurate prescriptions. E-prescribing allows prescriptions to be written and sent to pharmacies electronically instead of using handwritten notes. Significant factors identified as barriers to the adoption of e-prescription systems include a lack of technical support, insufficient financial resources to operate the systems, and resistance to change from some clinicians. This study aims to explore the opinions of physicians and pharmacists about e-prescriptions and to identify the obstacles to, and benefits of, applying e-prescriptions in the health care system. A cross-sectional descriptive study was conducted at three Libyan public hospitals. Data were collected through a self-constructed questionnaire assessing opinions on potential constraining factors and benefits of implementing an e-prescribing system in hospitals, and are presented as means, frequency distribution tables, cross-tabulations, and bar charts. The analysis shows that technical, financial, and organizational obstacles are the most important obstacles preventing the application of e-prescribing systems in Libyan hospitals. In addition, there was awareness of the benefits of e-prescribing, especially reducing medication dispensing errors, and a desire among physicians and pharmacists to use electronic prescriptions.
Keywords: physicians, e-prescribing, health care system, pharmacists
Procedia PDF Downloads 129
7181 Automated User Story Driven Approach for Web-Based Functional Testing
Authors: Mahawish Masud, Muhammad Iqbal, M. U. Khan, Farooque Azam
Abstract:
Manually writing test cases from functional requirements is a time-consuming task. Such test cases are not only difficult to write but also challenging to maintain. Test cases can be drawn from functional requirements expressed in natural language, but manual test case generation is inefficient and subject to errors. In this paper, we present a systematic procedure that automatically derives test cases from user stories. The user stories are specified in a restricted natural language using a well-defined template, and we present a detailed methodology for writing such test-ready user stories. Our tool "Test-o-Matic" automatically generates the test cases by processing the restricted user stories, and the generated test cases are executed using the open-source Selenium IDE. We evaluate our approach on a case study, an open-source web-based application, and assess its effectiveness by seeding faults into the case study using known mutation operators. Results show that test case generation from restricted user stories is a viable approach to the automated testing of web applications.
Keywords: automated testing, natural language, restricted user story modeling, software engineering, software testing, test case specification, transformation and automation, user story, web application testing
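The Test-o-Matic transformation rules are not given in the abstract; the core idea of parsing a restricted-template user story into a test-case skeleton can be sketched as follows (the template, field names, and derived fields below are assumptions for illustration, not the tool's actual format):

```python
import re

def parse_user_story(story):
    """Parse a restricted-template user story into a test-case skeleton.

    Assumed template (hypothetical, not Test-o-Matic's actual one):
    'As a <role>, I want to <action> so that <benefit>'
    """
    pattern = (r"As an? (?P<role>.+?), I want to (?P<action>.+?) "
               r"so that (?P<benefit>.+)")
    m = re.fullmatch(pattern, story.strip())
    if not m:
        raise ValueError("story does not match the restricted template")
    fields = m.groupdict()
    return {
        "title": "Verify that a %(role)s can %(action)s" % fields,
        "precondition": "User is a %(role)s" % fields,
        "expected": fields["benefit"],
    }

story = "As a customer, I want to reset my password so that I can log in again"
print(parse_user_story(story)["title"])
```

Because the template is restricted, the parse is deterministic, which is what makes fully automated generation (and rejection of malformed stories) feasible; the generated skeletons would then be filled with Selenium steps for execution.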
Procedia PDF Downloads 388
7180 Polymer Composites Containing Gold Nanoparticles for Biomedical Use
Authors: Bozena Tyliszczak, Anna Drabczyk, Sonia Kudlacik-Kramarczyk, Agnieszka Sobczak-Kupiec
Abstract:
Introduction: Nanomaterials have become some of the leading materials in the synthesis of various compounds, because nano-sized materials exhibit properties different from those of their macroscopic equivalents. Such a change in size is reflected in changes in optical, electric, or mechanical properties. Among nanomaterials, particular attention is currently directed to gold nanoparticles. They find application in a wide range of areas, including cosmetology and pharmacy. Additionally, nanogold may be a component of modern wound dressings, whose antibacterial activity is beneficial for the wound healing process. The specific properties of this type of nanomaterial mean that it may also be applied in cancer treatment. The development of new drug delivery techniques is currently an important research subject for many scientists, because as fields such as medicine and pharmacy advance, the need for better and more effective methods of administering drugs is constantly growing. One solution is the use of drug carriers: materials that combine with the active substance and lead it directly to the desired site. The role of such a carrier may be played by gold nanoparticles, which are able to bond covalently with many organic substances. This allows the combination of nanoparticles with active substances; therefore, gold nanoparticles are widely used in the preparation of nanocomposites for medical purposes, with special emphasis on drug delivery. Methodology: As part of the presented research, composites consisting of a polymer matrix with gold nanoparticles introduced into the polymer network were synthesized by photopolymerization, using a crosslinking agent and a photoinitiator.
Next, incubation studies were conducted using selected liquids that simulate fluids occurring in the human body; these allow the biocompatibility of the tested composites in the selected environments to be determined. The chemical structure of the composites was then characterized, along with their sorption properties. Conclusions: The research allowed a preliminary characterization of the prepared polymer composites containing gold nanoparticles from the viewpoint of their biomedical use. The tested materials were biocompatible in the tested environments. Moreover, the synthesized composites exhibited relatively high swelling capacity, which is essential for their potential application as drug carriers: during such an application, the composite swells and at the same time releases the active substance introduced into its interior, so the swelling ability of such a material must be verified. Acknowledgements: The authors would like to thank The National Science Centre (Grant no: UMO-2016/21/D/ST8/01697) for providing financial support to this project. This paper is based upon work from COST Action (CA18113), supported by COST (European Cooperation in Science and Technology).
Keywords: nanocomposites, gold nanoparticles, drug carriers, swelling properties
Procedia PDF Downloads 118
7179 A Practice of Zero Trust Architecture in Financial Transactions
Authors: Liwen Wang, Yuting Chen, Tong Wu, Shaolei Hu
Abstract:
To enhance the security of critical financial infrastructure, this study transforms the architecture of a financial trading terminal into a zero trust architecture (ZTA), constructs an active defense system for cybersecurity, improves the security level of trading services in the Internet environment, and enhances the ability to prevent network attacks and unknown risks. This study introduces the software-defined perimeter (SDP) technology of ZTA and adapts and applies it to a financial trading terminal to achieve security optimization and fine-grained, graded control of business functions. The upgraded terminal architecture moves security protection forward to the user access layer and replaces the VPN to optimize remote access, significantly improving the security protection of Internet transactions. The study achieves: 1. deep integration with the access control architecture of the transaction system; 2. no impact on the performance of terminals and gateways, and no user-perceptible application system upgrades; 3. customized checklist and policy configuration; 4. the introduction of industry-leading security technologies such as single-packet authorization (SPA) and secondary authentication. This work is a successful application of ZTA in the field of financial trading and provides transformation ideas for similar systems while improving the security level of financial transaction services in the Internet environment.
Keywords: zero trust, trading terminal, architecture, network security, cybersecurity
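The single-packet authorization idea mentioned above can be illustrated with a short sketch: the client sends one self-authenticating packet, and the gateway silently drops anything unsigned or stale. This is a toy model; production SDP/SPA implementations use encrypted payloads and one-time counters, and all names and parameter values here are invented.

```python
import hmac
import hashlib
import json

# Illustrative only: a real deployment would provision per-device secrets
# and encrypt the packet body rather than send it in the clear.
SHARED_KEY = b"per-device-provisioned-secret"

def make_spa_packet(user: str, service: str, now: float) -> bytes:
    """Build a single self-authenticating request packet."""
    body = json.dumps({"user": user, "service": service, "ts": now}).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return body + b"|" + tag.encode()

def verify_spa_packet(packet: bytes, now: float, max_age: float = 30.0) -> bool:
    """Gateway side: accept only authentic, fresh packets."""
    body, _, tag = packet.rpartition(b"|")
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag.decode()):
        return False                      # forged or corrupted packet
    ts = json.loads(body)["ts"]
    return (now - ts) <= max_age          # reject stale (replayed) packets

pkt = make_spa_packet("trader01", "quote-gateway", now=1_700_000_000.0)
ok = verify_spa_packet(pkt, now=1_700_000_010.0)     # fresh: accepted
stale = verify_spa_packet(pkt, now=1_700_000_100.0)  # 100 s later: rejected
```

The gateway never answers a failed check, so the protected service stays invisible to unauthenticated scanners, which is the core "default deny" property of SPA.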
Procedia PDF Downloads 173
7178 Quantifying and Prioritizing Agricultural Residue Biomass Energy Potential in Ethiopia
Authors: Angesom Gebrezgabiher Tesfay, Afafaw Hailesilasie Tesfay, Muyiwa Samuel Adaramola
Abstract:
Ethiopia's booming energy demand, currently met mainly by traditional biomass and imported conventional fuels, calls for sustainable fuel options, and closing the supply gap will require contributions from all renewable sources. Identifying resources and estimating their potential is therefore vital to the sector. This study provides an in-depth assessment to quantify, prioritize, and analyze agricultural residue biomass energy and its characteristic forms, since managing and modernizing biomass use requires reliable information on resource quantity and characteristics. Five years of crop yield data for thirteen crops were collected, and residue-to-product conversion factors for their 20 residues were surveyed from the literature. The residue amounts potentially available for energy, and the corresponding energy, were then estimated region-wise, crop-wise, and residue-wise, and their shares compared. The resources' value for energy was analyzed from two perspectives and prioritized accordingly. The gross potential is estimated at 495 PJ, equivalent to 12 million tons of oil or 17 million tons of coal. At 30% collection efficiency, this equals the country's conventional fuel imports in 2018. Maize and sorghum stand out in both potential and spatial availability, while cotton and maize presented the highest potential values for energy from the application and resource perspectives, respectively. The Oromia and Amhara regions contribute the most. Future management will require a prospective study of resource collection and application trends.
Keywords: crop residue, biomass potential, biomass resource, Ethiopian energy
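The estimation step described above amounts to multiplying crop yields by residue-to-product ratios and heating values, then applying a collection efficiency. The sketch below shows that arithmetic with invented placeholder factors, not the study's surveyed values.

```python
# Illustrative residue-to-product ratios (RPR) and lower heating values
# (LHV, MJ/kg). The figures below are placeholders for demonstration,
# not the factors surveyed in the study.
CROPS = {
    #           yield (t/yr), RPR,  LHV
    "maize":    (9_000_000,   2.0,  17.5),
    "sorghum":  (5_000_000,   1.8,  16.8),
    "wheat":    (4_500_000,   1.3,  17.0),
}

def residue_energy_pj(crops: dict, collection_efficiency: float = 1.0) -> float:
    """Gross (or recoverable) residue energy in petajoules."""
    total_mj = 0.0
    for yield_t, rpr, lhv in crops.values():
        residue_kg = yield_t * rpr * 1000.0          # tonnes -> kg
        total_mj += residue_kg * lhv * collection_efficiency
    return total_mj / 1e9                            # MJ -> PJ

gross = residue_energy_pj(CROPS)          # theoretical potential
recoverable = residue_energy_pj(CROPS, 0.30)  # at 30 % collection efficiency
```

Running the same function per region instead of per crop gives the regional shares the abstract compares.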
Procedia PDF Downloads 129
7177 Polyvinylidene Fluoride-Polyaniline Films for Improved Dielectric Properties
Authors: Anjana Jain, S. Jayanth Kumar
Abstract:
Polyvinylidene fluoride (PVDF) is well known for its remarkable mechanical properties, chemical resistance, and superior ferroelectric performance, which give PVDF potential for application in supercapacitor devices. The dielectric properties of PVDF, however, are not very high. To improve them, piezoelectric polymer nanocomposites were prepared without affecting the other useful properties of PVDF, with polyaniline (PANI) chosen as the filler material. PVDF-PANI nanocomposite films were prepared by the solvent cast method with volume fractions of PANI varying from 0.04% to 0.048%. The films were characterized for structural, mechanical, and surface morphological properties using X-ray diffraction, differential scanning calorimetry, Raman spectra, infrared spectra, tensile testing, and scanning electron microscopy. The X-ray diffraction analysis shows that the prepared films were in the β-phase, and the DSC scans indicated that the degree of crystallinity in PVDF-PANI is improved. The Raman and infrared spectra further confirm the β-phase of the PVDF-PANI films. Tensile properties of the PVDF-PANI films were in good agreement with those reported in the literature. The surface features show that PANI is uniformly distributed in PVDF and also results in the disappearance of spherulites. The influence of the volume fraction of PANI in PVDF on the dielectric properties was analyzed: the dielectric permittivity of PVDF-PANI (120) was much higher than that of PVDF (12). The sensitivity of these films was studied under applied pressure, and a constant output voltage was obtained.
Keywords: dielectric properties, PANI, PVDF, smart materials
Procedia PDF Downloads 442
7176 Vehicle Speed Estimation Using Image Processing
Authors: Prodipta Bhowmik, Poulami Saha, Preety Mehra, Yogesh Soni, Triloki Nath Jha
Abstract:
In India, the smart city concept is growing day by day, and smart city development requires a better traffic management and monitoring system. Road accidents are increasing as more vehicles take to the roads, with reckless driving responsible for a large share of them, so an efficient traffic management system is needed to control traffic speed on all kinds of roads, where the speed limit varies from road to road. Radar systems have been used previously, but their high cost and limited precision have kept them from becoming favored in traffic management, and how to solve the problems such systems face has become a research topic in its own right. This paper proposes a computer vision and machine learning-based automated system for multiple vehicle detection, tracking, and speed estimation using image processing. Detecting vehicles and estimating their speed from a real-time video is a difficult task, and the objective of this paper is to do both as accurately as possible. To this end, a real-time video is first captured, frames are extracted from the video, vehicles are detected in the frames, tracking of the vehicles starts, and finally the speed of the moving vehicles is estimated. The goal is to develop a cost-friendly system that can detect multiple types of vehicles at the same time.
Keywords: OpenCV, Haar Cascade classifier, DLIB, YOLOV3, centroid tracker, vehicle detection, vehicle tracking, vehicle speed estimation, computer vision
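The last step of the pipeline, estimating speed from tracked positions, reduces to simple geometry once detection (e.g. YOLOv3 or a Haar cascade) and the centroid tracker have produced per-frame positions. A minimal sketch, with invented calibration values (frame rate, metres per pixel):

```python
import math

# Sketch of the final estimation step only. The fps and metres-per-pixel
# scale are camera calibration inputs and are invented for this example.

def track_speed_kmh(centroids, fps: float, metres_per_pixel: float) -> float:
    """Average speed over a centroid track (pixel coordinates, one per frame)."""
    dist_px = sum(math.dist(a, b) for a, b in zip(centroids, centroids[1:]))
    seconds = (len(centroids) - 1) / fps
    return (dist_px * metres_per_pixel / seconds) * 3.6   # m/s -> km/h

# A vehicle moving 10 px/frame at 25 fps with 0.05 m/px: 12.5 m/s = 45 km/h.
track = [(x, 100.0) for x in range(0, 110, 10)]
speed = track_speed_kmh(track, fps=25.0, metres_per_pixel=0.05)
```

In a real deployment the pixel-to-metre scale varies across the image with perspective, so the calibration is usually a homography rather than a single constant.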
Procedia PDF Downloads 86
7175 Beam Spatio-Temporal Multiplexing Approach for Improving Control Accuracy of High Contrast Pulse
Authors: Ping Li, Bing Feng, Junpu Zhao, Xudong Xie, Dangpeng Xu, Kuixing Zheng, Qihua Zhu, Xiaofeng Wei
Abstract:
In laser-driven inertial confinement fusion (ICF), control of the temporal shape of the laser pulse is key to ensuring an optimal laser-target interaction, and one of the main difficulties is the control accuracy of the foot part of a high contrast pulse. Based on an analysis of pulse perturbation during amplification and frequency conversion in high power lasers, an approach of beam spatio-temporal multiplexing is proposed to improve the control precision of high contrast pulses. In this approach, the foot and peak parts of the pulse are controlled independently: they propagate separately in the near field and combine in the far field to form the required pulse shape. For a high contrast pulse, the beam area ratio of the two parts is optimized so that the beam fluence and intensity of the foot part are increased, which greatly eases control of the pulse. Meanwhile, the near-field distribution of the two parts is carefully designed so that their F-numbers are the same, another important parameter for laser-target interaction. Integrated calculations show that for a pulse with a contrast of up to 500, the deviation of the foot part can be reduced from 20% to 5% by using the beam spatio-temporal multiplexing approach with a beam area ratio of 1/20, almost the same deviation as that of the peak part. These results are expected to bring a breakthrough in the power balance of high power laser facilities.
Keywords: inertial confinement fusion, laser pulse control, beam spatio-temporal multiplexing, power balance
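The benefit of giving the foot part its own sub-aperture can be checked with back-of-envelope arithmetic using the figures quoted in the abstract (contrast 500, area ratio 1/20). The energy-balance formula below is a simplification for illustration, not the paper's integrated calculation.

```python
# Back-of-envelope check of the multiplexing benefit.

def foot_fluence_ratio(contrast: float, foot_area_fraction: float) -> float:
    """Foot-to-peak fluence ratio when the foot gets its own sub-aperture.

    With a shared aperture the foot fluence is 1/contrast of the peak's;
    concentrating the foot energy into a fraction of the near field raises
    its fluence by a factor of 1/foot_area_fraction.
    """
    return (1.0 / contrast) / foot_area_fraction

shared = foot_fluence_ratio(500.0, 1.0)        # 0.002: hard to shape precisely
multiplexed = foot_fluence_ratio(500.0, 1 / 20)  # 0.04: a 20x higher foot fluence
```

Raising the foot fluence from 1/500 to 1/25 of the peak's brings it into a range where shaping hardware can resolve it, which is the intuition behind the improved 5% deviation.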
Procedia PDF Downloads 150
7174 An Amended Method for Assessment of Hypertrophic Scars Viscoelastic Parameters
Authors: Iveta Bryjova
Abstract:
Recording viscoelastic strain-vs-time curves with the aid of the suction method, followed by an analysis that yields standard viscoelastic parameters, is a significant technique for non-invasive contact diagnostics of the mechanical properties of skin and assessment of its condition, particularly in acute burns, hypertrophic scarring (the most common complication of burn trauma), and reconstructive surgery. To eliminate the contribution of skin thickness, the usable viscoelastic parameters deduced from the strain-vs-time curves are restricted to relative ones (those expressed as a ratio of two dimensional parameters), such as gross elasticity, net elasticity, biological elasticity, or Qu's area parameters, conventionally referred to in the literature and in practice as R2, R5, R6, R7, Q1, Q2, and Q3. With the exception of parameters R2 and Q1, these depend substantially on the position of the inflection point separating the elastic linear and viscoelastic segments of the strain-vs-time curve. The standard algorithm implemented in commercially available devices relies heavily on the experimental observation that the inflection occurs about 0.1 s after the suction is switched on or off, which undermines the credibility of parameters thus obtained. Although Qu's US 7,556,605 patent suggests a method for improving the precision of the inflection determination, there is still room for non-negligible improvement. In this contribution, a novel method of inflection point determination utilizing the advantageous properties of Savitzky–Golay filtering is presented. The method allows computation of derivatives of the smoothed strain-vs-time curve, more exact location of the inflection, and consequently more reliable values of the aforementioned viscoelastic parameters.
The improved applicability of the five inflection-dependent relative viscoelastic parameters is demonstrated by recasting a former study under the new method and comparing its results with those provided by the methods used so far.
Keywords: Savitzky–Golay filter, scarring, skin, viscoelasticity
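The core of the proposed method, differentiating a Savitzky–Golay-smoothed curve and locating the inflection, can be sketched as follows. The synthetic sigmoid and the window size are illustrative; the clinical strain-vs-time data and the authors' exact procedure are not reproduced here.

```python
import math

# Savitzky-Golay second-derivative convolution weights for a 5-point window
# and a quadratic fit; the estimate at the window centre is dot(w, y) / (7 h^2).
SG_D2 = [2.0, -1.0, -2.0, -1.0, 2.0]

def sg_second_derivative(y, h):
    """Smoothed second derivative, aligned with y[2:-2]."""
    out = []
    for i in range(2, len(y) - 2):
        acc = sum(w * y[i + k - 2] for k, w in enumerate(SG_D2))
        out.append(acc / (7.0 * h * h))
    return out

def inflection_index(y, h):
    """Index into y where curvature changes sign (concave-up -> concave-down)."""
    d2 = sg_second_derivative(y, h)
    for i in range(len(d2) - 1):
        if d2[i] > 0.0 >= d2[i + 1]:
            return i + 2          # shift back to y's indexing
    return None

# Synthetic sigmoid with its true inflection at t = 1.0:
h = 0.02
ts = [i * h for i in range(101)]
y = [1.0 / (1.0 + math.exp(-8.0 * (t - 1.0))) for t in ts]
idx = inflection_index(y, h)      # expect a time near 1.0
```

Because the weights come from a local least-squares fit, the derivative estimate is far less noise-sensitive than finite differences, which is what makes the zero crossing a stable inflection locator.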
Procedia PDF Downloads 304
7173 Machine Learning Prediction of Compressive Damage and Energy Absorption in Carbon Fiber-Reinforced Polymer Tubular Structures
Authors: Milad Abbasi
Abstract:
Carbon fiber-reinforced polymer (CFRP) composite structures are increasingly used in the automotive industry due to their light weight and specific energy absorption capabilities. Because composite mechanical properties are difficult to predict directly by theoretical methods, a variety of studies have sought to simulate the energy-absorbing behavior of CFRP structures accurately. In this research, axial compression experiments were carried out on hand lay-up unidirectional CFRP composite tubes. The fabrication method allowed the authors to extract the material properties of the CFRPs using the ASTM D3039, D3410, and D3518 standards. A neural network machine learning algorithm was then used to build a robust prediction model forecasting the axial compressive properties of CFRP tubes while reducing costly experimental effort. The predicted results were compared with the experimental outcomes in terms of load-carrying capacity and energy absorption capability and showed high accuracy and precision in predicting the energy-absorption capacity of the CFRP tubes. This research also demonstrates the effectiveness and challenges of machine learning techniques in the robust simulation of composites' energy-absorption behavior. Notably, the proposed method considerably condensed the numerical and experimental effort needed to simulate and calibrate CFRP composite tubes subjected to compressive loading.
Keywords: CFRP composite tubes, energy absorption, crushing behavior, machine learning, neural network
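The quantities such a surrogate model is trained to predict can be computed directly from a crush-test record. The sketch below derives absorbed energy (trapezoidal area under the load-displacement curve) and specific energy absorption (SEA) from invented test points; the paper's network itself is not reproduced here.

```python
# The load-displacement points below are invented for illustration; real
# values would come from the quasi-static axial compression experiments.

def absorbed_energy_kj(disp_mm, load_kn):
    """Trapezoidal area under the load-displacement curve, in kJ."""
    e = 0.0
    for (x0, f0), (x1, f1) in zip(zip(disp_mm, load_kn),
                                  zip(disp_mm[1:], load_kn[1:])):
        e += 0.5 * (f0 + f1) * (x1 - x0)   # kN * mm = J
    return e / 1000.0                       # J -> kJ

def sea_kj_per_kg(energy_kj, crushed_mass_kg):
    """Specific energy absorption: energy per unit crushed mass."""
    return energy_kj / crushed_mass_kg

disp = [0.0, 5.0, 10.0, 20.0, 40.0]   # mm
load = [0.0, 60.0, 45.0, 50.0, 48.0]  # kN (initial peak, then steady crushing)
e = absorbed_energy_kj(disp, load)
sea = sea_kj_per_kg(e, crushed_mass_kg=0.12)
```

A trained network maps laminate and geometry descriptors to these two scalars, so computing them consistently from the raw curves is what defines the training labels.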
Procedia PDF Downloads 156
7172 Suppressing Vibration in a Three-axis Flexible Satellite: An Approach with Composite Control
Authors: Jalal Eddine Benmansour, Khouane Boulanoir, Nacera Bekhadda, Elhassen Benfriha
Abstract:
This paper introduces a novel composite control approach that addresses the challenge of stabilizing the three-axis attitude of a flexible satellite in the presence of vibrations caused by flexible appendages. The key contribution of this research lies in the development of a disturbance observer, which effectively observes and estimates the unwanted torques induced by the vibrations. By utilizing the estimated disturbance, the proposed approach enables efficient compensation for the detrimental effects of vibrations on the satellite system. To govern the attitude angles of the spacecraft, a proportional derivative controller (PD) is specifically designed and proposed. The PD controller ensures precise control over all attitude angles, facilitating stable and accurate spacecraft maneuvering. In order to demonstrate the global stability of the system, the Lyapunov method, a well-established technique in control theory, is employed. Through rigorous analysis, the Lyapunov method verifies the convergence of system dynamics, providing strong evidence of system stability. To evaluate the performance and efficacy of the proposed control algorithm, extensive simulations are conducted. The simulation results validate the effectiveness of the combined approach, showcasing significant improvements in the stabilization and control of the satellite's attitude, even in the presence of disruptive vibrations from flexible appendages. This novel composite control approach presented in this paper contributes to the advancement of satellite attitude control techniques, offering a promising solution for achieving enhanced stability and precision in challenging operational environments.Keywords: attitude control, flexible satellite, vibration control, disturbance observer
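The composite scheme, a PD law plus a disturbance observer that reconstructs the unwanted torque from measured rates, can be illustrated on a single rigid axis. All numerical values (inertia, gains, disturbance) are invented, and the flexible-appendage dynamics are deliberately reduced to a constant disturbance torque.

```python
# Toy single-axis model of the composite controller.

J, DT = 10.0, 0.01        # inertia (kg m^2), integration step (s)
KP, KD = 20.0, 15.0       # PD gains (invented)
D_TRUE = 0.5              # unknown disturbance torque (N m), constant here

theta, omega = 0.1, 0.0   # initial attitude error (rad) and rate (rad/s)
d_hat, u_prev, omega_prev = 0.0, 0.0, omega

for _ in range(2000):     # 20 s of simulated motion
    # Observer: from the torque balance J*domega/dt = u + d,
    # estimate d as J*domega/dt - u using the previous step.
    d_hat = J * (omega - omega_prev) / DT - u_prev
    u = -KP * theta - KD * omega - d_hat   # PD law plus disturbance cancelling
    omega_prev, u_prev = omega, u
    omega += DT * (u + D_TRUE) / J         # semi-implicit Euler integration
    theta += DT * omega
```

Without the `- d_hat` term the PD loop settles with a steady-state offset of `D_TRUE / KP`; cancelling the estimated disturbance drives the attitude error to zero, which is the point of the composite design.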
Procedia PDF Downloads 89
7171 Constraining the Potential Nickel Laterite Area Using Geographic Information System-Based Multi-Criteria Rating in Surigao Del Sur
Authors: Reiner-Ace P. Mateo, Vince Paolo F. Obille
Abstract:
The traditional method of classifying potential mineral resources requires a significant amount of time and money. This paper presents an alternative way to classify potential mineral resources using a GIS application in Surigao del Sur. Three analog map data inputs were integrated into the GIS: a geologic map, a topographic map, and a land cover/vegetation map. The indicators used in the classification of potential nickel laterite are a geologic indicator, namely the presence of ultramafic rock, from the geologic map; a slope indicator and the presence of plateau edges from the topographic map; and areas of forest land, grassland, and shrubland from the land cover/vegetation map. The mineral potential of the area was classified from low up to very high. The resulting mineral potential classification map of Surigao del Sur estimates that its ultramafic terrains comprise 4.63% low, 42.15% medium, 43.34% high, and 9.88% very high nickel laterite potential. For validation, the map was compared with known occurrences of nickel laterite in the area using a nickel mining tenement map together with remote sensing, and three prominent nickel mining companies were delineated in the study area. The generated potential classification map may aid mining companies currently in the exploration phase in the study area, while the currently operating nickel mines can help validate the reliability of the mineral classification map produced.
Keywords: mineral potential classification, nickel laterites, GIS, remote sensing, Surigao del Sur
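The multi-criteria rating reduces to scoring each raster cell on the three indicators and binning the summed score into the four potential classes. The rating tables and thresholds below are invented placeholders, not the study's actual criteria.

```python
# Hypothetical per-indicator ratings (0-3); a real overlay would weight the
# study's surveyed criteria, and the classes would be calibrated to it.
SLOPE_RATING = {"flat": 1, "plateau_edge": 3, "steep": 0}
COVER_RATING = {"forest": 2, "grassland": 3, "shrubland": 2, "builtup": 0}

def rate_cell(is_ultramafic: bool, slope: str, cover: str) -> str:
    """Bin one raster cell's summed indicator score into a potential class."""
    score = ((3 if is_ultramafic else 0)
             + SLOPE_RATING[slope]
             + COVER_RATING[cover])
    if score >= 8:
        return "very high"
    if score >= 6:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

best = rate_cell(True, "plateau_edge", "grassland")   # 3 + 3 + 3 = 9
mid = rate_cell(True, "flat", "builtup")              # 3 + 1 + 0 = 4
worst = rate_cell(False, "steep", "builtup")          # 0
```

Applying `rate_cell` over every cell of the co-registered rasters yields the classification map; the class shares (4.63%, 42.15%, ...) are then just cell counts per class.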
Procedia PDF Downloads 126
7170 Unveiling Irregular Migration: An Evaluation of Airport Interventions and Geographic Trends in Sri Lanka
Authors: Abewardhana Arachchi Bandula Dimuthu Priyadarshana Abewardhana, Rasika Nirosh Gonapinuwala Vithanage, Karawe Thanthreege Amila Madusanka Perera, Asanka Sanjeewa Karunarathne, Navullage Mayuri Radhika Perera
Abstract:
The phenomenon of irregular migration and human trafficking presents multifaceted challenges to Sri Lanka, with specific focus on the migration routes to the United Arab Emirates (UAE), the Sultanate of Oman, and Malaysia. This research critically assesses the efficacy of a pilot project instituted at Bandaranaike International Airport aimed at the identification and deterrence of potential irregular migrants. Additionally, the study conducts a nuanced analysis of the geographical tendencies pertaining to passengers who revise their migration intentions at the airport. Pertinently, the findings indicate that Colombo and Gampaha Districts emerge as the most susceptible to human trafficking, with Galle, Nuwaraeliya, Rathnapura, and Polonnaruwa Districts following as areas of elevated concern, particularly within the framework of the 'visit visa' scenario. These insights emanate from an extensive data collection period spanning 50 days of the pilot project, encompassing 1,479 passengers, of which 46 returnees reported to the Safe Migration Promotion Unit. The research is founded on the twin objectives of comprehending the motivations of passengers and evaluating the effectiveness of interventions, with a view to devising precision-targeted prevention strategies. Through this endeavor, the study actively contributes to the safeguarding of the rights and welfare of migrants, significantly advancing the ongoing battle against irregular migration.Keywords: irregular migration, human trafficking, airport interventions, geographic trends
Procedia PDF Downloads 87
7169 Evaluation of Strategies to Mitigate the Carbon Emissions from MSW: A Case Study
Authors: N. Anusree, P. Sughosh, G. L. Sivakumar Babu
Abstract:
Municipalities throughout the world face serious issues related to municipal solid waste (MSW) collection, treatment, and safe disposal. While the waste management sector contributes around 3-9% of overall anthropogenic methane emissions, measures to mitigate these emissions rarely receive attention in developing countries. In Bangalore, India, around 5,680 tons of MSW are generated per day, with a collection efficiency of around 90-95% but a treatment efficiency of only about 26.4%. About 33.4% of the waste collected is landfilled directly without any treatment, further aggravating the situation. This study evaluates the potential to reduce the emissions emanating from Bangalore's MSW without severe consequences for current MSW management practices. Three emission scenarios are evaluated and compared: the baseline condition (current practices, case 1), the application of biocovers for methane oxidation at the dumpsites (case 2), and the diversion of the organic fraction of MSW (OFMSW) together with the application of biocovers (case 3). The emissions are calculated from the aerobic and anaerobic stoichiometric relations for the three scenarios. Laboratory-scale column studies were carried out to determine the methane oxidation potential of three different biocover materials: digested mechanically-biologically treated (MBT) waste, fresh MBT waste, and charcoal amended with fresh MBT waste. The results show that reductions in carbon emissions of around 40% and 83% can be achieved in cases 2 and 3, respectively, compared to the baseline condition. The study clearly shows that with minor changes in waste management practices, substantial reductions in carbon emissions can be attained in Bangalore City.
Keywords: MSW, biocover, composting, carbon emission
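The scenario comparison can be sketched with IPCC-style first-order arithmetic: methane generated from the degradable organic carbon in landfilled waste, partially oxidised by a biocover, with organics diverted in the third scenario. All factor values below are illustrative placeholders, not the study's calibrated parameters.

```python
# Simplified scenario comparison; DOC, DOCf, the methane fraction, the
# organic fraction, and the oxidation/diversion rates are all invented.
GWP_CH4 = 28.0            # 100-yr global warming potential of methane

def landfill_ch4_t(msw_t, organic_fraction, doc=0.15, docf=0.5, f_ch4=0.5):
    """Methane (tonnes) generated from one year's landfilled waste."""
    carbon = msw_t * organic_fraction * doc * docf   # t degradable C dissimilated
    return carbon * f_ch4 * (16.0 / 12.0)            # C -> CH4 mass ratio

def scenario_co2e(msw_t, organic_fraction, oxidised=0.0, diverted=0.0):
    """Emitted CO2-equivalent after diversion and biocover oxidation."""
    generated = landfill_ch4_t(msw_t * (1.0 - diverted), organic_fraction)
    emitted = generated * (1.0 - oxidised)
    return emitted * GWP_CH4

LANDFILLED = 5680 * 0.334 * 365   # t/yr sent untreated to dumpsites (abstract)
base = scenario_co2e(LANDFILLED, organic_fraction=0.6)                  # case 1
with_biocover = scenario_co2e(LANDFILLED, 0.6, oxidised=0.4)            # case 2
with_diversion = scenario_co2e(LANDFILLED, 0.6, oxidised=0.4,
                               diverted=0.7)                            # case 3
```

The structure makes the abstract's ordering plain: oxidation alone scales emissions by one factor, while diversion plus oxidation compounds two factors, hence the much larger reduction in case 3.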
Procedia PDF Downloads 133
7168 Application of Laser-Induced Breakdown Spectroscopy for the Evaluation of Concrete on the Construction Site and in the Laboratory
Authors: Gerd Wilsch, Tobias Guenther, Tobias Voelker
Abstract:
In view of the ageing of vital infrastructure facilities, a reliable condition assessment of concrete structures is becoming of increasing interest for asset owners to plan timely and appropriate maintenance and repair interventions. For concrete structures, reinforcement corrosion induced by penetrating chlorides is the dominant deterioration mechanism affecting the serviceability and, eventually, structural performance. The determination of the quantitative chloride ingress is required not only to provide valuable information on the present condition of a structure, but the data obtained can also be used for the prediction of its future development and associated risks. At present, wet chemical analysis of ground concrete samples by a laboratory is the most common test procedure for the determination of the chloride content. As the chloride content is expressed by the mass of the binder, the analysis should involve determination of both the amount of binder and the amount of chloride contained in a concrete sample. This procedure is laborious, time-consuming, and costly. The chloride profile obtained is based on depth intervals of 10 mm. LIBS is an economically viable alternative providing chloride contents at depth intervals of 1 mm or less. It provides two-dimensional maps of quantitative element distributions and can locate spots of higher concentrations like in a crack. The results are correlated directly to the mass of the binder, and it can be applied on-site to deliver instantaneous results for the evaluation of the structure. Examples for the application of the method in the laboratory for the investigation of diffusion and migration of chlorides, sulfates, and alkalis are presented. An example for the visualization of the Li transport in concrete is also shown. These examples show the potential of the method for a fast, reliable, and automated two-dimensional investigation of transport processes. 
Due to the better spatial resolution, more accurate input parameters for model calculations are determined. By the simultaneous detection of elements such as carbon, chlorine, sodium, and potassium, the mutual influence of the different processes can be determined in only one measurement. Furthermore, the application of a mobile LIBS system in a parking garage is demonstrated. It uses a diode-pumped low energy laser (3 mJ, 1.5 ns, 100 Hz) and a compact NIR spectrometer. A portable scanner allows a two-dimensional quantitative element mapping. Results show the quantitative chloride analysis on wall and floor surfaces. To determine the 2-D distribution of harmful elements (Cl, C), concrete cores were drilled, split, and analyzed directly on-site. Results obtained were compared and verified with laboratory measurements. The results presented show that the LIBS method is a valuable addition to the standard procedures - the wet chemical analysis of ground concrete samples. Currently, work is underway to develop a technical code of practice for the application of the method for the determination of chloride concentration in concrete.Keywords: chemical analysis, concrete, LIBS, spectroscopy
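The reduction of a two-dimensional element map to a depth profile, the step that feeds the model calculations mentioned above, can be sketched as a per-row average of calibrated intensity ratios. The calibration constants and the tiny intensity grid below are invented for illustration.

```python
# Hypothetical linear calibration from Cl/reference intensity ratio to
# chloride content; a real system would calibrate against reference samples
# of known composition.
CAL_SLOPE, CAL_OFFSET = 0.8, 0.0

def chloride_depth_profile(cl_map, ref_map):
    """Average Cl/reference ratio per depth row -> chloride content profile."""
    profile = []
    for cl_row, ref_row in zip(cl_map, ref_map):
        ratios = [c / r for c, r in zip(cl_row, ref_row) if r > 0]
        mean_ratio = sum(ratios) / len(ratios)
        profile.append(CAL_SLOPE * mean_ratio + CAL_OFFSET)
    return profile

cl_map = [[9.0, 11.0], [4.0, 6.0], [1.0, 1.0]]     # counts, surface -> depth
ref_map = [[100.0, 100.0], [100.0, 100.0], [100.0, 100.0]]
profile = chloride_depth_profile(cl_map, ref_map)  # decreases with depth
```

Because each scan row corresponds to a millimetre-scale depth increment, the resulting profile is far finer than the 10 mm intervals of ground-sample wet chemistry, which is the resolution advantage the abstract emphasises.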
Procedia PDF Downloads 106