Search results for: whale optimization algorithm
544 Optimization Approach to Integrated Production-Inventory-Routing Problem for Oxygen Supply Chains
Authors: Yena Lee, Vassilis M. Charitopoulos, Karthik Thyagarajan, Ian Morris, Jose M. Pinto, Lazaros G. Papageorgiou
Abstract:
With globalisation, the need to have better coordination of production and distribution decisions has become increasingly important for industrial gas companies in order to remain competitive in the marketplace. In this work, we investigate a problem that integrates production, inventory, and routing decisions in a liquid oxygen supply chain. The oxygen supply chain consists of production facilities, external third-party suppliers, and multiple customers, including hospitals and industrial customers. The product produced by the plants or sourced from the competitors, i.e., third-party suppliers, is distributed by a fleet of heterogeneous vehicles to satisfy customer demands. The objective is to minimise the total operating cost involving production, third-party, and transportation costs. The key decisions for production include production and inventory levels and the product amount from third-party suppliers. In contrast, the distribution decisions involve customer allocation, delivery timing, delivery amount, and vehicle routing. The optimisation of the coordinated production, inventory, and routing decisions is a challenging problem, especially when dealing with large-size problems. Thus, we present a two-stage procedure to solve the integrated problem efficiently. First, the problem is formulated as a mixed-integer linear programming (MILP) model by simplifying the routing component. The solution from the first-stage MILP model yields the optimal customer allocation, production and inventory levels, and delivery timing and amount. Then, we fix the previous decisions and solve a detailed routing problem. In the second stage, we propose a column generation scheme to address the computational complexity of the resulting detailed routing problem. A case study considering a real-life oxygen supply chain in the UK is presented to illustrate the capability of the proposed models and solution method. Furthermore, a comparison of the solutions from the proposed approach with the corresponding solutions provided by existing metaheuristic techniques (e.g., guided local search and tabu search algorithms) is presented to evaluate the efficiency.
Keywords: production planning, inventory routing, column generation, mixed-integer linear programming
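To make the shape of such a first-stage model concrete, the following is a minimal sketch of a production-inventory linear program in Python with PuLP. It is not the authors' formulation: the horizon, demands, capacities and costs are invented, the routing component is left out entirely (as in the paper's first stage, only aggregate supply decisions are kept), and the binary allocation and timing decisions of the real model would sit on top of this continuous core.

```python
# Minimal production-inventory model sketch (illustrative assumptions only;
# demands, capacities and costs are invented, not the paper's data).
import pulp

periods = range(4)                       # planning horizon (assumed: 4 periods)
demand = [120, 150, 90, 130]             # aggregate demand per period (assumed)
cap, third_cap = 140, 60                 # plant / third-party limits (assumed)
c_prod, c_third, c_hold = 1.0, 1.8, 0.1  # unit costs (assumed)

m = pulp.LpProblem("oxygen_pip", pulp.LpMinimize)
p = pulp.LpVariable.dicts("produce", periods, lowBound=0, upBound=cap)
s = pulp.LpVariable.dicts("source", periods, lowBound=0, upBound=third_cap)
inv = pulp.LpVariable.dicts("inventory", periods, lowBound=0)

# Objective: production + third-party sourcing + inventory holding costs.
m += pulp.lpSum(c_prod * p[t] + c_third * s[t] + c_hold * inv[t] for t in periods)

# Inventory balance: carried stock plus this period's supply covers demand.
for t in periods:
    prev = inv[t - 1] if t > 0 else 0
    m += prev + p[t] + s[t] - demand[t] == inv[t]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print([p[t].value() for t in periods], pulp.value(m.objective))
```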
Procedia PDF Downloads 112
543 Location Uncertainty – A Probabilistic Solution for Automatic Train Control
Authors: Monish Sengupta, Benjamin Heydecker, Daniel Woodland
Abstract:
New train control systems rely mainly on Automatic Train Protection (ATP) and Automatic Train Operation (ATO) to control speed dynamically and hence performance. The ATP and the ATO form the vital element within the CBTC (Communication Based Train Control) and the ERTMS (European Rail Traffic Management System) system architectures. Reliable and accurate measurement of train location, speed and acceleration is vital to the operation of train control systems. In the past, all CBTC and ERTMS systems have deployed a balise or equivalent to correct the uncertainty element of the train location. Typically a CBTC train is allowed to miss only one balise on the track, after which the Automatic Train Protection (ATP) system applies the emergency brake to halt the service. This is because the location uncertainty, which grows within the train control system, cannot tolerate missing more than one balise. Balises contribute significantly to wayside maintenance costs, and studies have shown that balises on the track also form a constraint on future track layout changes and changes in speed profile. This paper investigates the causes of the location uncertainty that is currently experienced and considers whether it is possible to identify an effective filter to ascertain, in conjunction with appropriate sensors, more accurate speed, distance and location for a CBTC driven train without the need for any external balises. An appropriate sensor fusion algorithm and intelligent sensor selection methodology will be deployed to ascertain the railway location and speed measurement at its highest precision. Similar techniques are already in use in aviation, satellite, submarine and other navigation systems. Developing a model for the speed control and the use of the Kalman filter is a key element in this research. This paper will summarize the research undertaken and its significant findings, highlighting the potential for introducing alternative approaches to train positioning that would enable removal of all trackside location correction balises, leading to a huge reduction in maintenance and more flexibility in future track design.
Keywords: ERTMS, CBTC, ATP, ATO
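As an illustration of the filtering idea named here, the following is a minimal one-dimensional Kalman filter in Python. The constant-velocity motion model, the noise levels and the speed readings are all assumptions made for the example, not the study's models or data.

```python
# Minimal 1-D train-positioning Kalman filter (constant-velocity model).
# All noise levels and measurements are illustrative assumptions.
import numpy as np

dt = 0.1                                  # update interval [s] (assumed)
F = np.array([[1, dt], [0, 1]])           # state transition: [position, speed]
H = np.array([[0.0, 1.0]])                # only speed is measured (e.g. tachometer)
Q = np.diag([0.01, 0.1])                  # process noise (assumed)
R = np.array([[0.5]])                     # speed-sensor noise (assumed)

x = np.array([[0.0], [20.0]])             # initial state: 0 m, 20 m/s
P = np.eye(2)                             # initial covariance

def kalman_step(x, P, z):
    # Predict: propagate the state and grow the uncertainty.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend the prediction with the speed measurement z.
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [20.3, 19.8, 20.1, 20.6]:        # noisy speed readings (assumed)
    x, P = kalman_step(x, P, np.array([[z]]))
print(x.ravel())                           # fused position and speed estimate
```

Note that with only a speed sensor the position variance in P grows from step to step; an absolute position fix, today provided by a balise, is what resets it. That growth is precisely the uncertainty the paper sets out to control by sensor fusion.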
Procedia PDF Downloads 410
542 Optimization of Chitosan Membrane Production Parameters for Zinc Ion Adsorption
Authors: Peter O. Osifo, Hein W. J. P. Neomagus, Hein V. D. Merwe
Abstract:
Chitosan materials from different sources of raw materials were characterized in order to determine optimal preparation conditions and parameters for membrane production. The membrane parameters such as molecular weight, viscosity, and degree of deacetylation were used to evaluate the membrane performance for zinc ion adsorption. The molecular weight of the chitosan was found to influence the viscosity of the chitosan/acetic acid solution. An increase in molecular weight (60000-400000 kg.kmol-1) of the chitosan resulted in a higher viscosity (0.05-0.65 Pa.s) of the chitosan/acetic acid solution. The effect of the degree of deacetylation on the viscosity is not significant. The effect of the membrane production parameters (chitosan and acetic acid concentration) on the viscosity is mainly determined by the chitosan concentration. For higher chitosan concentrations, a membrane with a better adsorption capacity was obtained. The membrane adsorption capacity increases from 20-130 mg Zn per gram of wet membrane for an increase in chitosan concentration from 2-7 mass %. Chitosan concentrations below 2 and above 7.5 mass % produced membranes that lack good mechanical properties. The optimum manufacturing conditions, including chitosan concentration, acetic acid concentration, sodium hydroxide concentration and cross-linking, for chitosan membranes within the workable range were defined by the criteria of adsorption capacity and flux. The adsorption increases (50-120 mg.g-1) as the acetic acid concentration increases (1-7 mass %). The sodium hydroxide concentration seems not to have a large effect on the adsorption characteristics of the membrane; however, a maximum was reached at a concentration of 5 mass %. The adsorption capacity per gram of wet membrane strongly increases with the chitosan concentration in the acetic acid solution but remains constant per gram of dry chitosan. The optimum solution for membrane production consists of 7 mass % chitosan and 4 mass % acetic acid in de-ionised water. The sodium hydroxide concentration for phase inversion is at its optimum at 5 mass %. The optimum cross-linking time was determined to be 6 hours (percentage cross-linking of 18%). As the cross-linking time increases, the adsorption of the zinc decreases (150-50 mg.g-1) in the time range of 0 to 12 hours. After a cross-linking time of 12 hours, the adsorption capacity remains constant. This trend is comparable to the effect on flux through the membrane. The flux decreases (10-3 L.m-2.hr-1) with an increase in cross-linking time in the range of 0 to 12 hours and reaches a constant minimum after 12 hours.
Keywords: chitosan, membrane, waste water, heavy metal ions, adsorption
Procedia PDF Downloads 387
541 Prevalence of Pretreatment Drug HIV-1 Mutations in Moscow, Russia
Authors: Daria Zabolotnaya, Svetlana Degtyareva, Veronika Kanestri, Danila Konnov
Abstract:
An adequate choice of the initial antiretroviral treatment determines the treatment efficacy. In the clinical guidelines in Russia, non-nucleoside reverse transcriptase inhibitors (NNRTIs) are still considered an option for first-line treatment, while pretreatment drug resistance (PDR) testing is not routinely performed. We conducted a retrospective cohort study in HIV-positive treatment-naïve patients of the H-clinic (Moscow, Russia) who underwent PDR testing from July 2017 to November 2021. All the information was obtained from the medical records anonymously. We analyzed the mutations in the reverse transcriptase and protease genes. RT sequences were obtained by the AmpliSens HIV-Resist-Seq kit. Drug resistance was defined using the HIVdb Program v. 8.9-1. PDR was estimated using the Stanford algorithm. Descriptive statistics were performed in Excel (Microsoft Office, 2019). A total of 261 HIV-1 infected patients were enrolled in the study, including 197 (75.5%) male and 64 (24.5%) female. The mean age was 34.6±8.3 years. The median CD4 count was 521 cells/µl (IQR 367-687 cells/µl). Data on risk factors for HIV infection were scarce. The total number of strains containing mutations in the reverse transcriptase gene was 75 (28.7%). Of these, 5 (1.9%) mutations were associated with PDR to nucleoside reverse transcriptase inhibitors (NRTIs) and 30 (11.5%) with PDR to NNRTIs. The number of strains with mutations in the protease gene was 43 (16.5%); of these, only 3 (1.1%) mutations were associated with resistance to protease inhibitors. For NNRTIs, the most prevalent PDR mutations were E138A and V106I. Most of the HIV variants exhibited a single PDR mutation; two mutations were found in 3 samples. Most of the HIV variants with a PDR mutation displayed a single drug class resistance mutation; 2/37 (5.4%) strains had both NRTI and NNRTI mutations. There were no strains identified with PDR mutations to all three drug classes. Though earlier data demonstrated a lower level of PDR in the HIV treatment-naïve population in Russia, and our cohort may not be fully representative as it was drawn from a private clinic, it reflects the trend of increasing PDR, especially to NNRTIs. Therefore, we consider it necessary either to perform pretreatment testing or to give priority to other drugs as first-line treatment.
Keywords: HIV, resistance, mutations, treatment
Procedia PDF Downloads 94
540 A Real Time Set Up for Retrieval of Emotional States from Human Neural Responses
Authors: Rashima Mahajan, Dipali Bansal, Shweta Singh
Abstract:
Real time non-invasive Brain Computer Interfaces have a significant progressive role in restoring or maintaining a quality life for medically challenged people. This manuscript provides a comprehensive review of emerging research in the field of cognitive/affective computing in the context of human neural responses. The perspectives of different emotion assessment modalities like facial expressions, speech, text, gestures, and human physiological responses have also been discussed. Focus has been placed on exploring the ability of EEG (Electroencephalogram) signals to portray thoughts, feelings, and unspoken words. An automated workflow-based protocol to design an EEG-based real time Brain Computer Interface system for analysis and classification of human emotions elicited by external audio/visual stimuli has been proposed. The front-end hardware includes a cost-effective and portable Emotive EEG Neuroheadset unit, a personal computer and a set of external stimulators. Primary signal analysis and processing of real time acquired EEG shall be performed using the MATLAB-based advanced brain mapping toolboxes EEGLab/BCILab. This shall be followed by the development of a MATLAB-based self-defined algorithm to capture and characterize temporal and spectral variations in EEG under emotional stimulation. The extracted hybrid feature set shall be used to classify emotional states using artificial intelligence tools like Artificial Neural Networks. The final system would result in an inexpensive, portable and more intuitive Brain Computer Interface in a real time scenario to control prosthetic devices by translating different brain states into operative control signals.
Keywords: brain computer interface, electroencephalogram, EEGLab, BCILab, emotive, emotions, interval features, spectral features, artificial neural network, control applications
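A minimal sketch of the spectral-feature step described here, using numpy/scipy: Welch band powers per channel are stacked into one feature vector per trial, which would then feed the neural-network classifier. The sampling rate, band limits and dummy signal are conventional assumptions, not the proposed system's actual settings.

```python
# Band-power EEG features via Welch's method (scipy). The band limits and
# array shapes are common conventions, not the authors' exact pipeline.
import numpy as np
from scipy.signal import welch

fs = 128                                   # consumer headsets sample ~128 Hz
eeg = np.random.randn(14, fs * 10)         # 14 channels x 10 s (dummy data)

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(sig):
    # Power spectral density, then mean power inside each frequency band.
    f, pxx = welch(sig, fs=fs, nperseg=fs * 2)
    return [pxx[(f >= lo) & (f < hi)].mean() for lo, hi in bands.values()]

# One feature vector per trial: channels x bands, flattened.
features = np.array([band_powers(ch) for ch in eeg]).ravel()
print(features.shape)                      # (42,) = 14 channels * 3 bands
```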
Procedia PDF Downloads 317
539 DNA Methylation Score Development for In utero Exposure to Paternal Smoking Using a Supervised Machine Learning Approach
Authors: Cristy Stagnar, Nina Hubig, Diana Ivankovic
Abstract:
The epigenome is a compelling candidate for mediating long-term responses to environmental effects modifying disease risk. The main goal of this research is to develop a machine learning-based DNA methylation score, which will be valuable in delineating the unique contribution of paternal epigenetic modifications to the germline impacting childhood health outcomes. It will also be a useful tool in validating self-reports of nonsmoking and in adjusting epigenome-wide DNA methylation association studies for this early-life exposure. Using secondary data from two population-based methylation profiling studies, our DNA methylation score is based on CpG DNA methylation measurements from cord blood gathered from children whose fathers smoked pre- and peri-conceptually. Each child’s mother and father fell into one of three class labels in the accompanying questionnaires: never smoker, former smoker, or current smoker. By applying different machine learning algorithms to the Accessible Resource for Integrated Epigenomic Studies (ARIES) sub-study of the Avon Longitudinal Study of Parents and Children (ALSPAC) data set, which we used for training and testing of our model, the best-performing algorithm for classifying the father-smoker and mother-never-smoker group was selected based on Cohen’s κ. Error in the model was identified and minimized. The final DNA methylation score was further tested and validated in an independent data set. The result is a linear combination, via a logistic link function, of the methylation values of the selected probes that accurately classified each group and contributed the most towards classification. This yields a unique, robust DNA methylation score which combines information on DNA methylation and early-life exposure of offspring to paternal smoking during pregnancy and which may be used to examine the paternal contribution to offspring health outcomes.
Keywords: epigenome, health outcomes, paternal preconception environmental exposures, supervised machine learning
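A minimal sketch of such a score with scikit-learn follows: an L1-penalized logistic regression over CpG beta-values, with out-of-fold predictions scored by Cohen's κ. The data, probe count and label rule are synthetic stand-ins, not the ARIES/ALSPAC data or the study's chosen algorithm.

```python
# Logistic-regression "methylation score" sketch, evaluated with Cohen's
# kappa on synthetic stand-ins for cord-blood CpG beta-values (assumed).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(300, 50))          # 300 children x 50 CpG probes
# Dummy exposure label loosely driven by the first three probes.
y = (X[:, :3].mean(axis=1) + 0.1 * rng.normal(size=300) > 0.5).astype(int)

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
pred = cross_val_predict(clf, X, y, cv=5)      # out-of-fold class predictions
print("Cohen's kappa:", round(cohen_kappa_score(y, pred), 3))

# The fitted score is a linear combination of the selected probes through
# the logistic link: P(exposed) = sigmoid(intercept + coef . beta_values).
clf.fit(X, y)
print("probes retained:", int(np.count_nonzero(clf.coef_)))
```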
Procedia PDF Downloads 185
538 Bionaut™: A Minimally Invasive Microsurgical Platform to Treat Non-Communicating Hydrocephalus in Dandy-Walker Malformation
Authors: Suehyun Cho, Darrell Harrington, Florent Cros, Olin Palmer, John Caputo, Michael Kardosh, Eran Oren, William Loudon, Alex Kiselyov, Michael Shpigelmacher
Abstract:
The Dandy-Walker malformation (DWM) represents a clinical syndrome manifesting as a combination of posterior fossa cyst, hypoplasia of the cerebellar vermis, and obstructive hydrocephalus. Anatomic hallmarks include hypoplasia of the cerebellar vermis, enlargement of the posterior fossa, and cystic dilatation of the fourth ventricle. Current treatments of DWM, including shunting of the cerebrospinal fluid ventricular system and endoscopic third ventriculostomy (ETV), are frequently clinically insufficient, require additional surgical interventions, and carry risks of infections and neurological deficits. Bionaut Labs is developing an alternative way to treat Dandy-Walker malformation (DWM) associated with non-communicating hydrocephalus. We utilize our discrete microsurgical Bionaut™ particles, controlled externally and remotely, to perform safe, accurate, effective fenestration of the Dandy-Walker cyst, specifically in the posterior fossa of the brain, to directly normalize intracranial pressure. Bionaut™ allows for complex non-linear trajectories not feasible by any conventional surgical techniques. The microsurgical particle safely reaches targets in the lower occipital section of the brain. Bionaut™ offers a minimally invasive surgical alternative to highly involved posterior craniotomy or shunts via direct fenestration of the fourth ventricular cyst at the locus defined by the individual anatomy. Our approach offers significant advantages over the current standards of care in patients exhibiting anatomical challenge(s) as a manifestation of DWM and, therefore, is intended to replace conventional therapeutic strategies. Current progress, including platform optimization, Bionaut™ control, and real-time imaging, as well as in vivo safety studies of the Bionauts™ in large animals, specifically the spine and the brain of ovine models, will be discussed.
Keywords: Bionaut™, cerebral spinal fluid, CSF, cyst, Dandy-Walker, fenestration, hydrocephalus, micro-robot
Procedia PDF Downloads 221
537 On Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Primary Distant Metastases Growth
Authors: Ella Tyuryumina, Alexey Neznanov
Abstract:
Finding algorithms to predict the growth of tumors has piqued the interest of researchers ever since the early days of cancer research. A number of studies were carried out in an attempt to obtain reliable data on the natural history of breast cancer growth. Mathematical modeling can play a very important role in the prognosis of the tumor process in breast cancer. However, mathematical models describe primary tumor growth and metastases growth separately. Consequently, we propose a mathematical growth model for the primary tumor and primary metastases which may help to improve the predictive accuracy of breast cancer progression, using an original mathematical model referred to as CoM-IV and corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and primary metastases; 2) developing an adequate and precise CoM-IV which reflects the relations between the primary tumor (PT) and metastases (MTS); 3) analyzing the CoM-IV scope of application; 4) implementing the model as a software tool. The CoM-IV is based on an exponential tumor growth model, consists of a system of determinate nonlinear and linear equations, and corresponds to the TNM classification. It allows the calculation of different growth periods of the primary tumor and primary metastases: 1) the ‘non-visible period’ for the primary tumor; 2) the ‘non-visible period’ for primary metastases; 3) the ‘visible period’ for primary metastases. The new predictive tool: 1) is a solid foundation for future studies of breast cancer models; 2) does not require any expensive diagnostic tests; 3) is the first predictor which makes a forecast using only current patient data, whereas the others are based on additional statistical data. Thus, the CoM-IV model and predictive software: a) detect different growth periods of the primary tumor and primary metastases; b) forecast the period of primary metastases appearance; c) have higher average prediction accuracy than the other tools; d) can improve forecasts of survival in BC and facilitate optimization of diagnostic tests. The following are calculated by CoM-IV: the number of doublings for the ‘non-visible’ and ‘visible’ growth periods of primary metastases, and the tumor volume doubling time (days) for the ‘non-visible’ and ‘visible’ growth periods of primary metastases. The CoM-IV makes it possible, for the first time, to predict the whole natural history of primary tumor and primary metastases growth at each stage (pT1, pT2, pT3, pT4) relying only on primary tumor sizes. Summarizing: a) CoM-IV correctly describes primary tumor and primary distant metastases growth of stage IV (T1-4N0-3M1) disease with (N1-3) or without regional metastases in lymph nodes (N0); b) it facilitates the understanding of the appearance period and manifestation of primary metastases.
Keywords: breast cancer, exponential growth model, mathematical modelling, primary metastases, primary tumor, survival
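For concreteness, the arithmetic underlying any such exponential growth model can be sketched in a few lines: the number of doublings between two measured volumes, and the doubling time they imply. The diameters and time interval below are invented for illustration and are not CoM-IV outputs.

```python
# Doubling bookkeeping for an exponential tumour growth model.
# The measurements and interval below are invented for illustration.
import math

def sphere_volume(d_mm):                   # tumour approximated as a sphere
    return math.pi * d_mm**3 / 6

v1 = sphere_volume(10.0)                   # first measurement (assumed 10 mm)
v2 = sphere_volume(16.0)                   # second, 180 days later (assumed)
days = 180.0

doublings = math.log2(v2 / v1)             # exponential model: V2 = V1 * 2^n
dt_days = days / doublings                 # tumour volume doubling time
print(f"{doublings:.2f} doublings, DT = {dt_days:.0f} days")

# A 1 mm lesion growing from a single ~10 µm cell spans roughly
# log2(V_1mm / V_cell) ≈ 20 doublings of "non-visible" growth.
n_hidden = math.log2(sphere_volume(1.0) / sphere_volume(0.01))
print(f"non-visible period: ~{n_hidden:.0f} doublings")
```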
Procedia PDF Downloads 335
536 Power Performance Improvement of 500W Vertical Axis Wind Turbine with Salient Design Parameters
Authors: Young-Tae Lee, Hee-Chang Lim
Abstract:
This paper presents the performance characteristics of a Darrieus-type vertical axis wind turbine (VAWT) with NACA airfoil blades. The performance of a Darrieus-type VAWT can be characterized by torque and power. There are various parameters affecting the performance, such as chord length, helical angle, pitch angle and rotor diameter. To estimate the optimum shape of a Darrieus-type wind turbine in accordance with various design parameters, we examined the aerodynamic characteristics and separated flow occurring in the vicinity of the blade, the interaction between flow and blade, and the torque and power characteristics derived from them. For flow analysis, flow variations were investigated based on the unsteady RANS (Reynolds-averaged Navier-Stokes) equations. A sliding mesh algorithm was employed in order to consider the rotational effect of the blade. To obtain more realistic results, we conducted experiments and numerical analysis at the same time for the three-dimensional shape. In addition, several parameters (chord length, rotor diameter, pitch angle, and helical angle) were considered to find the optimum shape design and the characteristics of the interaction with the ambient flow. Since the NACA airfoil used in this study showed significant changes in the magnitude of lift and drag depending on the angle of attack, a rotor with low drag, long chord length and short diameter shows a high power coefficient in the low tip speed ratio (TSR) range. On the contrary, in the high TSR range, drag becomes high. Hence, the short-chord and long-diameter rotor produces a high power coefficient. When the pitch angle at which the airfoil directs toward the inside equals -2° and the helical angle equals 0°, the Darrieus-type VAWT generates maximum power.
Keywords: darrieus wind turbine, VAWT, NACA airfoil, performance
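The two headline numbers used throughout such studies, tip speed ratio and power coefficient, are simple to compute once torque is known; the sketch below uses invented operating values, not the paper's measured data.

```python
# Back-of-envelope TSR and Cp for a VAWT rotor. All operating values
# below are invented for illustration.
import math

rho = 1.225              # air density [kg/m^3]
U = 8.0                  # free-stream wind speed [m/s] (assumed)
R, H = 1.0, 2.0          # rotor radius and blade height [m] (assumed)
omega = 16.0             # rotor angular speed [rad/s] (assumed)
torque = 14.0            # shaft torque from CFD/experiment [N*m] (assumed)

A = 2 * R * H                           # swept area of a straight-bladed VAWT
tsr = omega * R / U                     # tip speed ratio, lambda
power = torque * omega                  # shaft power [W]
cp = power / (0.5 * rho * A * U**3)     # power coefficient

print(f"TSR = {tsr:.2f}, P = {power:.0f} W, Cp = {cp:.3f}")
```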
Procedia PDF Downloads 373
535 An Approach to Autonomous Drones Using Deep Reinforcement Learning and Object Detection
Authors: K. R. Roopesh Bharatwaj, Avinash Maharana, Favour Tobi Aborisade, Roger Young
Abstract:
Presently, there are few cases of complete automation of drones and their allied intelligence capabilities. In essence, the potential of the drone has not yet been fully utilized. This paper presents feasible methods to build an intelligent drone with smart capabilities such as self-driving and obstacle avoidance. It does this through advanced Reinforcement Learning techniques and performs object detection using the latest advanced algorithms, which are capable of processing lightweight models with fast training in real time. For the scope of this paper, after researching the various algorithms and comparing them, we finally implemented the Deep Q-Networks (DQN) algorithm in the AirSim simulator. In future works, we plan to implement further advanced self-driving and object detection algorithms; we also plan to implement voice-based speech recognition for the entire drone operation, which would provide an option of speech communication between users (people) and the drone in times of unavoidable circumstances, thus making drones an interactive, intelligent Robotic Voice Enabled Service Assistant. This proposed drone has a wide scope of usability and is applicable in scenarios such as disaster management, air transport of essentials, agriculture, manufacturing, monitoring people's movements in public areas, and defense. Also discussed is the entire drone communication based on satellite broadband Internet technology for faster computation and seamless communication service, for an uninterrupted network during disasters and remote-location operations. This paper explains the feasible algorithms required to go about achieving this goal and is more of a reference paper for future researchers going down this path.
Keywords: convolution neural network, natural language processing, obstacle avoidance, satellite broadband technology, self-driving
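For orientation, the following is a minimal sketch of the core DQN update in PyTorch; the network size, discount factor and dummy transition batch are assumptions, and the AirSim environment loop, replay buffer and exploration schedule are omitted.

```python
# Minimal DQN update step (PyTorch). Sizes, gamma and the dummy batch are
# assumptions; the AirSim loop and replay buffer are omitted.
import copy
import torch
import torch.nn as nn

n_obs, n_actions, gamma = 8, 4, 0.99          # assumed dimensions

q_net = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = copy.deepcopy(q_net)             # frozen copy, synced periodically
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# One gradient step on a dummy batch of transitions (s, a, r, s', done).
s = torch.randn(32, n_obs)
a = torch.randint(0, n_actions, (32, 1))
r = torch.randn(32, 1)
s2 = torch.randn(32, n_obs)
done = torch.zeros(32, 1)

q_sa = q_net(s).gather(1, a)                  # Q(s, a) for the taken actions
with torch.no_grad():                         # bootstrapped target from s'
    best_next = target_net(s2).max(dim=1, keepdim=True).values
    target = r + gamma * (1 - done) * best_next

loss = nn.functional.smooth_l1_loss(q_sa, target)   # Huber loss, as in DQN
opt.zero_grad()
loss.backward()
opt.step()
# Periodically: target_net.load_state_dict(q_net.state_dict())
```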
Procedia PDF Downloads 251
534 A Comprehensive Methodology for Voice Segmentation of Large Sets of Speech Files Recorded in Naturalistic Environments
Authors: Ana Londral, Burcu Demiray, Marcus Cheetham
Abstract:
Speech recording is a methodology used in many different studies related to cognitive and behaviour research. Modern advances in digital equipment brought the possibility of continuously recording hours of speech in naturalistic environments and building rich sets of sound files. Speech analysis can then extract from these files multiple features for different scopes of research in Language and Communication. However, tools for analysing a large set of sound files and automatically extracting relevant features from these files are often inaccessible to researchers who are not familiar with programming languages. Manual analysis is a common alternative, with a high cost in time and efficiency. In the analysis of long sound files, the first step is voice segmentation, i.e. detecting and labelling segments containing speech. We present a comprehensive methodology aiming to support researchers in voice segmentation, as the first step of data analysis for a big set of sound files. Praat, an open-source software package, is suggested as a tool to run a voice detection algorithm, label segments and files, and extract other quantitative features on a structure of folders containing a large number of sound files. We present the validation of our methodology with a set of 5000 sound files that were collected in the daily life of a group of voluntary participants aged over 65. A smartphone device was used to collect sound using the Electronically Activated Recorder (EAR): an app programmed to record 30-second sound samples that were randomly distributed throughout the day. Results demonstrated that automatic segmentation and labelling of files containing speech segments was 74% faster when compared to a manual analysis performed by two independent coders. Furthermore, the methodology presented allows manual adjustment of voiced segments with visualisation of the sound signal and the automatic extraction of quantitative information on speech. In conclusion, we propose a comprehensive methodology for voice segmentation, to be used by researchers who have to work with large sets of sound files and are not familiar with programming tools.
Keywords: automatic speech analysis, behavior analysis, naturalistic environments, voice segmentation
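To convey the segmentation idea in code, here is a toy short-time-energy voice activity detector in numpy. It is explicitly not the Praat intensity-based detector used in the study; the frame sizes and threshold are invented.

```python
# Toy short-time-energy voice activity detector. Illustrative only: not the
# Praat algorithm used in the study, and all thresholds are invented.
import numpy as np

def voiced_segments(x, fs, frame_ms=25, hop_ms=10, db_floor=-35.0):
    frame, hop = int(fs * frame_ms / 1000), int(fs * hop_ms / 1000)
    n = 1 + (len(x) - frame) // hop
    energy = np.array([np.mean(x[i*hop:i*hop + frame] ** 2) for i in range(n)])
    db = 10 * np.log10(energy / (energy.max() + 1e-12) + 1e-12)
    active = (db > db_floor).astype(int)       # 1 = frame above threshold
    edges = np.flatnonzero(np.diff(np.concatenate(([0], active, [0]))))
    # Pair up rising/falling edges and convert frame indices to seconds.
    return [(s * hop / fs, e * hop / fs) for s, e in zip(edges[::2], edges[1::2])]

fs = 16000
t = np.arange(3 * fs) / fs                     # 3 s of dummy audio
x = np.where((t > 1) & (t < 2), np.sin(2 * np.pi * 220 * t),
             0.001 * np.random.randn(len(t)))
print(voiced_segments(x, fs))                  # roughly [(1.0, 2.0)]
```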
Procedia PDF Downloads 282
533 Digital Architectural Practice as a Challenge for Digital Architectural Technology Elements in the Era of Digital Design
Authors: Ling Liyun
Abstract:
In the field of contemporary architecture, complex forms of architectural works continue to emerge around the world, along with some new terminology: digital architecture, parametric design, algorithmic generation, building information modeling, CNC construction and so on. Architects have gradually mastered new skills of mathematical logic in form exploration, virtual simulation, and the entire design and coordination of the construction process. Digital construction technology allows a greater degree of control over construction and ensures its accuracy, creating a series of new construction techniques. As a result, the use of digital technology is an improvement on and expansion of the practice of the digital architectural design revolution. We worked by reading and analyzing information about the digital architecture development process, a large number of cases, as well as architectural design and construction as a whole process. Current developments are thus introduced and discussed in our paper, such as architectural discourse, design theory, digital design models and techniques, material selection, as well as artificial intelligence space design. Our paper also pays attention to three representative cases of digital design and construction experiments, treated at length and in detail, to expound high informatization, high-reliability intelligence, and high technique in constructing a humane space to cope with the rapid development of urbanization. We conclude that opportunities and challenges of the shift exist in architectural paradigms, such as the cooperation methods, theories, models, technologies and techniques currently employed in digital design research and digital praxis. We also find that the innovative use of space can gradually change the way people learn, talk, and control information. Over the past two decades, digital technology has radically broken the technological constraints of industrial products, dissolving the hold of any particular architectural style (the doctrine of an era). People should not adapt to the machine; rather, it is better to make the machine work for its users.
Keywords: artificial intelligence, collaboration, digital architecture, digital design theory, material selection, space construction
Procedia PDF Downloads 136
532 Population Pharmacokinetics of Levofloxacin and Moxifloxacin, and the Probability of Target Attainment in Ethiopian Patients with Multi-Drug Resistant Tuberculosis
Authors: Temesgen Sidamo, Prakruti S. Rao, Eleni Akllilu, Workineh Shibeshi, Yumi Park, Yong-Soon Cho, Jae-Gook Shin, Scott K. Heysell, Stellah G. Mpagama, Ephrem Engidawork
Abstract:
The fluoroquinolones (FQs) are used off-label for the treatment of multidrug-resistant tuberculosis (MDR-TB) and are under evaluation for shortening the duration of drug-susceptible TB treatment in recently prioritized regimens. Within the class, levofloxacin (LFX) and moxifloxacin (MXF) play a substantial role in ensuring successful treatment outcomes. However, sub-therapeutic plasma concentrations of either LFX or MXF may drive unfavorable treatment outcomes. To the best of our knowledge, the pharmacokinetics of LFX and MXF in Ethiopian patients with MDR-TB have not yet been investigated. Therefore, the aim of this study was to develop a population pharmacokinetic (PopPK) model of LFX and MXF and assess the percent probability of target attainment (PTA), defined by the ratio of the area under the plasma concentration-time curve over 24 h (AUC0-24) to the in vitro minimum inhibitory concentration (MIC) (AUC0-24/MIC), in Ethiopian MDR-TB patients. Steady-state plasma was collected from 39 MDR-TB patients enrolled in the programmatic treatment course, and the drug concentrations were determined using optimized liquid chromatography-tandem mass spectrometry. In addition, the in vitro MICs of the patients' pretreatment clinical isolates were determined. PopPK analyses and simulations were run at various doses, and PK parameters were estimated. The effect of covariates on the PK parameters and the PTA for maximum mycobacterial kill and resistance prevention were also investigated. LFX and MXF both fit a one-compartment model with adjustments. The apparent volume of distribution (V) and clearance (CL) of LFX were influenced by serum creatinine (Scr), whereas the absorption constant (Ka) and V of MXF were influenced by Scr and BMI, respectively. The PTA for LFX maximal mycobacterial kill at the critical MIC of 0.5 mg/L was 29%, 62%, and 95% with the simulated 750 mg, 1000 mg, and 1500 mg doses, respectively, whereas the PTA for resistance prevention at 1500 mg was only 4.8%, with none of the lower doses achieving this target. At the critical MIC of 0.25 mg/L, there was no difference in the PTA (94.4%) for maximum bacterial kill among the simulated doses of MXF (600 mg, 800 mg, and 1000 mg), but the PTA for resistance prevention improved proportionately with dose. Standard LFX and MXF doses may not provide adequate drug exposure. The LFX PopPK model is more predictable for maximum mycobacterial kill, whereas MXF's resistance prevention target attainment increases with dose. Scr and BMI are likely to be important covariates in dose optimization or therapeutic drug monitoring (TDM) studies in Ethiopian patients.
Keywords: population PK, PTA, moxifloxacin, levofloxacin, MDR-TB patients, Ethiopia
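The PTA logic can be sketched as a small Monte Carlo: draw clearance from a log-normal population distribution, form AUC0-24 = dose/CL at steady state, and count how often AUC0-24/MIC clears a threshold. Every number below is invented and does not reproduce the study's estimates.

```python
# Probability-of-target-attainment sketch by Monte Carlo. All parameter
# values are invented, not the study's PopPK estimates.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

dose = 1000.0                    # daily levofloxacin dose [mg] (assumed)
cl_typ, cl_sd = 7.0, 0.35        # typical CL [L/h], log-scale SD (assumed)

cl = cl_typ * np.exp(rng.normal(0.0, cl_sd, n))   # between-subject variability
auc = dose / cl                  # steady-state AUC0-24 [mg*h/L]

mic = 0.5                        # critical MIC [mg/L]
target = 146.0                   # AUC0-24/MIC threshold (assumed)

pta = np.mean(auc / mic >= target)
print(f"PTA at MIC {mic} mg/L: {pta:.1%}")
```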
Procedia PDF Downloads 120
531 A Multi-Criteria Decision Method for the Recruitment of Academic Personnel Based on the Analytical Hierarchy Process and the Delphi Method in a Neutrosophic Environment
Authors: Antonios Paraskevas, Michael Madas
Abstract:
For a university to maintain its international competitiveness in education, it is essential to recruit high-quality academic staff, as it constitutes its most valuable asset. This selection plays a significant role in achieving strategic objectives, particularly by emphasizing a firm commitment to an exceptional student experience and innovative teaching and learning practices of high quality. In this vein, the appropriate selection of academic staff constitutes a very important factor in the competitiveness, efficiency and reputation of an academic institute. Within this framework, our work demonstrates a comprehensive methodological concept that emphasizes the multi-criteria nature of the problem and shows how decision-makers could utilize our approach in order to proceed to an appropriate judgment. The conceptual framework introduced in this paper is built upon a hybrid neutrosophic method based on the Neutrosophic Analytical Hierarchy Process (N-AHP), which uses the theory of neutrosophic sets and is considered suitable given the significant degree of ambiguity and indeterminacy observed in the decision-making process. To this end, our framework extends the N-AHP by incorporating the Neutrosophic Delphi Method (N-DM). By applying the N-DM, we can take into consideration the importance of each decision-maker and their preferences per evaluation criterion. To the best of our knowledge, the proposed model is the first which applies the Neutrosophic Delphi Method to the selection of academic staff. As a case study, it was decided to apply our method to a real problem of academic personnel selection, with the main goal of enhancing the algorithm proposed in previous scholars' work, and thus addressing the inherent ineffectiveness which becomes apparent in traditional multi-criteria decision-making methods when dealing with situations alike. As a further result, we prove that our method demonstrates greater applicability and reliability when compared to other decision models.
Keywords: multi-criteria decision making methods, analytical hierarchy process, delphi method, personnel recruitment, neutrosophic set theory
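For orientation, the sketch below shows the crisp AHP core that the neutrosophic variant generalizes: a priority vector from one pairwise-comparison matrix via the geometric-mean method, plus a consistency check. The 3x3 judgments are invented, and the neutrosophic extension's truth/indeterminacy/falsity degrees are not shown.

```python
# Crisp AHP core: priorities from a pairwise-comparison matrix by the
# geometric-mean method, with a consistency ratio. Judgments are invented.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],      # pairwise judgments for 3 criteria
              [1/3, 1.0, 2.0],      # (e.g. research, teaching, service)
              [1/5, 1/2, 1.0]])

w = np.prod(A, axis=1) ** (1 / len(A))      # geometric mean of each row
w /= w.sum()                                # normalized priority vector

lam = (A @ w / w).mean()                    # principal-eigenvalue estimate
ci = (lam - len(A)) / (len(A) - 1)          # consistency index
cr = ci / 0.58                              # Saaty random index RI = 0.58, n = 3
print(np.round(w, 3), f"CR = {cr:.3f}")     # CR < 0.10 => acceptably consistent
```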
Procedia PDF Downloads 117
530 Early Prediction of Diseases in a Cow for Cattle Industry
Authors: Ghufran Ahmed, Muhammad Osama Siddiqui, Shahbaz Siddiqui, Rauf Ahmad Shams Malick, Faisal Khan, Mubashir Khan
Abstract:
In this paper, a machine learning-based approach for the early prediction of diseases in cows is proposed. Different ML algorithms are applied to extract useful patterns from the available dataset. Technology has changed today’s world in every aspect of life. Similarly, advanced technologies have been developed in livestock and dairy farming to monitor dairy cows in various aspects. Dairy cattle monitoring is crucial as it plays a significant role in milk production around the globe. Moreover, it has become necessary for farmers to adopt the latest early prediction technologies as food demand is increasing with population growth. This highlights the importance of state-of-the-art technologies in analyzing dairy cows’ activities. It is not easy to predict the activities of a large number of cows on a farm, so the system has made this very convenient for farmers, as it provides all the solutions under one roof. The cattle industry’s productivity is boosted as any disease on a cattle farm is diagnosed early and hence treated early, on the basis of the machine learning output received. The learning models which interpret the data collected in a centralized system are already set up. Basically, we run different algorithms on the received data set to analyze milk quality and track cows’ health, location, and safety. This deep learning algorithm draws patterns from the data, which makes it easier for farmers to study any animal’s behavioral changes. With the emergence of machine learning algorithms and the Internet of Things, accurate tracking of animals is possible as the rate of error is minimized. As a result, milk productivity is increased. IoT with ML capability has given a new phase to the cattle farming industry by increasing the yield in the most cost-effective and time-saving manner.
Keywords: IoT, machine learning, health care, dairy cows
Procedia PDF Downloads 71
529 Empirical Superpave Mix-Design of Rubber-Modified Hot-Mix Asphalt in Railway Sub-Ballast
Authors: Fernando M. Soto, Gaetano Di Mino
Abstract:
The design of an unmodified bituminous mixture and of three rubber-aggregate mixtures containing rubber aggregate by a dry process (RUMAC) was evaluated, using an empirical-analytical approach based on experimental findings obtained in the laboratory with the volumetric mix design by gyratory compaction. A reference dense-graded bituminous sub-ballast mixture (3% air voids and 4% bitumen over the total weight of the mix) and three rubberized mixtures by the dry process (1.5 to 3% rubber by total weight and 5-7% binder) were used, applying the Superpave mix design for level 3 (high-traffic) design rail lines. The railway trackbed section analyzed was a compacted granular layer of 19 cm, while a thickness of 12 cm was used for the sub-ballast. In order to evaluate the effect of increasing the specimen density (as a percent of its theoretical maximum specific gravity), this article illustrates the results obtained from comparative analyses of the influence of varying the binder-rubber percentages in the sub-ballast layer mix design. This work demonstrates that rubberized blends containing crumb and ground rubber in bituminous asphalt mixtures behave at least as well as, or better than, conventional asphalt materials. By using the same methodology of volumetric compaction, the densification curves resulting from each mixture have been studied. The purpose is to obtain an optimum empirical multiplier of the number of gyrations necessary to reach the same compaction energy as in conventional mixtures. Some experimental parameters were derived by adopting an empirical-analytical method, evaluating the results obtained from the gyratory compaction of bituminous mixtures with HMA and rubber-aggregate blends. Extensive integrated research has been carried out to assess the suitability of rubber-modified hot mix asphalt mixtures as a sub-ballast layer in railway underlayment trackbeds. Design optimization was conducted for each mixture, and the volumetric properties were analyzed. An improved and complete manufacturing, compaction and curing process for these blends is also provided. By adopting this compaction-increase parameter, called the 'beta' factor, rubber-modified mixtures are obtained with densification and workability as uniform as in conventional mixtures. It is found that, considering the usual bearing capacity requirements in rail track, the optimal rubber content is 2% (by weight), or 3.95% by volumetric substitution, with a binder content of 6%.
Keywords: empirical approach, rubber-asphalt, sub-ballast, superpave mix-design
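One plausible reading of such a gyration multiplier, sketched on invented numbers: fit the usual log-linear densification curve %Gmm = a + b·log10(N) to each mix, then take beta as the ratio of gyrations the rubberized mix needs to reach the same density as the conventional one. This illustrates the bookkeeping only and is not the authors' calibration.

```python
# Densification-curve sketch: fit %Gmm vs log10(gyrations) for two mixes
# and derive a gyration multiplier. All data points are invented.
import numpy as np

N = np.array([10, 25, 50, 75, 100, 125, 150, 174, 205])   # gyration counts
gmm_conv = 82 + 6.2 * np.log10(N)     # synthetic conventional densification
gmm_rub = 80 + 6.2 * np.log10(N)      # synthetic rubberized mix (slower)

# Linear fits in log10(N): %Gmm = a + b * log10(N)
b1, a1 = np.polyfit(np.log10(N), gmm_conv, 1)
b2, a2 = np.polyfit(np.log10(N), gmm_rub, 1)

target = 97.0                                          # target density [%Gmm]
n_conv = 10 ** ((target - a1) / b1)
n_rub = 10 ** ((target - a2) / b2)
beta = n_rub / n_conv                                  # extra-gyration multiplier
print(f"N_conv = {n_conv:.0f}, N_rubber = {n_rub:.0f}, beta = {beta:.2f}")
```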
Procedia PDF Downloads 368
528 Bandgap Engineering of CsMAPbI3-xBrx Quantum Dots for Intermediate Band Solar Cell
Authors: Deborah Eric, Abbas Ahmad Khan
Abstract:
Lead halide perovskite quantum dots have attracted immense scientific and technological interest for successful photovoltaic applications because of their remarkable optoelectronic properties. In this paper, we have simulated CsMAPbI3-xBrx-based quantum dots to implement their use in intermediate band solar cells (IBSC). These types of materials exhibit optical and electrical properties distinct from their bulk counterparts due to quantum confinement. The conceptual framework provides a route to analyze the electronic properties of quantum dots. This layer of quantum dots optimizes the position and bandwidth of the IB that lies in the forbidden region of the conventional bandgap. A three-dimensional MAPbI3 quantum dot (QD) with geometries including spherical, cubic, and conical has been embedded in the CsPbBr3 matrix. Bound-state wavefunctions give rise to minibands, which result in the formation of the IB. If there is more than one miniband, then there is a possibility of having more than one IB. The optimization of QD size results in more IBs in the forbidden region. A one-band time-independent Schrödinger equation using the effective mass approximation with a step potential barrier is solved to compute the electronic states. The envelope function approximation with the BenDaniel-Duke boundary condition is used in combination with the Schrödinger equation for the calculation of eigenenergies, and the eigenenergies of the quasi-bound states are solved using an eigenvalue study. The transfer matrix method is used to study the quantum tunneling of the MAPbI3 QD through neighboring barriers of CsPbI3. Electronic states are computed using the Schrödinger equation with the effective mass approximation by considering the quantum dot and wetting layer assembly. Results have shown that varying the quantum dot size affects the energy pinning of the QD. Changes in the ground, first, and second state energies have been observed. The QD wavefunction is non-zero at the center and decays exponentially to zero at the boundaries. Quasi-bound states are characterized by envelope functions. It has been observed that conical quantum dots have maximum ground state energy at a small radius. Increasing the wetting layer thickness exhibits energy signatures similar to the bulk material for each QD size.
Keywords: perovskite, intermediate bandgap, quantum dots, miniband formation
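A minimal sketch of the kind of bound-state calculation described above, reduced to a one-dimensional finite square well solved by finite differences with scipy. The effective mass, well width and band offset are invented stand-ins, not the CsMAPbI3-xBrx parameters, and the BenDaniel-Duke matching and transfer matrix steps are omitted.

```python
# Bound states of a 1-D finite square well by finite differences: a toy
# analogue of the quantum-dot calculation. All parameters are invented.
import numpy as np
from scipy.linalg import eigh_tridiagonal

hbar = 1.054571817e-34                 # [J*s]
m_e = 9.1093837015e-31                 # electron mass [kg]
m_eff = 0.2 * m_e                      # effective mass (assumed)
eV = 1.602176634e-19                   # [J]

L, nx = 40e-9, 2000                    # 40 nm domain, grid size
x = np.linspace(-L / 2, L / 2, nx)
dx = x[1] - x[0]

# 10 nm well, 0.4 eV step barrier outside (assumed band offset).
V = np.where(np.abs(x) < 5e-9, 0.0, 0.4) * eV

# H = -hbar^2/(2m) d2/dx2 + V, discretized as a tridiagonal matrix.
diag = hbar**2 / (m_eff * dx**2) + V
off = -hbar**2 / (2 * m_eff * dx**2) * np.ones(nx - 1)
E, psi = eigh_tridiagonal(diag, off, select="i", select_range=(0, 2))

print(np.round(E / eV, 4))             # first three energy levels in eV
```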
Procedia PDF Downloads 165
527 Computer-Aided Ship Design Approach for Non-Uniform Rational Basis Spline Based Ship Hull Surface Geometry
Authors: Anu S. Nair, V. Anantha Subramanian
Abstract:
This paper presents a surface development and fairing technique combining the features of a modern computer-aided design tool, namely the Non-Uniform Rational Basis Spline (NURBS), with an algorithm to obtain a rapidly faired hull form. Some of the older series-based designs give the sectional area distribution, such as the Wageningen-Lap series. Others, such as FORMDATA, give more comprehensive offset data points. Nevertheless, this basic data still requires fairing to obtain an acceptable faired hull form. This method uses the input of a sectional area distribution as an example and arrives at the faired form. Characteristic section shapes define any general ship hull form in the entrance, parallel mid-body and run regions. The method defines a minimum number of control points at each section and, using the golden-section search method or the bisection method, the section shape converges to the one with the prescribed sectional area with a minimized error in the area fit. The section shapes combine into the evolving faired surface by NURBS, which typically takes 20 iterations. The advantage of the method is that it is fast and robust and evolves the faired hull form through minimal iterations. The curvature criterion check for the hull lines shows the evolution of the smooth faired surface. The method is applicable to hull forms from any parent series, and the evolved form can be evaluated for hydrodynamic performance, as is done in more modern design practice. The method can handle complex shapes such as that of the bulbous bow. The surface patches developed fit together at their common boundaries with curvature continuity and a fairness check. The development is coded in MATLAB, and the example illustrates the development of the method. The most important advantage is speed: the rapid iterative fairing of the hull form.
Keywords: computer-aided design, methodical series, NURBS, ship design
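The golden-section step can be sketched compactly: minimize the squared error between the area of a parameterized section and the prescribed sectional area. Below, a single scalar parameter scales an invented area curve; the real method instead adjusts NURBS control points.

```python
# Golden-section search driving a section shape onto a prescribed
# sectional area. The area curve and target value are invented.
import math

def section_area(t):
    # Placeholder: area enclosed by a section whose fullness grows with t.
    return 12.0 * (1 - math.exp(-1.8 * t))      # [m^2], invented curve

def golden_section(f, target, lo, hi, tol=1e-6):
    g = (math.sqrt(5) - 1) / 2                  # ~0.618
    err = lambda t: (f(t) - target) ** 2        # minimize squared area error
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if err(c) < err(d):                     # minimum lies in [a, d]
            b, d = d, c
            c = b - g * (b - a)
        else:                                   # minimum lies in [c, b]
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2

t_star = golden_section(section_area, target=9.5, lo=0.0, hi=3.0)
print(t_star, section_area(t_star))             # area converges to 9.5 m^2
```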
Procedia PDF Downloads 169
526 Association between Dental Caries and Asthma among 12-15 Years Old School Children Studying in Karachi, Pakistan: A Cross Sectional Study
Authors: Wajeeha Zahid, Shafquat Rozi, Farhan Raza, Masood Kadir
Abstract:
Background: Dental caries affects the overall health and well-being of children. Findings from various international studies regarding the association of dental caries with asthma are inconsistent. With the increasing burden of caries and childhood asthma, it becomes imperative for an underdeveloped country like Pakistan, where resources are limited, to identify whether there is a relationship between the two. This study aims to identify an association between dental caries and asthma. Methods: A cross-sectional study was conducted on 544 children aged 12-15 years recruited from five private schools in Karachi. Information on asthma was collected through the International Study of Asthma and Allergies in Childhood (ISAAC) questionnaire. The questionnaire addressed the child’s demographics, physician diagnosis of asthma, type of medication administered, family history of asthma and allergies, dietary habits and oral hygiene behavior. Dental caries was assessed using the DMFT (Decayed, Missing, Filled Teeth) index. The data were analyzed using the Cox proportional hazards algorithm, and crude and adjusted prevalence ratios with 95% CIs were reported. Results: This study comprised 306 (56.3%) boys and 238 (43.8%) girls. The mean age of the children was 13.2 ± 0.05 years. The total number of children with carious teeth (DMFT > 0) was 166/544 (30.5%), and the decayed component contributed most (22.8%) to the DMFT score. The prevalence of physician-diagnosed asthma was 13%. This study identified almost 7% of children as asthmatic using the internationally validated International Study of Asthma and Allergies in Childhood (ISAAC) tool, and 8 children with childhood asthma were identified by parent interviews. The overall prevalence of asthma was 109/544 (20%). The prevalence of caries in asthmatic children was 28.4%, as compared to 31% among non-asthmatic children. The adjusted prevalence ratio of dental caries in asthmatic children was 0.8 (95% CI 0.59-1.29). After adjusting for cariogenic food intake, age, oral hygiene index and dentist visits, the association between asthma and dental caries turned out to be non-significant. Conclusion: There was no association between asthma and dental caries among the children who participated in this study.
Keywords: asthma, caries, children, school-based
Procedia PDF Downloads 246
525 An Experimental Machine Learning Analysis on Adaptive Thermal Comfort and Energy Management in Hospitals
Authors: Ibrahim Khan, Waqas Khalid
Abstract:
The healthcare sector is known to consume a higher proportion of total energy consumption in the HVAC market, owing to the excessive cooling and heating requirements for maintaining human thermal comfort in indoor conditions, catering to patients undergoing treatment in hospital wards, rooms, and intensive care units. The indoor thermal comfort conditions in selected hospitals of Islamabad, Pakistan, were measured on a real-time basis with the collection of first-hand experimental data using calibrated sensors measuring ambient temperature, wet bulb globe temperature, relative humidity, air velocity, light intensity and CO2 levels. The experimental data recorded were analyzed in conjunction with thermal comfort questionnaire surveys, in which the participants, including patients, doctors, nurses, and hospital staff, were assessed based on their thermal sensation, acceptability, preference, and comfort responses. The recorded dataset, including experimental and survey-based responses, was further analyzed to develop a correlation of operative temperature, operative relative humidity, and the other measured operative parameters with the predicted mean vote and the adaptive predicted mean vote, with the adaptive temperature and adaptive relative humidity estimated using the seasonal data sets gathered for both summer (hot and dry, and hot and humid) and winter (cold and dry, and cold and humid) climate conditions. A machine learning logistic regression algorithm was incorporated to train on the operative experimental data parameters and to develop a correlation between patient sensations and the thermal environmental parameters, for which a new ML-based adaptive thermal comfort model was proposed and developed in our study. Finally, the accuracy of our model was determined using K-fold cross-validation.
Keywords: predicted mean vote, thermal comfort, energy management, logistic regression, machine learning
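A minimal sketch of this classification-and-validation step with scikit-learn, on synthetic stand-ins for the measured indoor parameters and comfort votes; the feature set, label rule and value ranges are assumptions, not the study's data.

```python
# Logistic regression with K-fold cross-validation on synthetic stand-ins
# for indoor thermal parameters. All values and the label rule are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 400
# Columns: temperature, RH, air velocity, CO2 (dummy values, typical ranges).
X = np.column_stack([
    rng.uniform(18, 32, n),        # operative temperature [deg C]
    rng.uniform(20, 80, n),        # relative humidity [%]
    rng.uniform(0.0, 0.5, n),      # air velocity [m/s]
    rng.uniform(400, 1500, n),     # CO2 [ppm]
])
# Dummy comfort vote: "comfortable" roughly when temperature is moderate.
y = ((X[:, 0] > 21) & (X[:, 0] < 27)).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5)     # 5-fold cross-validation
print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```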
Procedia PDF Downloads 63
524 Search of Compounds with Antimicrobial and Antifungal Activity in the Series of 1-(2-(1H-Tetrazol-5-yl)-R1-phenyl)-3-R2-phenyl(ethyl)ureas
Authors: O. Antypenko, I. Vasilieva, S. Kovalenko
Abstract:
Investigations into new, effective and less toxic antimicrobial agents are always up-to-date. Tetrazole derivatives are quite interesting objects both for synthesis and for pharmacological screening. Thus, some derivatives of tetrazole have demonstrated antimicrobial activity; namely, 5-phenyl-tetrazolo[1,5-c]quinazoline was effective against Staphylococcus aureus and Enterococcus faecalis (MIC = 250 mg/L). Besides, investigation of the antimicrobial activity of 9-bromo(chloro)-5-morpholin(piperidine)-4-yl-tetrazolo[1,5-c]quinazoline against Escherichia coli, Enterococcus faecalis, Pseudomonas aeruginosa and Staphylococcus aureus revealed that the sensitivity of Gram-positive bacteria to the compounds was higher than that of Gram-negative bacteria. So, 31 of our previously synthesized derivatives of 1-(2-(1H-tetrazol-5-yl)-R1-phenyl)-3-R2-phenyl(ethyl)ureas were selected for testing of their in vitro antibacterial activity against Gram-positive bacteria (Staphylococcus aureus ATCC 25923, Enterobacter aerogenes, Enterococcus faecalis ATCC 29212) and Gram-negative bacteria (Pseudomonas aeruginosa ATCC 9027, Escherichia coli ATCC 25922, Klebsiella pneumoniae 68), and for antifungal properties against Candida albicans ATCC 885653. The agar-diffusion method was used for determination of the preliminary activity compared to well-known reference antimicrobials. All the compounds were dissolved in DMSO at a concentration of 100 μg/disk, using the inhibition zone diameter (IZD, mm) as a measure of antimicrobial activity. The most active turned out to be 3 structures that inhibited several bacterial strains: 1-ethyl-3-(5-fluoro-2-(1H-tetrazol-5-yl)phenyl)urea (1), 1-(4-bromo-2-(1H-tetrazol-5-yl)-phenyl)-3-(4-(trifluoromethyl)phenyl)urea (2) and 1-(4-chloro-2-(1H-tetrazol-5-yl)phenyl)-3-(3-(trifluoromethyl)phenyl)urea (3). The IZD (mm) was 40 (Escherichia coli) and 25 (Klebsiella pneumoniae) for compound 1; 12 (Pseudomonas aeruginosa), 15 (Staphylococcus aureus) and 10 (Enterococcus faecalis) for compound 2; and 25 (Staphylococcus aureus) and 15 (Enterococcus faecalis) for compound 3. The most sensitive to the activity of the substances were the Gram-negative bacteria Pseudomonas aeruginosa, while none of the compounds affected Candida albicans. As for the reference drugs, Amikacin (30 µg/disk) showed 27 and Ceftazidime (30 µg/disk) 25 against Pseudomonas aeruginosa; that is, unfortunately, higher than the studied 1-(2-(1H-tetrazol-5-yl)-R1-phenyl)-3-R2-phenyl(ethyl)ureas. The obtained results will be used for further purposeful optimization of the leading compounds into more effective antimicrobials, because of the ever-mounting problem of microorganisms' resistance.
Keywords: antimicrobial, antifungal, compounds, 1-(2-(1H-tetrazol-5-yl)-R1-phenyl)-3-R2-phenyl(ethyl)ureas
Procedia PDF Downloads 361
523 Standardizing and Achieving Protocol Objectives for ChestWall Radiotherapy Treatment Planning Process using an O-ring Linac in High-, Low- and Middle-income Countries
Authors: Milton Ixquiac, Erick Montenegro, Francisco Reynoso, Matthew Schmidt, Thomas Mazur, Tianyu Zhao, Hiram Gay, Geoffrey Hugo, Lauren Henke, Jeff Michael Michalski, Angel Velarde, Vicky de Falla, Franky Reyes, Osmar Hernandez, Edgar Aparicio Ruiz, Baozhou Sun
Abstract:
Purpose: Radiotherapy departments in low- and middle-income countries (LMICs) like Guatemala have recently introduced intensity-modulated radiotherapy (IMRT). IMRT has become the standard of care in high-income countries (HICs) due to reduced toxicity and improved outcomes in some cancers. The purpose of this work is to show the agreement between the dosimetric results shown in the dose-volume histograms (DVHs) and the objectives proposed in the adopted protocol. This is our initial experience with an O-ring Linac. Methods and Materials: An O-ring Linac was installed at our clinic in Guatemala in 2019 and has been used to treat approximately 90 patients daily with IMRT. This Linac is a completely image-guided device, since a megavoltage cone-beam computed tomography (MVCBCT) scan must be taken to deliver each radiotherapy session. In each MVCBCT, the Linac delivers 9 MU, which are taken into account while performing the planning. To start the standardization, TG-263 was employed for the nomenclature, and a hypofractionated protocol was adopted to treat the chest wall, including the supraclavicular nodes, achieving 40.05 Gy in 15 fractions. The planning was developed using 4 semi-arcs from 179 to 305 degrees. The planner must create optimization volumes for targets and organs at risk (OARs); the difficulty for the planner was the base dose due to the MVCBCT. To evaluate the planning modality, we used 30 chest wall cases. Results: The plans created manually achieve the protocol objectives. The protocol objectives are the same as those of RTOG 1005, and the DVH curves look clinically acceptable. Conclusions: Although the O-ring Linac does not have the capacity to obtain kV images and the cone-beam CT is created using MV energy, the dose delivered by the daily image setup process still does not affect the dosimetric quality of the plans, and the dose distribution is acceptable, achieving the protocol objectives.
Keywords: hypofractionation, VMAT, chestwall, radiotherapy planning
Procedia PDF Downloads 118
522 The Layout Analysis of Handwriting Characters and the Fusion of Multi-style Ancient Books’ Background
Authors: Yaolin Tian, Shanxiong Chen, Fujia Zhao, Xiaoyu Lin, Hailing Xiong
Abstract:
Ancient books are significant inheritors of culture, and their background textures convey potential historical information. However, multi-style texture recovery of ancient books has received little attention. Restricted by insufficient ancient textures and a complex handling process, the generation of ancient textures confronts new challenges. For instance, training without sufficient data usually brings about overfitting or mode collapse, so some of the outputs are prone to be fake. Recently, image generation and style transfer based on deep learning have been widely applied in computer vision. Breakthroughs within the field make it possible to conduct research on multi-style texture recovery of ancient books. Under these circumstances, we proposed a layout analysis and image fusion system. Firstly, we trained models using Deep Convolutional Generative Adversarial Networks (DCGAN) to synthesize multi-style ancient textures; then, we analyzed layouts based on the Position Rearrangement (PR) algorithm that we proposed, to adjust the layout structure of the foreground content; at last, we realized our goal by fusing the rearranged foreground texts and the generated background. In experiments, diversified samples such as ancient Yi, Jurchen, and seal script were selected as our training sets. Then, the performance of different fine-tuned models was gradually improved by adjusting the DCGAN model in parameters as well as structures. In order to evaluate the results scientifically, the cross-entropy loss function and the Fréchet Inception Distance (FID) were selected as our assessment criteria. Eventually, we got model M8 with the lowest FID score. Compared with the DCGAN model proposed by Radford et al., the FID score of M8 improved by 19.26%, profoundly enhancing the quality of the synthetic images.
Keywords: deep learning, image fusion, image generation, layout analysis
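As a pointer to how the FID criterion used above is computed, here is a self-contained numpy/scipy sketch of the Fréchet distance between two sets of feature activations; the random vectors stand in for Inception features of real and generated textures.

```python
# Fréchet Inception Distance between two activation sets. The random
# vectors below stand in for Inception features of real/generated images.
import numpy as np
from scipy.linalg import sqrtm

def fid(act1, act2):
    mu1, mu2 = act1.mean(axis=0), act2.mean(axis=0)
    c1 = np.cov(act1, rowvar=False)
    c2 = np.cov(act2, rowvar=False)
    covmean = sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):               # discard tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    # ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^(1/2))
    return diff @ diff + np.trace(c1 + c2 - 2 * covmean)

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, (500, 64))         # stand-in activations
fake = rng.normal(0.3, 1.1, (500, 64))
print(f"FID = {fid(real, fake):.2f}")          # lower = closer distributions
```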
Procedia PDF Downloads 157
521 Spark Plasma Sintering/Synthesis of Alumina-Graphene Composites
Authors: Nikoloz Jalabadze, Roin Chedia, Lili Nadaraia, Levan Khundadze
Abstract:
Nanocrystalline materials in powder form can be manufactured by a number of different methods; however, the manufacture of composite material products in the same nanocrystalline state is still a problem, because the processes of compaction and synthesis of nanocrystalline powders are accompanied by intensive growth of particles, a process which promotes the formation of pieces in an ordinary crystalline state instead of the desirable nanocrystalline state. To date, spark plasma sintering (SPS) has been considered the most promising and energy-efficient method for producing dense bodies of composite materials. An advantage of the SPS method in comparison with other methods is mainly the low temperature and short time of the sintering procedure, which finally gives an opportunity to obtain dense material with a nanocrystalline structure. Graphene has recently garnered significant interest as a reinforcing phase in composite materials because of its excellent electrical, thermal and mechanical properties. Graphene nanoplatelets (GNPs) in particular have attracted much interest as reinforcements for ceramic matrix composites (mostly in Al2O3, Si3N4, TiO2, ZrB2, etc.). SPS has been shown to effectively and fully densify a variety of ceramic systems, including Al2O3, often with improvements in mechanical and functional behavior. Alumina consolidated by SPS has been shown to have superior hardness, fracture toughness, plasticity and optical translucency compared to conventionally processed alumina. Knowledge of how GNPs influence sintering behavior is important for effective processing and manufacturing. In this study, the effects of GNPs on the SPS processing of Al2O3 are investigated by systematically varying sintering temperature, holding time and pressure. Our experiments showed that the SPS process is also appropriate for the synthesis of nanocrystalline powders of alumina-graphene composites. Depending on the size of the molds, it is possible to obtain different amounts of nanopowders. Investigation of the structure, physical-chemical, mechanical and performance properties of the elaborated composite materials was performed. The results of this study provide a fundamental understanding of the effects of GNPs on sintering behavior, thereby providing a foundation for future optimization of the processing of these promising nanocomposite systems.
Keywords: alumina oxide, ceramic matrix composites, graphene nanoplatelets, spark-plasma sintering
Procedia PDF Downloads 377
520 A Multi-Criteria Decision Method for the Recruitment of Academic Personnel Based on the Analytical Hierarchy Process and the Delphi Method in a Neutrosophic Environment
Authors: Antonios Paraskevas, Michael Madas
Abstract:
For a university to maintain its international competitiveness in education, it is essential to recruit high-quality academic staff, as they constitute its most valuable asset. This selection plays a significant role in achieving strategic objectives, particularly by emphasizing a firm commitment to an exceptional student experience and innovative, high-quality teaching and learning practices. In this vein, the appropriate selection of academic staff is a very important factor in the competitiveness, efficiency, and reputation of an academic institute. Within this framework, our work presents a comprehensive methodological concept that emphasizes the multi-criteria nature of the problem and shows how decision makers can utilize our approach to reach an appropriate judgment. The conceptual framework introduced in this paper is built upon a hybrid neutrosophic method based on the Neutrosophic Analytical Hierarchy Process (N-AHP), which uses the theory of neutrosophic sets and is considered suitable given the significant degree of ambiguity and indeterminacy observed in the decision-making process. To this end, our framework extends the N-AHP by incorporating the Neutrosophic Delphi Method (N-DM). By applying the N-DM, we can take into consideration the importance of each decision maker and their preferences per evaluation criterion. To the best of our knowledge, the proposed model is the first to apply the Neutrosophic Delphi Method to the selection of academic staff. As a case study, we applied our method to a real problem of academic personnel selection, with the main goal of enhancing the algorithm proposed in previous scholars' work and thus addressing the inherent ineffectiveness that becomes apparent in traditional multi-criteria decision-making methods when dealing with such situations. As a further result, we show that our method demonstrates greater applicability and reliability when compared to other decision models.
Keywords: analytical hierarchy process, delphi method, multi-criteria decision making method, neutrosophic set theory, personnel recruitment
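A minimal sketch of the final aggregation step in such a framework is given below: decision-makers' single-valued neutrosophic judgments (T, I, F) are weighted per criterion and deneutrosophied with the common score function S = (2 + T - I - F) / 3. That score function is one of several used in the neutrosophic literature, and the weights and judgments are illustrative, not the paper's data.

```python
# Sketch: aggregating decision-makers' neutrosophic judgments and ranking
# candidates. Uses the common score function S = (2 + T - I - F) / 3 for a
# single-valued neutrosophic number (T, I, F); other score functions exist,
# and the weights and judgments below are illustrative, not the paper's data.

def score(t, i, f):
    """Deneutrosophication: map (truth, indeterminacy, falsity) to a crisp value."""
    return (2.0 + t - i - f) / 3.0

# judgments[candidate][criterion] = one (T, I, F) triple per decision maker
judgments = {
    "candidate_A": {"teaching": [(0.8, 0.2, 0.1), (0.7, 0.3, 0.2)],
                    "research": [(0.9, 0.1, 0.1), (0.8, 0.2, 0.1)]},
    "candidate_B": {"teaching": [(0.6, 0.3, 0.3), (0.7, 0.2, 0.2)],
                    "research": [(0.7, 0.2, 0.2), (0.6, 0.3, 0.3)]},
}
criterion_weights = {"teaching": 0.4, "research": 0.6}   # e.g., from N-AHP
dm_weights = [0.5, 0.5]                                  # decision-maker importance

ranking = {}
for cand, per_criterion in judgments.items():
    total = 0.0
    for crit, triples in per_criterion.items():
        # Weighted average of each decision maker's crisp score per criterion.
        crisp = sum(w * score(*tif) for w, tif in zip(dm_weights, triples))
        total += criterion_weights[crit] * crisp
    ranking[cand] = total

for cand, s in sorted(ranking.items(), key=lambda kv: -kv[1]):
    print(f"{cand}: {s:.3f}")
```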
Procedia PDF Downloads 200
519 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks
Authors: Mst Shapna Akter, Hossain Shahriar
Abstract:
One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited, leading to system compromise, data leakage, or denial of service. C and C++ open-source code are now available for creating a large-scale, machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove pointless components and shorten dependencies. Moreover, we retain the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning networks such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model which can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure performance. We made a comparative analysis between results derived from features containing a minimal text representation and those containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when we use semantic and syntactic information as the features, but they require longer execution time, as the word embedding algorithm adds complexity to the overall system.
Keywords: cyber security, vulnerability detection, neural networks, feature extraction
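A minimal PyTorch sketch of the embed-then-classify pipeline described above is given below: tokenized source functions are embedded and fed to a BiLSTM that emits a vulnerable/safe logit. The vocabulary size, sequence length, and randomly initialized embedding are assumptions; a real system would load pretrained GloVe or fastText vectors.

```python
# Minimal sketch of the embed-then-classify pipeline: token sequences from
# source functions -> embedding -> BiLSTM -> binary vulnerable/safe logit.
# Dimensions are illustrative; a real system would load pretrained GloVe or
# fastText vectors into the embedding layer instead of random initialization.
import torch
import torch.nn as nn

class BiLSTMVulnClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # one logit: vulnerable or not

    def forward(self, token_ids):
        x = self.embed(token_ids)            # (batch, seq_len, embed_dim)
        out, _ = self.lstm(x)                # (batch, seq_len, 2*hidden)
        pooled = out.mean(dim=1)             # average over the token sequence
        return self.head(pooled).squeeze(-1)

model = BiLSTMVulnClassifier()
batch = torch.randint(1, 10000, (8, 256))    # 8 tokenized functions, len 256
logits = model(batch)
probs = torch.sigmoid(logits)                # probability each is vulnerable
```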
Procedia PDF Downloads 90
518 Cooperative Learning Promotes Successful Learning. A Qualitative Study to Analyze Factors that Promote Interaction and Cooperation among Students in Blended Learning Environments
Authors: Pia Kastl
Abstract:
The potentials of blended learning are the flexibility of learning and the possibility to get in touch with lecturers and fellow students on site. By combining face-to-face sessions with digital self-learning units, the learning process can be optimized and learning success increased. To examine whether blended learning outperforms online and face-to-face teaching, a theory-based questionnaire survey was conducted. The results show that interaction and cooperation among students are poorly supported in blended learning, and face-to-face teaching performs better in this respect. The aim of this article is to identify concrete suggestions students have for improving cooperation and interaction in blended learning courses. For this purpose, interviews were conducted with students from various academic disciplines in face-to-face, online, or blended learning courses (N = 60). The questions referred to opinions and suggestions for improvement regarding the course design of the respective learning environment. The analysis was carried out by qualitative content analysis. The results show that students perceive the interaction as beneficial to their learning: they verbalize their knowledge and are exposed to different perspectives. In addition, emotional support is particularly important during exam phases. Interaction and cooperation were primarily enabled in the face-to-face component of the courses studied, while there was very limited contact with fellow students in the asynchronous component. The forums offered were hardly used, or not used at all, because the barrier to asking a question publicly is too high, and students prefer private channels for communication. This is accompanied by the disadvantage that interaction occurs only among people who already know each other. Making new contacts is not fostered in the blended learning courses. Students see room for optimization as a task for the lecturers in the face-to-face sessions: here, interaction and cooperation should be encouraged through get-to-know-you rounds or group work. It is important to group the participants randomly so that they establish contact with new people. In addition, sufficient time for interaction is desired in the lecture, e.g., in the context of discussions or partner work. In the digital component, students prefer synchronous exchange at a fixed time, for example in breakout rooms or an MS Teams channel. The results provide an overview of how interaction and cooperation can be implemented in blended learning courses. Positive design possibilities are partly dependent on the subject area and course. Future studies could build on this with a course-specific analysis.
Keywords: blended learning, higher education, hybrid teaching, qualitative research, student learning
Procedia PDF Downloads 70
517 Hybrid Knowledge and Data-Driven Neural Networks for Diffuse Optical Tomography Reconstruction in Medical Imaging
Authors: Paola Causin, Andrea Aspri, Alessandro Benfenati
Abstract:
Diffuse Optical Tomography (DOT) is an emergent medical imaging technique which employs NIR light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a noninvasive and non-ionizing manner. DOT reconstruction is a severely ill-conditioned problem due to the prevalent scattering of light in the tissue. In this contribution, we present our research on hybrid knowledge-driven/data-driven approaches which exploit the existence of well-assessed physical models and build upon them neural networks that integrate the availability of data. Namely, since in this context regularization procedures are mandatory to obtain a reasonable reconstruction [1], we explore the use of neural networks as tools to include prior information on the solution. 2. Materials and Methods. The idea underlying our approach is to leverage neural networks to solve PDE-constrained inverse problems of the form

$q^* = \arg\min_q D(y, \tilde{y})$, (1)

where $D$ is a loss function which typically contains a discrepancy measure (or data fidelity) term plus other possible ad-hoc designed terms enforcing specific constraints. In the context of inverse problems like (1), one seeks the optimal set of physical parameters $q$, given the set of observations $y$. Moreover, $\tilde{y}$ is the computable approximation of $y$, which may be obtained from a neural network but also in a classic way via the resolution of a PDE with given input coefficients (forward problem, Fig. 1, box). Due to the severe ill-conditioning of the reconstruction problem, we adopt a two-fold approach: i) we restrict the solutions (optical coefficients) to lie in a lower-dimensional subspace generated by auto-decoder type networks; this procedure forms priors on the solution (Fig. 1, box); ii) we use regularization procedures of the type

$\hat{q}^* = \arg\min_q D(y, \tilde{y}) + R(q)$,

where $R(q)$ is a regularization functional depending on regularization parameters which can be fixed a priori or learned via a neural network in a data-driven modality. To further improve the generalizability of the proposed framework, we also infuse physics knowledge via soft penalty constraints (Fig. 1, box) in the overall optimization procedure (Fig. 1, box). 3. Discussion and Conclusion. DOT reconstruction is severely hindered by ill-conditioning. The combined use of data-driven and knowledge-driven elements is beneficial and allows improved results to be obtained, especially with a restricted dataset and in the presence of variable sources of noise.
Keywords: inverse problem in tomography, deep learning, diffuse optical tomography, regularization
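As a concrete, much simplified instance of the regularized problem above, the NumPy sketch below solves $\min_q \|Aq - y\|^2 + \lambda\|q\|^2$ by gradient descent. The linear forward operator and the Tikhonov term $R(q) = \lambda\|q\|^2$ are stand-ins for the paper's PDE forward model and learned, network-based prior.

```python
# Toy instance of the regularized inversion: min_q ||A q - y||^2 + lam ||q||^2.
# A linear forward operator and a Tikhonov term R(q) = lam ||q||^2 stand in
# for the paper's PDE forward model and learned (network-based) prior.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_par = 50, 100                 # fewer observations than parameters
A = rng.normal(size=(n_obs, n_par))    # surrogate forward operator
q_true = np.zeros(n_par)
q_true[::10] = 1.0                     # sparse "optical coefficient" profile
y = A @ q_true + 0.01 * rng.normal(size=n_obs)   # noisy observations

lam = 0.1                              # regularization parameter
q = np.zeros(n_par)
step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)   # safe gradient step size

for _ in range(2000):
    grad = 2 * A.T @ (A @ q - y) + 2 * lam * q   # gradient of the objective
    q -= step * grad

print("relative error:", np.linalg.norm(q - q_true) / np.linalg.norm(q_true))
```

Without the regularizer, the underdetermined system ($n_{par} > n_{obs}$) admits infinitely many solutions; the penalty term plays the role that the learned priors play in the hybrid framework.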
Procedia PDF Downloads 74
516 Bounded Rational Heterogeneous Agents in Artificial Stock Markets: Literature Review and Research Direction
Authors: Talal Alsulaiman, Khaldoun Khashanah
Abstract:
In this paper, we provide a literature survey on the artificial stock market (ASM) problem. The paper begins by exploring the complexity of the stock market and the need for ASMs. An ASM aims to investigate the link between individual behaviors (micro level) and financial market dynamics (macro level). The variety of patterns at the macro level is a function of the ASM's complexity. The financial market is a complex system in which the relationship between the micro and macro levels cannot be captured analytically; computational approaches, such as simulation, are expected to capture this connection. Agent-based simulation is the simulation technique commonly used to build ASMs. The paper proceeds by discussing the components of the ASM. We consider the role of behavioral finance (BF) alongside the traditional risk-aversion assumption in the construction of agents' attributes. The influence of social networks on the development of agents' interactions is also addressed; network topologies such as small-world, distance-based, and scale-free networks may be utilized to outline economic collaborations. In addition, the primary methods for developing agents' learning and adaptive abilities are summarized; these incorporate approaches such as genetic algorithms, genetic programming, artificial neural networks, and reinforcement learning. We also discuss the most common statistical properties (the stylized facts) of stocks that are used for the calibration and validation of ASMs. Besides, we review the major related previous studies and categorize the approaches they utilized. Finally, research directions and potential research questions are discussed. The research directions for ASMs may focus on the macro level, by analyzing the market dynamics, or on the micro level, by investigating the wealth distributions of the agents.
Keywords: artificial stock markets, market dynamics, bounded rationality, agent based simulation, learning, interaction, social networks
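A minimal agent-based market sketch illustrating the micro-to-macro link is given below: heterogeneous fundamentalist and chartist agents submit demands, and the price adjusts to excess demand. All behavioral rules and parameters are illustrative textbook choices, not drawn from any specific surveyed model.

```python
# Minimal agent-based stock market sketch: heterogeneous fundamentalists and
# chartists submit demands, and price adjusts to excess demand. All rules and
# parameters are illustrative textbook choices, not any specific surveyed model.
import random

random.seed(42)
N_FUND, N_CHART = 50, 50
FUNDAMENTAL = 100.0
prices = [100.0, 100.5]          # seed price history

for t in range(200):
    p, p_prev = prices[-1], prices[-2]
    demand = 0.0
    # Fundamentalists buy below the fundamental value, sell above it.
    for _ in range(N_FUND):
        demand += 0.5 * (FUNDAMENTAL - p) + random.gauss(0, 0.1)
    # Chartists (trend followers) extrapolate the last price change.
    for _ in range(N_CHART):
        demand += 1.2 * (p - p_prev) + random.gauss(0, 0.1)
    # A market maker moves the price proportionally to average excess demand.
    prices.append(p + 0.01 * demand / (N_FUND + N_CHART))

print("final price:", round(prices[-1], 2))
```

Even this toy model exhibits the macro-from-micro theme of the survey: the price path emerges from the interaction of the two agent populations rather than from any closed-form equation.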
Procedia PDF Downloads 354
515 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models
Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu
Abstract:
A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse range of tasks, such as urban planning, military applications, glacier mapping, and disaster management. Expressing the Earth's surface as a mathematical model would require an infinite number of point measurements; since this is impossible, points at regular intervals are measured to characterize the Earth's surface, and a DTM is generated from them. Hitherto, classical measurement techniques and photogrammetry have been widely used in the construction of DTMs. At present, RADAR, LiDAR, and stereo satellite images are also used for DTM construction. In recent years, especially because of its advantages, Airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications: a 3D point cloud is created with LiDAR technology by acquiring numerous point measurements. More recently, with developments in image mapping methods, the use of unmanned aerial vehicles (UAVs) for photogrammetric data acquisition has increased DTM generation from image-based point clouds. The accuracy of a DTM depends on various factors, such as the data collection method, the distribution of elevation points, the point density, the properties of the surface, and the interpolation method. In this study, the random data reduction method is evaluated for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets by a random algorithm, representing 75, 50, 25, and 5% of the original image-based point cloud data set. Over the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method. The results show that the random data reduction method can be used to reduce image-based point cloud data sets to the 50% density level while still maintaining the quality of the DTM.
Keywords: DTM, Unmanned Aerial Vehicle (UAV), uniform, random, kriging
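A minimal NumPy sketch of the random reduction step is given below: 75/50/25/5% subsets are drawn from an (x, y, z) point cloud without replacement. The synthetic points are placeholders for the study's UAV-derived data.

```python
# Minimal sketch of the random data reduction step: draw 75/50/25/5% subsets
# of an (x, y, z) point cloud without replacement. The synthetic points below
# stand in for the study's UAV-derived image-based point cloud.
import numpy as np

rng = np.random.default_rng(1)
n_points = 100_000
cloud = np.column_stack([
    rng.uniform(0, 1000, n_points),    # x (m)
    rng.uniform(0, 1000, n_points),    # y (m)
    rng.uniform(900, 1100, n_points),  # z / elevation (m)
])

subsets = {}
for pct in (75, 50, 25, 5):
    k = int(n_points * pct / 100)
    idx = rng.choice(n_points, size=k, replace=False)  # uniform random subset
    subsets[pct] = cloud[idx]
    print(f"{pct:>2}% subset: {k} points")

# Each subset would then be interpolated (e.g., by Kriging) onto a DTM grid
# and compared against the DTM built from the full cloud.
```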
Procedia PDF Downloads 156