Search results for: complex network platform
7864 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin
Abstract:
Within the past decade, using Convolutional Neural Networks (CNNs) to create Deep Learning systems capable of translating Sign Language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the current developing technology is that images are scarce, with little variation in the gestures being presented to the recognition program, often skewed towards single skin tones and hand sizes that make a percentage of the population's fingerspelling harder to detect. Along with this, current gesture detection programs are only trained on one fingerspelling language despite there being one hundred and forty-two known variants so far. All of this presents a limitation for traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is posed as an operator mapping an input from the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, identifying the alphanumeric that q represents and the language it comes from. These inputs and outputs, along with the internal variables z ∈ Z representing the system's current state, imply a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xi are i.i.d. vectors drawn from a product distribution, over a period of time the AI generates a large set of measurements xi called S that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can then be applied by centering S and Y by subtracting their means. The data is then regularized by applying the Kaiser rule to the resulting eigenmatrix and then whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
Keywords: convolutional neural networks, deep learning, shallow correctors, sign language
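A minimal sketch of the corrector stage described in the abstract is given below, assuming NumPy arrays of measurements; the Kaiser-style eigenvalue cut-off, the single Fisher-type hyperplane and the threshold rule are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of the error-corrector stage: centre the measurements, keep the
# dominant components (Kaiser-style rule), whiten, and separate the error set Y
# from the bulk S with one hyperplane. Thresholds are illustrative assumptions.
import numpy as np

def build_corrector(S, Y):
    """S: (n, d) feature vectors of all predictions, Y: (m, d) error cases."""
    mean_S = S.mean(axis=0)
    Sc, Yc = S - mean_S, Y - mean_S             # centre both sets on the bulk mean
    cov = np.cov(Sc, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > eigval.mean()                # Kaiser-style rule: keep "large" components
    W = eigvec[:, keep] / np.sqrt(eigval[keep])  # project and whiten
    Sw, Yw = Sc @ W, Yc @ W
    # one Fisher-type hyperplane separating the error cluster from the bulk
    w = Yw.mean(axis=0) - Sw.mean(axis=0)
    w /= np.linalg.norm(w)
    theta = 0.5 * (Yw @ w).min() + 0.5 * (Sw @ w).max()   # illustrative threshold
    return mean_S, W, w, theta

def flag_error(x, mean_S, W, w, theta):
    """Report True if a new measurement x falls on the error side of the hyperplane."""
    return float(((x - mean_S) @ W) @ w) > theta
```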
Procedia PDF Downloads 104
7863 Self-Action of Pyroelectric Spatial Soliton in Undoped Lithium Niobate Samples with Pyroelectric Mechanism of Nonlinear Response
Authors: Anton S. Perin, Vladimir M. Shandarov
Abstract:
Compensation for the nonlinear diffraction of narrow laser beams with a wavelength of 532 nm and the formation of photonic waveguides and waveguide circuits due to the contribution of the pyroelectric effect to the nonlinear response of a lithium niobate crystal have been experimentally demonstrated. Complete compensation for the linear and nonlinear diffraction broadening of light beams is obtained upon uniform heating of an undoped sample from room temperature to 55 degrees Celsius. An analysis of the light-field distribution patterns and the corresponding intensity distribution profiles allowed us to estimate the spacing of the channel waveguides. The observed behavior of bright soliton beams may be caused by their coherent interaction, which manifests itself in repulsion for anti-phase light fields and in attraction for in-phase light fields. The experimental results of this study show a fundamental possibility of forming optically complex waveguide structures in lithium niobate crystals with a pyroelectric mechanism of nonlinear response. The topology of these structures is determined by the light field distribution on the input face of the crystalline sample. The optical induction of channel waveguide elements by interacting spatial solitons makes it possible to design optical systems with a more complex topology and a possibility of their dynamic reconfiguration.
Keywords: self-action, soliton, lithium niobate, pyroliton, photorefractive effect, pyroelectric effect
Procedia PDF Downloads 171
7862 Reservoir Inflow Prediction for Pump Station Using Upstream Sewer Depth Data
Authors: Osung Im, Neha Yadav, Eui Hoon Lee, Joong Hoon Kim
Abstract:
The Artificial Neural Network (ANN) approach is commonly used in many fields for forecasting. In water resources engineering, forecasting the water level or inflow of a reservoir is useful for various purposes. Due to the advantages of ANNs, many papers have been written on inflow prediction in river networks, but in this study, an ANN is used in urban sewer networks. The growth of severe rain storms in Korea has increased flood damage severely, and the precipitation distribution is becoming more erratic. Therefore, effective pump operation in pump stations is an essential task for flood-damage reduction in urban areas. If the real-time inflow of a pump station reservoir can be predicted, it is possible to operate the pumps effectively to reduce the flood damage. This study used ANN models for pump station reservoir inflow prediction using upstream sewer depth data. For this study, rainfall events, sewer depth, and inflow into the Banpo pump station reservoir between the years 2013-2014 were considered. Feed-Forward Back Propagation (FFBP), Cascade-Forward Back Propagation (CFBP), Elman Back Propagation (EBP) and Nonlinear Autoregressive Exogenous (NARX) networks were used as the ANN models for prediction. A comparison of the results suggests that the ANN is a powerful tool for inflow prediction using sewer depth data.
Keywords: artificial neural network, forecasting, reservoir inflow, sewer depth
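An illustrative sketch of the simplest of these ANN variants (a plain feed-forward network) for reservoir inflow prediction is shown below; the CSV file name, column names and one-step lag are hypothetical placeholders, not the study's actual data layout.

```python
# Sketch of a feed-forward ANN predicting reservoir inflow from lagged rainfall and
# upstream sewer depth; file and column names are assumed for illustration only.
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("banpo_pump_station.csv")                       # assumed time-series records
X = df[["rainfall", "upstream_sewer_depth"]].shift(1).dropna()   # lagged inputs
y = df["reservoir_inflow"].iloc[1:]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.3)
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("R2 on held-out events:", r2_score(y_te, model.predict(X_te)))
```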
Procedia PDF Downloads 323
7861 Pathway to Sustainable Shipping: Electric Ships
Authors: Wei Wang, Yannick Liu, Lu Zhen, H. Wang
Abstract:
Maritime transport plays an important role in global economic development but also inevitably faces increasing pressures from all sides, such as ship operating cost reduction and environmental protection. An ideal innovation to address these pressures is the electric ship, which is still at an early stage of development. Considering the special characteristics of electric ships, i.e., their limited travel range, the service network needs to be re-designed carefully to guarantee their efficient operation. This research designs a cost-efficient and environmentally friendly service network for electric ships, including the location of charging stations, the charging plan, route planning, ship scheduling, and ship deployment. The problem is formulated as a mixed-integer linear programming model with the objective of minimizing the total cost comprised of the charging cost, the construction cost of charging stations, and the fixed cost of ships. A case study using data of the shipping network along the Yangtze River is conducted to evaluate the performance of the model. Two operating scenarios are used: an electric ship scenario where all the transportation tasks are fulfilled by electric ships and a conventional ship scenario where all the transportation tasks are fulfilled by fuel oil ships. Results unveil that the total cost of using electric ships is only 42.8% of using conventional ships. Using electric ships can reduce SOx by 80%, NOx by 93.47%, PM by 89.47%, and CO2 by 42.62%, but will consume 2.78% more time to fulfill all the transportation tasks. Extensive sensitivity analyses are also conducted for key operating factors, including battery capacity, charging speed, volume capacity, and the service time limit of transportation tasks. Implications from the results are as follows: 1) it is necessary to equip the ship with a large-capacity battery when the number of charging stations is low; 2) battery capacity will influence the number of ships deployed on each route; 3) increasing battery capacity will make the electric ship more cost-effective; 4) charging speed does not affect the charging amount and location of charging stations, but will influence the schedule of ships on each route; 5) there exists an optimal volume capacity, at which all costs and total delivery time are lowest; 6) the service time limit will influence the ship schedule and ship cost.
Keywords: cost reduction, electric ship, environmental protection, sustainable shipping
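A toy sketch of the flavour of this mixed-integer model is given below, covering only the station-location and charging-balance part for a single ship on one route; the port names, costs, energy figures and constraints are invented for illustration and are far simpler than the full model in the paper.

```python
# Toy MILP sketch (PuLP): decide where to build charging stations and how much to
# charge so the battery never runs empty along one Yangtze-style route.
# All numbers below are assumptions for illustration only.
import pulp

ports = ["Chongqing", "Wuhan", "Nanjing", "Shanghai"]
legs = [("Chongqing", "Wuhan"), ("Wuhan", "Nanjing"), ("Nanjing", "Shanghai")]
energy_per_leg = {l: 900 for l in legs}         # kWh needed per leg (assumed)
battery = 1200                                   # kWh usable battery capacity (assumed)
station_cost = 500_000                           # construction cost per station (assumed)
charge_cost = 0.1                                # cost per kWh charged (assumed)

m = pulp.LpProblem("electric_ship_network", pulp.LpMinimize)
build = pulp.LpVariable.dicts("build", ports, cat="Binary")
charge = pulp.LpVariable.dicts("charge", ports, lowBound=0)          # kWh charged at port
soc = pulp.LpVariable.dicts("soc", ports, lowBound=0, upBound=battery)

m += station_cost * pulp.lpSum(build[p] for p in ports) + \
     charge_cost * pulp.lpSum(charge[p] for p in ports)

m += soc[ports[0]] == battery                    # depart fully charged
for (a, b) in legs:
    m += soc[a] + charge[a] <= battery           # cannot charge beyond battery capacity
    m += soc[b] == soc[a] + charge[a] - energy_per_leg[(a, b)]
    m += charge[a] <= battery * build[a]         # can only charge where a station is built

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({p: (build[p].value(), charge[p].value()) for p in ports})
```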
Procedia PDF Downloads 84
7860 An Electrocardiography Deep Learning Model to Detect Atrial Fibrillation on Clinical Application
Authors: Jui-Chien Hsieh
Abstract:
Background: 12-lead electrocardiography (ECG) is one of the most frequently used tools to detect atrial fibrillation (AF), which may degenerate into life-threatening stroke, in clinical practice. Based on this study, AF detection by the clinically used 12-lead ECG device has a positive predictive value (PPV) of only 0.73~0.77. Objective: There is a great demand for a new algorithm to improve the precision of AF detection using 12-lead ECG. Due to the progress in artificial intelligence (AI), we developed an ECG deep model that has the ability to recognize AF patterns and reduce false-positive errors. Methods: In this study, (1) 570 12-lead ECG reports whose computer interpretation by the ECG device was AF were collected as the training dataset. The ECG reports were interpreted by 2 senior cardiologists, who confirmed that the precision of AF detection by the ECG device is 0.73; (2) 88 12-lead ECG reports whose computer interpretation generated by the ECG device was AF were used as the test dataset. Cardiologists confirmed that 68 of the 88 reports were AF, and the others were not AF. The precision of AF detection by the ECG device is about 0.77; (3) A parallel 4-layer 1-dimensional convolutional neural network (CNN) was developed to identify AF based on limb-lead ECGs and chest-lead ECGs. Results: The results indicated that this model has better performance on AF detection than the traditional computer interpretation of the ECG device in 88 test samples, with 0.94 PPV, 0.98 sensitivity, and 0.80 specificity. Conclusions: As compared to the clinical ECG device, this AI ECG model improves the precision of AF detection from 0.77 to 0.94 and can have an impact on clinical applications.
Keywords: 12-lead ECG, atrial fibrillation, deep learning, convolutional neural network
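The sketch below shows one plausible shape of a parallel two-branch 1-D CNN of the kind described (one branch for the six limb leads, one for the six chest leads); layer widths, kernel sizes and the 10-second/500 Hz input length are assumptions, not the model actually trained in the study.

```python
# Rough PyTorch sketch of a parallel 4-stage 1-D CNN for 12-lead ECG AF detection.
# Channel counts, kernel widths and the 5000-sample input length are assumptions.
import torch
import torch.nn as nn

def branch(in_leads):
    return nn.Sequential(
        nn.Conv1d(in_leads, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
        nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
        nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
    )

class ParallelECGNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.limb = branch(6)       # leads I, II, III, aVR, aVL, aVF
        self.chest = branch(6)      # leads V1-V6
        self.head = nn.Linear(128, 2)   # AF vs. non-AF

    def forward(self, limb_ecg, chest_ecg):
        a = self.limb(limb_ecg).flatten(1)
        b = self.chest(chest_ecg).flatten(1)
        return self.head(torch.cat([a, b], dim=1))

model = ParallelECGNet()
logits = model(torch.randn(8, 6, 5000), torch.randn(8, 6, 5000))   # batch of 8 ECGs
```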
Procedia PDF Downloads 117
7859 Neuro-Fuzzy Approach to Improve Reliability in Auxiliary Power Supply System for Nuclear Power Plant
Authors: John K. Avor, Choong-Koo Chang
Abstract:
The transfer of electrical loads at power generation stations from the Standby Auxiliary Transformer (SAT) to the Unit Auxiliary Transformer (UAT) and vice versa is through a fast bus transfer scheme. Fast bus transfer is a time-critical application where the transfer process depends on various parameters; thus, transfer schemes apply advanced algorithms to ensure power supply reliability and continuity. In a nuclear power generation station, supply continuity is essential, especially for critical class 1E electrical loads. Bus transfers must, therefore, be executed accurately within 4 to 10 cycles in order to achieve safety system requirements. However, the main problem is that there are instances where transfer schemes have scrambled due to inaccurate interpretation of key parameters and, consequently, have failed to transfer several critical loads from the UAT to the SAT during a main generator trip event. Although several techniques have been adopted to develop robust transfer schemes, a combination of Artificial Neural Networks and Fuzzy Systems (Neuro-Fuzzy) has not been extensively used. In this paper, we apply the concept of Neuro-Fuzzy to determine the plant operating mode and dynamically predict the appropriate bus transfer algorithm to be selected based on the first cycle of voltage information. The performance of the Sequential Fast Transfer and Residual Bus Transfer schemes was evaluated through simulation and integration of the Neuro-Fuzzy system. The objective of adopting the Neuro-Fuzzy approach in the bus transfer scheme is to utilize the signal validation capabilities of the artificial neural network, specifically the back-propagation algorithm, which is very accurate in learning completely new systems. This research presents the combined effect of artificial neural networks and fuzzy systems to accurately interpret key bus transfer parameters such as the magnitude of the residual voltage, the decay time, and the associated phase angle of the residual voltage in order to determine the possibility of a high-speed bus transfer for a particular bus and the corresponding transfer algorithm. This demonstrates potential for general applicability to improve the reliability of the auxiliary power distribution system. The performance of the scheme is implemented on the APR1400 nuclear power plant auxiliary system.
Keywords: auxiliary power system, bus transfer scheme, fuzzy logic, neural networks, reliability
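A highly simplified neuro-fuzzy sketch of this idea follows: the three residual-voltage features are fuzzified with Gaussian membership functions and the membership degrees feed a small back-propagation classifier that decides whether a fast transfer is feasible. Membership centres, widths, the surrogate training data and the binary decision label are all invented for illustration.

```python
# Simplified neuro-fuzzy sketch: fuzzify residual-voltage magnitude, decay time and
# phase angle with Gaussian membership functions, then train a small neural classifier
# on the membership degrees. All numbers and labels here are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

centres = {"low": 0.2, "medium": 0.5, "high": 0.8}     # on normalised feature scales

def fuzzify(x):                      # x: (n, 3) normalised features in [0, 1]
    return np.hstack([np.exp(-((x - c) ** 2) / (2 * 0.15 ** 2)) for c in centres.values()])

rng = np.random.default_rng(0)
X = rng.random((500, 3))                               # surrogate feature samples
y = (X[:, 0] > 0.6).astype(int)                        # surrogate label: fast transfer feasible?

clf = MLPClassifier(hidden_layer_sizes=(12,), max_iter=3000, random_state=0)
clf.fit(fuzzify(X), y)
print("training accuracy:", clf.score(fuzzify(X), y))
```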
Procedia PDF Downloads 175
7858 Classification of EEG Signals Based on Dynamic Connectivity Analysis
Authors: Zoran Šverko, Saša Vlahinić, Nino Stojković, Ivan Markovinović
Abstract:
In this article, the classification of target letters is performed using data from the EEG P300 Speller paradigm. Neural networks trained with the results of dynamic connectivity analysis between different brain regions are used for classification. Dynamic connectivity analysis is based on an adaptive window size and the imaginary part of the complex Pearson correlation coefficient. Brain dynamics are analysed using the relative intersection of confidence intervals for the imaginary component of the complex Pearson correlation coefficient method (RICI-imCPCC). The RICI-imCPCC method overcomes the shortcomings of currently used dynamic connectivity analysis methods, such as the low reliability and low temporal precision for short connectivity intervals encountered in constant sliding window analysis with a wide window size, and the high susceptibility to noise encountered in constant sliding window analysis with a narrow window size. It overcomes these shortcomings by dynamically adjusting the window size using the RICI rule. This method extracts information about brain connections for each time sample. Seventy percent of the extracted brain connectivity information is used for training and thirty percent for validation. Classification of the target word is also performed based on the same analysis method. As far as we know, through this research, we have shown for the first time that dynamic connectivity can be used as a parameter for classifying EEG signals.
Keywords: dynamic connectivity analysis, EEG, neural networks, Pearson correlation coefficients
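A sketch of the core quantity, the imaginary part of the complex Pearson correlation coefficient between two channels within one window, is shown below; the adaptive RICI window-sizing rule itself is not reproduced, and the synthetic test signals are illustrative.

```python
# Imaginary part of the complex Pearson correlation coefficient (imCPCC) between two
# EEG segments, computed from their analytic (Hilbert-transformed) signals.
import numpy as np
from scipy.signal import hilbert

def im_cpcc(x, y):
    """x, y: 1-D real EEG segments of equal length (one analysis window)."""
    ax, ay = hilbert(x), hilbert(y)                 # analytic (complex) signals
    ax -= ax.mean(); ay -= ay.mean()
    r = np.sum(ax * np.conj(ay)) / np.sqrt(np.sum(np.abs(ax) ** 2) * np.sum(np.abs(ay) ** 2))
    return np.imag(r)                               # suppresses zero-lag (volume-conduction) coupling

t = np.arange(0, 2, 1 / 256)                        # 2 s at an assumed 256 Hz sampling rate
x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 10 * t + np.pi / 4) + 0.1 * np.random.randn(t.size)
print(im_cpcc(x, y))                                # non-zero for phase-lagged coupling
```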
Procedia PDF Downloads 218
7857 Resilience of Infrastructure Networks: Maintenance of Bridges in Mountainous Environments
Authors: Lorenza Abbracciavento, Valerio De Biagi
Abstract:
Infrastructures are key elements to ensure the operational functionality of the transport system. The collapse of a single bridge or, equivalently, a tunnel can lead an entire motorway to be considered completely inaccessible. As a consequence, the paralysis of the communications network causes several important drawbacks for the community. Recent events have demonstrated that ensuring the functional continuity of strategic infrastructures during and after a catastrophic event makes a significant difference in terms of life and economic losses. Moreover, it has been observed that RC structures located in mountain environments show a worse state of conservation compared to structures of the same typology and age located in temperate climates. Because of its morphology, in fact, the mountain environment is particularly exposed to severe collapse and deterioration phenomena, generally: natural hazards, e.g. rock falls, and meteorological hazards, e.g. freeze-thaw cycles or heavy snow. For these reasons, deep investigation of the characteristics of these processes becomes of fundamental importance to provide smart and sustainable solutions and make the infrastructure system more resilient. In this paper, the design of a monitoring system in mountainous environments is presented and analyzed in its parts. The method not only takes into account the peculiar climatic conditions, but is also integrated with, and interacts with, the surrounding environment.
Keywords: structural health monitoring, resilience of bridges, mountain infrastructures, infrastructural network, maintenance
Procedia PDF Downloads 80
7856 Generation and Migration of Carbon Dioxide in the Lower Cretaceous Bahi Sandstone Reservoir Within the En Naga Sub-Basin, Sirte Basin, Libya
Authors: Moaawia Abdulgader Gdara
Abstract:
The En Naga sub-basin is considered the most southern of the concessions in the Sirte Basin operated by HOO. The En Naga Sub-basin has likely been point-sourced with CO₂ accumulations during the last 7 million years from local satellite intrusives associated with the Haruj Al Aswad igneous complex. CO₂ occurs in the En Naga Sub-basin as a result of the igneous activity of the Al Harouge Al Aswad complex. Igneous extrusives have been pierced in the subsurface and are exposed at the surface. The Lower Cretaceous Bahi Sandstone facies are recognized in the En Naga Sub-basin, and the presence of trapped carbon dioxide in these sandstones is proven within the sub-basin. This makes it unique in providing an abundance of CO₂ gas reservoirs with almost pure magmatic CO₂, which can be easily sampled. Huge amounts of CO₂ exist in the Lower Cretaceous Bahi Sandstones in the En Naga sub-basin, where the economic value of CO₂ is related to its use for enhanced oil recovery (EOR). Based on the production tests for the drilled wells, the Lower Cretaceous Bahi sandstones are the principal reservoir rocks for CO₂, and large volumes of CO₂ gas have been discovered in the Bahi Formation on and near Concession 72 (En Naga sub-basin). The Bahi sandstones are generally described as a good reservoir rock. Intergranular porosities and permeabilities are highly variable and can exceed 25% and 100 mD. In the En Naga sub-basin, three main developed structures (Barrut I, En Naga A, and En Naga O) are thought to be prospective for the Lower Cretaceous Bahi sandstone reservoir. These structures represent a good example of the deep over-pressure potential in the En Naga sub-basin. The very high pressures assumed to be associated with local igneous intrusives may account for the abnormally high Bahi (and Lidam) reservoir pressures. The best gas tests from these facies are at F1-72 on the Barrut I structure, from part of a 458 feet+ overpressured section with an estimated CO₂ content as high as 98%. Bahi CO₂ prospectivity is thought to be excellent in the central to western areas. At U1-72 (En Naga O structure), a significant CO₂ gas kick occurred at 11,971 feet and quickly led to blowout conditions due to uncontrollable leaks in the surface equipment, which reflects better reservoir-quality sandstones associated with paleostructural highs. Condensate and gas prospectivity increases to the east as the CO₂ prospectivity decreases with distance away from the Al Haruj Al Aswad igneous complex. To date, it has not been possible to accurately determine the volume of these strategically valuable reserves, although there are positive indications that they are very large.
Keywords: En Naga Sub Basin, Al Harouge Al Aswad, CO₂ generation and migration in the Bahi sandstone reservoir, lower cretaceous Bahi sandstone
Procedia PDF Downloads 16
7855 Translation Quality Assessment in Fansubbed English-Chinese Swearwords: A Corpus-Based Study of the Big Bang Theory
Authors: Qihang Jiang
Abstract:
Fansubbing, the combination of fan and subtitling, is one of the main branches of Audiovisual Translation (AVT) and has kindled more and more interest from researchers in the AVT field in recent decades. In particular, the quality of such so-called non-professional translation seems questionable due to the non-transparent qualifications of subtitlers in a huge community network. This paper attempts to figure out how YYeTs, aka 'ZiMuZu', the largest fansubbing group in China, translates swearwords from English to Chinese for its fans of the prevalent American sitcom The Big Bang Theory, taking cultural, social and political elements into account in the context of China. By building a bilingual corpus containing both the source and target texts, this paper found that most of the original swearwords were translated in a toned-down manner, probably due to Chinese audiences' cultural and social network features as well as the strict censorship of the Chinese government. Additionally, House's (2015) newly revised model of Translation Quality Assessment (TQA) was applied and examined. Results revealed that most of the subtitled swearwords achieved their pragmatic functions and exerted a communicative effect on audiences. In conclusion, this paper enriches the empirical research concerning House's new TQA model, gives a full picture of the subtitling of swearwords in the AVT field and provides a practical guide for practitioners in their career of subtitling.
Keywords: corpus-based approach, fansubbing, pragmatic functions, swearwords, translation quality assessment
Procedia PDF Downloads 149
7854 Design and Development of an Autonomous Underwater Vehicle for Irrigation Canal Monitoring
Authors: Mamoon Masud, Suleman Mazhar
Abstract:
The Indus river basin's irrigation system in Pakistan is extremely complex, spanning over 50,000 km. Its maintenance and monitoring demand enormous resources. This paper describes the development of a streamlined and low-cost autonomous underwater vehicle (AUV) for the monitoring of irrigation canals, including water quality monitoring and water theft detection. The vehicle is a hovering-type AUV, designed mainly for monitoring irrigation canals, with a fully documented design and open source code. It has a length of 17 inches and a radius of 3.5 inches, with a depth rating of 5 m. Multiple sensors are present onboard the AUV for monitoring water quality parameters, including pH, turbidity, total dissolved solids (TDS) and dissolved oxygen. A 9-DOF Inertial Measurement Unit (IMU), the GY-85, is used, which incorporates an accelerometer (ADXL345), a gyroscope (ITG-3200) and a magnetometer (HMC5883L). The readings from these sensors are fused together using the direction cosine matrix (DCM) algorithm, providing the AUV with its heading angle, while a pressure sensor gives the depth of the AUV. Two sonar-based range sensors are used for obstacle detection, enabling the vehicle to align itself with the irrigation canal's edges. Four thrusters control the vehicle's surge, heading and heave, providing 3 DOF. The thrusters are controlled using a proportional-integral-derivative (PID) feedback control system, with the heading angle and depth being the controller's inputs and the thruster motor speeds as the outputs. A flow sensor has been incorporated to monitor the canal water level and detect water-theft events in the irrigation system. In addition to water theft detection, the vehicle also provides information on water quality, giving us the ability to identify the source(s) of water contamination. Detection of such events can provide useful policy inputs for improving irrigation efficiency and reducing water contamination. The AUV, being low cost, small sized and suitable for autonomous maneuvering, water level and quality monitoring in the irrigation canals, can be used for irrigation network monitoring at a large scale.
Keywords: autonomous underwater vehicle, irrigation canal monitoring, water quality monitoring, underwater line tracking
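A minimal sketch of the depth PID loop described above is shown below; the gains, sample time, toy plant model and thruster scaling are placeholders rather than the vehicle's tuned values.

```python
# Minimal discrete PID sketch for the AUV depth controller; all numbers are
# illustrative assumptions, not the vehicle's actual tuning.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

depth_pid = PID(kp=120.0, ki=8.0, kd=35.0, dt=0.05)       # 20 Hz control loop (assumed)
depth, target = 0.0, 2.0                                   # metres
for _ in range(200):
    thrust = depth_pid.update(target, depth)                # command to the heave thrusters
    depth += 0.0005 * thrust                                # toy plant response for illustration
print(round(depth, 2))
```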
Procedia PDF Downloads 151
7853 Modelling and Numerical Analysis of Thermal Non-Destructive Testing on Complex Structure
Authors: Y. L. Hor, H. S. Chu, V. P. Bui
Abstract:
Composite material is widely used to replace conventional material, especially in the aerospace industry, to reduce the weight of devices. It is formed by combining reinforced materials together via adhesive bonding to produce a bulk material with alternated macroscopic properties. In bulk composites, degradation may occur at a microscopic scale, that is, in each individual reinforced fiber layer or especially in its matrix layer, such as delamination, inclusion, disbond, void, cracks, and porosity. In this paper, we focus on the detection of defects in the matrix layer, where the adhesion between the composite plies is in contact but coupled through a weak bond. In fact, adhesive defects are tested through various nondestructive methods. Among them, pulsed phase thermography (PPT) has shown some advantages, providing improved sensitivity, large-area coverage, and high-speed testing. The aim of this work is to develop an efficient numerical model to study the application of PPT to the nondestructive inspection of weak bonding in composite material. The resulting thermal evolution field comprises internal reflections between the interfaces of defects and the specimen, and the important key features of the defects present in the material can be obtained from the investigation of the thermal evolution of the field distribution. Computational simulation of such inspections has allowed the improvement of the techniques applied in various inspections, such as materials with high thermal conductivity and more complex structures.
Keywords: pulsed phase thermography, weak bond, composite, CFRP, computational modelling, optimization
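For context, the post-processing step that gives pulsed phase thermography its name can be sketched as a per-pixel FFT of the cooling sequence; the frame count, image size and chosen frequency bin below are illustrative, not the parameters used in this work.

```python
# Sketch of the core PPT post-processing step: a per-pixel FFT of the surface-
# temperature sequence after the heat pulse yields phase images whose contrast can
# reveal weak bonds. The surrogate data and bin choice are illustrative assumptions.
import numpy as np

def ppt_phase_images(frames):
    """frames: (n_frames, H, W) surface-temperature sequence after the flash pulse."""
    spectrum = np.fft.rfft(frames, axis=0)          # FFT along the time axis, pixel by pixel
    return np.angle(spectrum)                        # (n_freq, H, W) phase images

frames = np.random.rand(256, 64, 64)                 # surrogate IR camera sequence
phases = ppt_phase_images(frames)
defect_map = phases[1]                               # a low-frequency bin probes deeper layers
print(defect_map.shape)
```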
Procedia PDF Downloads 180
7852 Performance Comparison of Different Regression Methods for a Polymerization Process with Adaptive Sampling
Authors: Florin Leon, Silvia Curteanu
Abstract:
Developing complete mechanistic models for polymerization reactors is not easy, because complex reactions occur simultaneously, there is a large number of kinetic parameters involved, and sometimes the chemical and physical phenomena for mixtures involving polymers are poorly understood. To overcome these difficulties, empirical models based on sampled data can be used instead, namely regression methods typical of the machine learning field. They have the ability to learn the trends of a process without any knowledge of its particular physical and chemical laws. Therefore, they are useful for modeling complex processes, such as the free radical polymerization of methyl methacrylate achieved in a batch bulk process. The goal is to generate accurate predictions of monomer conversion, numerical average molecular weight and gravimetrical average molecular weight. This process is associated with non-linear gel and glass effects. For this purpose, an adaptive sampling technique is presented, which can select more samples around the regions where the values have a higher variation. Several machine learning methods are used for the modeling and their performance is compared: support vector machines, k-nearest neighbor, and random forest, as well as an original algorithm, large margin nearest neighbor regression. The suggested method provides very good results compared to the other well-known regression algorithms.
Keywords: batch bulk methyl methacrylate polymerization, adaptive sampling, machine learning, large margin nearest neighbor regression
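The variation-driven adaptive sampling idea can be sketched as below: regions of the response curve with larger local change receive proportionally more sample points. The simulated S-shaped "gel-effect-like" curve and the budget of 60 samples are illustrative stand-ins, not the study's actual process data.

```python
# Sketch of variation-driven adaptive sampling: sample more densely where the
# response changes faster. Toy process curve and sample budget are assumptions.
import numpy as np

def adaptive_sample(x_dense, y_dense, n_samples):
    slope = np.abs(np.gradient(y_dense, x_dense))           # local variation of the response
    weights = slope + slope.mean() * 0.05                    # keep a small floor everywhere
    prob = weights / weights.sum()
    idx = np.sort(np.random.choice(x_dense.size, size=n_samples, replace=False, p=prob))
    return x_dense[idx], y_dense[idx]

time = np.linspace(0, 1, 2000)
conversion = 1 / (1 + np.exp(-20 * (time - 0.6)))            # toy S-shaped conversion curve
xs, ys = adaptive_sample(time, conversion, 60)
print(xs[:5], ys[:5])
```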
Procedia PDF Downloads 307
7851 Global Navigation Satellite System and Precise Point Positioning as Remote Sensing Tools for Monitoring Tropospheric Water Vapor
Authors: Panupong Makvichian
Abstract:
Global Navigation Satellite System (GNSS) is nowadays a common technology that improves navigation functions in our life. Additionally, GNSS is now also being employed as an accurate atmospheric sensor. Meteorology is a practical application of GNSS that goes unnoticed in the background of people's lives. GNSS Precise Point Positioning (PPP) is a positioning method that requires data from a single dual-frequency receiver and precise information about satellite positions and satellite clocks. In addition, careful attention to mitigating various error sources is required. All the above data are combined in a sophisticated mathematical algorithm. At this point, the research demonstrates how GNSS and the PPP method are capable of providing high-precision estimates, such as 3D positions or zenith tropospheric delays (ZTDs). ZTDs combined with pressure and temperature information allow us to estimate the water vapor in the atmosphere as precipitable water vapor (PWV). If the process is replicated for a network of GNSS sensors, we can create thematic maps that allow extraction of water content information at any location within the network area. All of the above are possible thanks to the advances in GNSS data processing. Therefore, we are able to use GNSS data for climatic trend analysis and acquisition of further knowledge about the atmospheric water content.
Keywords: GNSS, precise point positioning, Zenith tropospheric delays, precipitable water vapor
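The commonly used ZTD-to-PWV conversion chain can be sketched as follows; the Saastamoinen hydrostatic model, the Bevis-type conversion factor and the mean-temperature approximation Tm ≈ 70.2 + 0.72·Ts are standard textbook choices and are assumed here, not necessarily the exact models used in this work.

```python
# Sketch of the standard ZTD -> PWV conversion: subtract the Saastamoinen zenith
# hydrostatic delay, then apply a Bevis-type dimensionless conversion factor.
# Constants are commonly cited values; the Tm model is an approximation.
import numpy as np

def ztd_to_pwv(ztd_m, pressure_hpa, temp_k, lat_deg, height_m):
    # zenith hydrostatic delay (Saastamoinen model)
    zhd = 0.0022768 * pressure_hpa / (1 - 0.00266 * np.cos(2 * np.radians(lat_deg))
                                        - 0.00000028 * height_m)
    zwd = ztd_m - zhd                                # zenith wet delay
    tm = 70.2 + 0.72 * temp_k                        # weighted mean temperature (Bevis-type)
    k2p, k3, rv, rho = 0.221, 3.739e3, 461.5, 1000.0 # SI units: K/Pa, K^2/Pa, J/(kg K), kg/m^3
    pi_factor = 1.0e6 / (rho * rv * (k3 / tm + k2p)) # roughly 0.15-0.16
    return pi_factor * zwd                           # PWV in metres of liquid water

print(ztd_to_pwv(2.45, 1005.0, 293.0, 13.7, 50.0) * 1000, "mm")   # example site values
```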
Procedia PDF Downloads 202
7850 Comparing the ‘Urgent Community Care Team’ Clinical Referrals in the Community with Suggestions from the Clinical Decision Support Software Dem DX
Abstract:
Background: Additional demands placed on senior clinical teams with ongoing COVID-19 management have accelerated the need to harness the wider healthcare professional resources and upskill them to take on greater clinical responsibility safely. The UK NHS Long Term Plan (2019)¹ emphasises the importance of expanding Advanced Practitioners' (APs) roles to take on more clinical diagnostic responsibilities to cope with increased demand. In acute settings, APs are often the first point of care for patients and require training to take on initial triage responsibilities efficiently and safely. Critically, their roles include determining which onward services the patients may require and assessing whether they can be treated at home, avoiding unnecessary admissions to the hospital. Dem Dx is a Clinical Reasoning Platform (CRP) that claims to help frontline healthcare professionals independently assess and triage patients. It guides the clinician from presenting complaints through associated symptoms to a running list of differential diagnoses, media, and national and institutional guidelines. The objective of this study was to compare the clinical referral rates and guideline adherence registered by the HMR Urgent Community Care Team (UCCT)² and Dem Dx recommendations using retrospective cases. Methodology: 192 cases seen by the UCCT were anonymised and reassessed using Dem Dx clinical pathways. We compared the UCCT's performance with Dem Dx regarding the appropriateness of onward referrals. We also compared the clinical assessment regarding adherence to NICE guidelines recorded in the clinical notes and the presence of suitable guidance in each case. The cases were audited by two medical doctors. Results: Dem Dx demonstrated appropriate referrals in 85% of cases, compared to 47% in the UCCT team (p<0.001). Of particular note, Dem Dx demonstrated an almost 65% (p<0.001) improvement in the efficacy and appropriateness of referrals in a highly experienced clinical team. The effectiveness of Dem Dx is in part attributable to the relevant NICE and local guidelines found within the platform's pathways, which were found to be suitable in 86% of cases. Conclusion: This study highlights the potential of clinical decision support, such as Dem Dx, to improve the quality of onward clinical referrals delivered by a multidisciplinary team in primary care. It demonstrated that it could support healthcare professionals in making appropriate referrals, especially those that may be overlooked, by providing suitable clinical guidelines directly embedded into cases and clear referral pathways. Further evaluation in the clinical setting has been planned to confirm those assumptions in a prospective study.
Keywords: advanced practitioner, clinical reasoning, clinical decision-making, management, multidisciplinary team, referrals, triage
Procedia PDF Downloads 154
7849 Starchy Wastewater as Raw Material for Biohydrogen Production by Dark Fermentation: A Review
Authors: Tami A. Ulhiza, Noor I. M. Puad, Azlin S. Azmi, Mohd. I. A. Malek
Abstract:
The high amount of chemical oxygen demand (COD) in starchy waste can be harmful to the environment. In common practice, starch processing wastewater is discharged to the river without proper treatment. However, starchy waste still contains complex sugars and organic acids. With the right pretreatment method, the complex sugars can be hydrolyzed into more readily digestible sugars, which can be utilized and converted into more valuable products. At the same time, the global demand for energy keeps increasing. The continuous usage of fossil fuel as the main source of energy can lead to energy scarcity. Hydrogen is a renewable form of energy which can be an alternative energy source in the future. Moreover, hydrogen is clean and carries the highest energy content compared to other fuels. Biohydrogen produced from waste has significant advantages over chemical methods. One of the major problems in biohydrogen production is the raw material cost. Carbohydrate-rich starchy wastes such as tapioca, maize, wheat, potato, and sago wastes are promising candidates to be used as substrates for producing biohydrogen. The utilization of those wastes for biohydrogen production can provide cheap energy generation with simultaneous waste treatment. Therefore, this paper aims to review the variety of starchy wastes that have been widely used to synthesize biohydrogen. The scope includes the source of waste, the performance in yielding hydrogen, the pretreatment method and the type of culture that is suitable for starchy waste.
Keywords: biohydrogen, dark fermentation, renewable energy, starchy waste
Procedia PDF Downloads 226
7848 Multifunctional Janus Microbots for Intracellular Delivery of Therapeutic Agents
Authors: Shilpee Jain, Sachin Latiyan, Kaushik Suneet
Abstract:
Unlike traditional robots, medical microbots are not only smaller in size, but they also possess various unique properties, for example, biocompatibility, stability in biological fluids, navigation against the bloodstream, wireless control over locomotion, etc. The idea behind their usage in the medical field is to build a minimally invasive method for addressing post-operative complications, including longer recovery time, infection and pain. Herein, the present study demonstrates the fabrication of dual-nature Janus microbots, based on magneto-conducting Fe₃O₄ magnetic nanoparticles (MNPs) and SU8-derived carbon, for the efficient intracellular delivery of biomolecules. The low-aspect-ratio microbots, with feature sizes of 2-5 μm, were fabricated using a photolithography technique. These microbots were pyrolyzed at 900°C, which converts SU8 into amorphous carbon. The pyrolyzed microbots have dual properties, i.e., one half is magneto-conducting and the other half is only conducting, for delivering therapeutic payloads efficiently with the application of external electric/magnetic field stimulation. For the efficient intracellular delivery of the microbots, the size and aspect ratio play a significant role. However, on a smaller scale, proper control over movement is difficult to achieve. The dual nature of the Janus microbots allows control of their maneuverability in complex fluids using external electric as well as magnetic fields. Interestingly, the Janus microbots move faster with the application of an external electric field (44 µm/s) as compared to the magnetic field (18 µm/s). Furthermore, these Janus microbots exhibit auto-fluorescence behavior that will help to track their pathway during navigation. Typically, the use of MNPs in microdevices enhances the tendency to agglomerate. However, the incorporation of Fe₃O₄ MNPs in the pyrolyzed carbon reduces the chances of agglomeration of the microbots. The biocompatibility of the medical microbots, which is the essential property of any biosystem, was determined in vitro using HeLa cells. The microbots were found to be compatible with HeLa cells. Additionally, the intracellular uptake of microbots was higher in the presence of an external electric field as compared to without electric field stimulation. In summary, cytocompatible Janus microbots were fabricated successfully. They are stable in biological fluids, their navigation is wirelessly controllable with the help of a few Gauss of external magnetic field, their movement can be tracked because of their autofluorescence behavior, they are less susceptible to agglomeration, and higher cellular uptake can be achieved with the application of an external electric field. Thus, these carriers could offer a versatile platform to deliver therapeutic payloads under wireless actuation.
Keywords: amorphous carbon, electric/magnetic stimulations, Janus microbots, magnetic nanoparticles, minimally invasive procedures
Procedia PDF Downloads 128
7847 Design of a Real Time Closed Loop Simulation Test Bed on a General Purpose Operating System: Practical Approaches
Authors: Pratibha Srivastava, Chithra V. J., Sudhakar S., Nitin K. D.
Abstract:
A closed-loop system comprises a controller, a response system, and an actuating system. The controller, which is the system under test for us, excites the actuators based on feedback from the sensors in a periodic manner. The sensors should provide the feedback to the System Under Test (SUT) within a deterministic time after excitation of the actuators. Any delay or miss in the generation of response or acquisition of excitation pulses may lead to control loop computation errors, which can be catastrophic in certain cases. Such systems are categorised as hard real-time systems and need special strategies. The real-time operating systems available in the market may be the best solutions for such kinds of simulations, but they pose limitations such as limited availability of the X Window System, graphical interfaces, and other user tools. In this paper, we present strategies that can be used on a general purpose operating system (bare Linux kernel) to achieve deterministic deadlines and hence gain the added advantages of a GPOS with real-time features. Techniques are discussed on how to make the time-critical application run with the highest priority in an uninterrupted manner, reduce network latency for distributed architectures, and handle real-time data acquisition, data storage and retrieval, user interactions, etc.
Keywords: real time data acquisition, real time kernel preemption, scheduling, network latency
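A minimal sketch of the kind of Linux-side expedients being described follows: lock the process memory, move the process into the SCHED_FIFO real-time class and run a fixed-period loop against absolute deadlines. The 1 ms period and priority value are assumptions, and the snippet requires root (or CAP_SYS_NICE) on a Linux host.

```python
# Sketch of GPOS real-time expedients on Linux: memory locking, SCHED_FIFO priority
# and a fixed-period control loop with absolute deadlines. Period and priority are
# illustrative assumptions; run with sufficient privileges.
import ctypes, os, time

libc = ctypes.CDLL("libc.so.6", use_errno=True)
libc.mlockall(3)                                   # MCL_CURRENT | MCL_FUTURE: avoid page faults
os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))   # real-time scheduling class

PERIOD = 0.001                                     # 1 ms control cycle (assumed)
deadline = time.monotonic()
misses = 0
for _ in range(10_000):
    # read sensors, compute the control law, excite actuators here
    deadline += PERIOD
    slack = deadline - time.monotonic()
    if slack > 0:
        time.sleep(slack)
    else:
        misses += 1                                # record a deadline overrun
print("missed deadlines:", misses)
```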
Procedia PDF Downloads 151
7846 Enhancing Plant Throughput in Mineral Processing Through Multimodal Artificial Intelligence
Authors: Muhammad Bilal Shaikh
Abstract:
Mineral processing plants play a pivotal role in extracting valuable minerals from raw ores, contributing significantly to various industries. However, the optimization of plant throughput remains a complex challenge, necessitating innovative approaches for increased efficiency and productivity. This research paper investigates the application of Multimodal Artificial Intelligence (MAI) techniques to address this challenge, aiming to improve overall plant throughput in mineral processing operations. The integration of multimodal AI leverages a combination of diverse data sources, including sensor data, images, and textual information, to provide a holistic understanding of the complex processes involved in mineral extraction. The paper explores the synergies between various AI modalities, such as machine learning, computer vision, and natural language processing, to create a comprehensive and adaptive system for optimizing mineral processing plants. The primary focus of the research is on developing advanced predictive models that can accurately forecast various parameters affecting plant throughput. Utilizing historical process data, machine learning algorithms are trained to identify patterns, correlations, and dependencies within the intricate network of mineral processing operations. This enables real-time decision-making and process optimization, ultimately leading to enhanced plant throughput. Incorporating computer vision into the multimodal AI framework allows for the analysis of visual data from sensors and cameras positioned throughout the plant. This visual input aids in monitoring equipment conditions, identifying anomalies, and optimizing the flow of raw materials. The combination of machine learning and computer vision enables the creation of predictive maintenance strategies, reducing downtime and improving the overall reliability of mineral processing plants. Furthermore, the integration of natural language processing facilitates the extraction of valuable insights from unstructured textual data, such as maintenance logs, research papers, and operator reports. By understanding and analyzing this textual information, the multimodal AI system can identify trends, potential bottlenecks, and areas for improvement in plant operations. This comprehensive approach enables a more nuanced understanding of the factors influencing throughput and allows for targeted interventions. The research also explores the challenges associated with implementing multimodal AI in mineral processing plants, including data integration, model interpretability, and scalability. Addressing these challenges is crucial for the successful deployment of AI solutions in real-world industrial settings. To validate the effectiveness of the proposed multimodal AI framework, the research conducts case studies in collaboration with mineral processing plants. The results demonstrate tangible improvements in plant throughput, efficiency, and cost-effectiveness. The paper concludes with insights into the broader implications of implementing multimodal AI in mineral processing and its potential to revolutionize the industry by providing a robust, adaptive, and data-driven approach to optimizing plant operations. In summary, this research contributes to the evolving field of mineral processing by showcasing the transformative potential of multimodal artificial intelligence in enhancing plant throughput. 
The proposed framework offers a holistic solution that integrates machine learning, computer vision, and natural language processing to address the intricacies of mineral extraction processes, paving the way for a more efficient and sustainable future in the mineral processing industry.
Keywords: multimodal AI, computer vision, NLP, mineral processing, mining
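One way such a multimodal integration can be structured is a late-fusion model in which tabular sensor features, a camera frame and a vectorised maintenance-log entry are encoded separately and concatenated before a throughput regression head. The sketch below is only a schematic of that idea; the encoder sizes, bag-of-words text representation and input shapes are invented and do not correspond to the framework evaluated in the paper.

```python
# Hedged sketch of late multimodal fusion: separate encoders for sensors, images and
# log text, concatenated into one throughput regressor. All sizes are assumptions.
import torch
import torch.nn as nn

class ThroughputFusionNet(nn.Module):
    def __init__(self, n_sensor=32, vocab=500):
        super().__init__()
        self.sensor = nn.Sequential(nn.Linear(n_sensor, 64), nn.ReLU())
        self.vision = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                    nn.Linear(8, 64), nn.ReLU())
        self.text = nn.Sequential(nn.Linear(vocab, 64), nn.ReLU())   # bag-of-words log vector
        self.head = nn.Linear(64 * 3, 1)                              # throughput regression

    def forward(self, sensors, image, log_bow):
        z = torch.cat([self.sensor(sensors), self.vision(image), self.text(log_bow)], dim=1)
        return self.head(z)

model = ThroughputFusionNet()
pred = model(torch.randn(4, 32), torch.randn(4, 3, 64, 64), torch.randn(4, 500))
```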
Procedia PDF Downloads 74
7845 Frequent Item Set Mining for Big Data Using MapReduce Framework
Authors: Tamanna Jethava, Rahul Joshi
Abstract:
Frequent item sets play an essential role in many data mining tasks that try to find interesting patterns from databases. Typically, the term refers to a set of items that frequently appear together in a transaction dataset. There are several mining algorithms used for frequent item set mining, yet most do not scale to the type of data we are presented with today, so-called 'Big Data'. Big Data is a collection of large data sets. Our approach is to work on frequent item set mining over large datasets in a scalable and speedy way. Big Data processing basically works with MapReduce, which, along with HDFS, is used to find frequent item sets from Big Data on a large cluster. This paper focuses on using a pre-processing and mining algorithm as a hybrid approach for big data over the Hadoop platform.
Keywords: frequent item set mining, big data, Hadoop, MapReduce
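The map and reduce phases for frequent item set counting can be sketched in pure Python as below (pairs only, for brevity); on an actual cluster the same mapper/reducer logic runs as Hadoop MapReduce jobs over HDFS, and the transactions and minimum support value here are toy data.

```python
# Pure-Python sketch of MapReduce-style frequent item set counting (2-itemsets).
# Transactions and MIN_SUPPORT are toy values for illustration.
from itertools import combinations
from collections import defaultdict

transactions = [["bread", "milk"], ["bread", "butter", "milk"],
                ["milk", "butter"], ["bread", "butter"], ["bread", "milk", "butter"]]
MIN_SUPPORT = 3

def mapper(transaction):
    for itemset in combinations(sorted(set(transaction)), 2):   # emit candidate 2-itemsets
        yield itemset, 1

def reducer(pairs):
    counts = defaultdict(int)
    for itemset, count in pairs:
        counts[itemset] += count
    return {k: v for k, v in counts.items() if v >= MIN_SUPPORT}

emitted = (kv for t in transactions for kv in mapper(t))         # stand-in for the shuffle stage
print(reducer(emitted))   # {('bread', 'milk'): 3, ('bread', 'butter'): 3, ('butter', 'milk'): 3}
```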
Procedia PDF Downloads 446
7844 Business Continuity Risk Review for a Large Petrochemical Complex
Authors: Michel A. Thomet
Abstract:
A discrete-event simulation model was used to perform a Reliability-Availability-Maintainability (RAM) study of a large petrochemical complex which included sixteen process units and seven feeds and intermediate streams. All the feeds and intermediate streams have associated storage tanks, so that if a processing unit fails and shuts down, the downstream units can keep producing their outputs. This also helps the upstream units, which do not have to reduce their outputs but can store their excess production until the failed unit restarts. Each process unit and each pipe section carrying the feeds and intermediate streams has a probability of failure with an associated distribution and a Mean Time Between Failures (MTBF), as well as a distribution of the time to restore and a Mean Time To Restore (MTTR). The utilities supporting the process units can also fail and have their own distributions with specific MTBF and MTTR. The model runs are for ten years or more, and the runs are repeated several times to obtain statistically relevant results. One of the main results is the On-Stream Factor (OSF) of each process unit (the percentage of hours in a year when the unit is running in nominal conditions). One of the objectives of the study was to investigate whether the storage capacity for each of the feeds and intermediate streams was adequate. This was done by increasing the storage capacities in several steps and running the simulation to see if the OSFs were improved and by how much. Other objectives were to see whether the failure of the utilities was an important factor in the overall OSF, and what could be done to reduce their failure rates through redundant equipment.
Keywords: business continuity, on-stream factor, petrochemical, RAM study, simulation, MTBF
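A minimal discrete-event sketch of the failure/restore/OSF mechanics for a single process unit is given below, using SimPy with exponential MTBF and MTTR distributions; the time figures are placeholders and the full model's storage tanks, stream coupling and utilities are not reproduced.

```python
# SimPy sketch of one process unit failing (exponential MTBF), being restored
# (exponential MTTR) and accumulating its on-stream factor. Values are placeholders.
import random
import simpy

MTBF_H, MTTR_H, HORIZON_H = 2000.0, 48.0, 10 * 8760   # ten simulated years
uptime = 0.0

def process_unit(env):
    global uptime
    while True:
        run = random.expovariate(1.0 / MTBF_H)                  # time until next failure
        yield env.timeout(run)
        uptime += run
        yield env.timeout(random.expovariate(1.0 / MTTR_H))     # restoration outage

env = simpy.Environment()
env.process(process_unit(env))
env.run(until=HORIZON_H)
print("On-stream factor: %.1f %%" % (100 * uptime / HORIZON_H))
```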
Procedia PDF Downloads 223
7843 Adversarial Attacks and Defenses on Deep Neural Networks
Authors: Jonathan Sohn
Abstract:
Deep neural networks (DNNs) have shown state-of-the-art performance for many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks have been studied in the context of deep neural networks; these aim to alter the results of deep neural networks by modifying the inputs slightly. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we focus on studying adversarial attacks and defenses on DNNs for image classification. Two types of adversarial attacks are studied: the fast gradient sign method (FGSM) attack and the projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate the input images into different categories. The adversarial attack slightly alters the image to move it over a decision boundary, causing the DNN to misclassify the image. The FGSM attack obtains the gradient with respect to the image and updates the image once based on the gradient to cross the decision boundary. The PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also another type of attack called the targeted attack. This adversarial attack is designed to make the machine classify an image into a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training. Specifically, instead of training the neural network with clean examples, we can explicitly let the neural network learn from adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by FGSM and PGD attacks, respectively. If we utilize FGSM training as a defense method, the classification accuracy greatly improves from 39.50% to 92.31% for FGSM attacks and from 34.01% to 75.63% for PGD attacks. To further improve the classification accuracy under adversarial attacks, we can also use the stronger PGD training method. PGD training improves the accuracy by 2.7% under FGSM attacks and 18.4% under PGD attacks over FGSM training. It is worth mentioning that both FGSM and PGD training do not affect the accuracy on clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective way to defend against such attacks. PGD attacks and defences are overall significantly more effective than FGSM methods.
Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning
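The two attacks described above can be sketched in a few lines of PyTorch; the epsilon budget, step size and step count below are illustrative choices for MNIST-scale inputs in [0, 1], and `model` stands for any trained classifier.

```python
# Sketch of the FGSM (single signed-gradient step) and PGD (iterative, projected)
# attacks described above. eps, alpha and steps are illustrative values.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps=0.1):
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + eps * images.grad.sign()          # one step in the gradient-sign direction
    return adv.clamp(0, 1).detach()

def pgd_attack(model, images, labels, eps=0.1, alpha=0.02, steps=10):
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        loss.backward()
        adv = (adv + alpha * adv.grad.sign()).detach()
        adv = images + (adv - images).clamp(-eps, eps)   # project back into the eps-ball
        adv = adv.clamp(0, 1)
    return adv
```

Adversarial (FGSM or PGD) training then simply replaces the clean batch with `pgd_attack(model, x, y)` before the usual loss and optimizer step.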
Procedia PDF Downloads 198
7842 Real-Time Monitoring of Complex Multiphase Behavior in a High Pressure and High Temperature Microfluidic Chip
Authors: Renée M. Ripken, Johannes G. E. Gardeniers, Séverine Le Gac
Abstract:
Controlling the multiphase behavior of aqueous biomass mixtures is essential when working in the biomass conversion industry. Here, the vapor/liquid equilibria (VLE) of ethylene glycol, glycerol, and xylitol were studied for temperatures between 25 and 200 °C and pressures of 1 to 10 bar. These experiments were performed in a microfluidic platform, which exhibits excellent heat transfer properties so that equilibrium is reached fast. Firstly, the saturated vapor pressure as a function of temperature and substrate mole fraction was calculated using AspenPlus with a Redlich-Kwong-Soave Boston-Mathias (RKS-BM) model. Secondly, we developed a high-pressure and high-temperature microfluidic set-up for experimental validation. Furthermore, we have studied the multiphase flow pattern that occurs after the saturation temperature was achieved. A glass-silicon microfluidic device containing a 0.4 or 0.2 m long meandering channel with a depth of 250 μm and a width of 250 or 500 μm was fabricated using standard microfabrication techniques. This device was placed in a dedicated chip-holder, which includes a ceramic heater on the silicon side. The temperature was controlled and monitored by three K-type thermocouples: two were located between the heater and the silicon substrate, one to set the temperature and one to measure it, and the third one was placed in a 300 μm wide and 450 μm deep groove on the glass side to determine the heat loss over the silicon. An adjustable back-pressure regulator and a pressure meter were added to control and evaluate the pressure during the experiment. Aqueous biomass solutions (10 wt%) were pumped at a flow rate of 10 μL/min using a syringe pump, and the temperature was slowly increased until the theoretical saturation temperature for the pre-set pressure was reached. First and surprisingly, a significant difference was observed between the theoretical saturation temperature and the experimental results. The experimental values were tens of degrees higher than the calculated ones and, in some cases, saturation could not be achieved. This discrepancy can be explained in different ways. Firstly, the pressure in the microchannel is locally higher due to both the thermal expansion of the liquid and the Laplace pressure that has to be overcome before a gas bubble can be formed. Secondly, superheating effects are likely to be present. Next, once saturation was reached, the flow pattern of the gas/liquid multiphase system was recorded. In our device, the point of nucleation can be controlled by taking advantage of the pressure drop across the channel and the accurate control of the temperature. Specifically, a higher temperature resulted in nucleation further upstream in the channel. As the void fraction increases downstream, the flow regime changes along the channel from bubbly flow to Taylor flow and later to annular flow. All three flow regimes were observed simultaneously. The findings of this study are key for the development and optimization of a microreactor for hydrogen production from biomass.
Keywords: biomass conversion, high pressure and high temperature microfluidics, multiphase, phase diagrams, superheating
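For a rough sense of scale of the Laplace-pressure argument, the following estimate assumes a micron-sized vapour nucleus and a surface tension of about 0.05 N/m for the hot aqueous solution; both numbers are illustrative assumptions, not measured values from this study.

```latex
% Illustrative order-of-magnitude estimate of the extra pressure a nucleating bubble
% must overcome (assumed nucleus radius r ~ 1 um, surface tension sigma ~ 0.05 N/m):
\Delta P_{\mathrm{Laplace}} = \frac{2\sigma}{r}
\approx \frac{2 \times 0.05\ \mathrm{N\,m^{-1}}}{1\ \mu\mathrm{m}} \approx 10^{5}\ \mathrm{Pa} \approx 1\ \mathrm{bar}
```

An extra pressure of this order, on top of the thermal expansion of the confined liquid, raises the local saturation temperature and is qualitatively consistent with the observed superheating.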
Procedia PDF Downloads 221
7841 Advancing Our Understanding of Age-Related Changes in Executive Functions: Insights from Neuroimaging, Genetics and Cognitive Neurosciences
Authors: Yasaman Mohammadi
Abstract:
Executive functions are a critical component of goal-directed behavior, encompassing a diverse set of cognitive processes such as working memory, cognitive flexibility, and inhibitory control. These functions are known to decline with age, but the precise mechanisms underlying this decline remain unclear. This paper provides an in-depth review of recent research investigating age-related changes in executive functions, drawing on insights from neuroimaging, genetics, and cognitive neuroscience. Through an interdisciplinary approach, this paper offers a nuanced understanding of the complex interplay between neural mechanisms, genetic factors, and cognitive processes that contribute to executive function decline in aging. Here, we investigate how different neuroimaging methods, like functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), have helped scientists better understand the brain bases for age-related declines in executive function. Additionally, we discuss the role of genetic factors in mediating individual differences in executive functions across the lifespan, as well as the potential for cognitive interventions to mitigate age-related decline. Overall, this paper presents a comprehensive and integrative view of the current state of knowledge regarding age-related changes in executive functions. It underscores the need for continued interdisciplinary research to fully understand the complex and dynamic nature of executive function decline in aging, with the ultimate goal of developing effective interventions to promote healthy cognitive aging.
Keywords: executive functions, aging, neuroimaging, cognitive neuroscience, working memory, cognitive training
Procedia PDF Downloads 74
7840 Grey Wolf Optimization Technique for Predictive Analysis of Products in E-Commerce: An Adaptive Approach
Authors: Shital Suresh Borse, Vijayalaxmi Kadroli
Abstract:
E-commerce industries nowadays implement the latest AI and ML techniques to improve their performance and prediction accuracy. This helps to gain a huge profit from the online market. Ant Colony Optimization, Genetic Algorithms, Particle Swarm Optimization, Neural Networks and GWO help many e-commerce industries upgrade their predictive performance. These algorithms provide optimum results in various applications, such as stock price prediction, prediction of drug-target interactions, user ratings of similar products on e-commerce sites, etc. In this study, customer reviews play an important role in prediction analysis. People show much interest in buying services and products suggested by other customers. This ultimately increases net profit. In this work, a convolutional neural network (CNN) is proposed, which is further used to optimize the prediction accuracy of an e-commerce website. In this method, a CNN is used to optimize the hyperparameters of the GWO algorithm using an appropriate coding scheme. The accuracy of the model is verified by comparing its results to PSO results whose hyperparameters have been optimized by a CNN on Amazon's customer review dataset. Here, the experimental outcome proves that this proposed system using the GWO algorithm achieves superior performance in terms of accuracy, precision, recall, etc. in prediction analysis compared to the existing systems.
Keywords: prediction analysis, e-commerce, machine learning, grey wolf optimization, particle swarm optimization, CNN
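For reference, the core Grey Wolf Optimizer position-update equations (Mirjalili et al.) can be sketched as below; the simple sphere objective stands in for the far more expensive prediction-accuracy objective used in this kind of study, and the swarm size, iteration count and bounds are illustrative.

```python
# Compact sketch of the Grey Wolf Optimizer: each wolf moves towards an average of
# positions dictated by the alpha, beta and delta wolves, with coefficient a
# decreasing linearly from 2 to 0. The sphere objective is a stand-in.
import numpy as np

def gwo(objective, dim, n_wolves=20, iters=100, lb=-5.0, ub=5.0):
    rng = np.random.default_rng(0)
    wolves = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(objective, 1, wolves)
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]     # three best wolves
        a = 2 - 2 * t / iters                                    # linearly decreasing coefficient
        for i in range(n_wolves):
            x_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                x_new += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(x_new / 3.0, lb, ub)             # average of the three guides
    fitness = np.apply_along_axis(objective, 1, wolves)
    return wolves[np.argmin(fitness)]

print(gwo(lambda x: np.sum(x ** 2), dim=4))                      # converges towards the origin
```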
Procedia PDF Downloads 118
7839 Upgrades for Hydric Supply in Water System Distribution: Use of the Bayesian Network and Technical Expedients
Authors: Elena Carcano, James Ball
Abstract:
This work details the strategies adopted by Italian water utilities for the distribution of water in emergency conditions, which range from earthquakes and droughts to floods and fires. Several water bureaus located across the national territory have been interviewed, and the collected information has been used to build a database of potential interventions to be taken. The work discusses the actions adopted by water utilities. These are generally prioritized in order to minimize the social, temporal, and economic burden that the damaged and nearby areas need to bear. Actions are defined relying on the Bayesian Network approach, which constitutes the hard core of any decision support system. The Bayesian networks provide answers about which interventions suit the real and most likely risky cases. The added value of this research consists in supplying the national bureau in charge of managing havoc and catastrophic situations, namely Protezione Civile, with a univocal plan outline, so that actions can be handled uniformly despite different local laws or contradictory customs which would otherwise squander recovery conditions, proper technical service, and economic aid. The paper is organized as follows: in section 1, the introduction is stated; section 2 provides a brief discussion of BNs (Bayesian Networks); section 3 introduces the adopted methodology; and in the last sections, results are presented, and conclusions are drawn.
Keywords: hierarchical process, strategic plan, water emergency conditions, water supply
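A toy illustration of the Bayesian-network idea follows: a hazard node influences two observable failure nodes, and inference by enumeration updates how credible pipe damage is once a pump failure is reported. The structure and probabilities are invented for illustration and are not the networks elicited from the interviewed utilities.

```python
# Toy Bayesian-network sketch: Hazard -> PipeDamage, Hazard -> PumpFailure.
# Probabilities are invented; inference is done by explicit enumeration.
from itertools import product

p_hazard = {0: 0.9, 1: 0.1}                                     # P(Hazard = h)
p_pipe   = {0: {0: 0.95, 1: 0.05}, 1: {0: 0.40, 1: 0.60}}       # P(PipeDamage = d | Hazard = h)
p_pump   = {0: {0: 0.97, 1: 0.03}, 1: {0: 0.30, 1: 0.70}}       # P(PumpFailure = f | Hazard = h)

def posterior_pipe_damage(pump_failed=1):
    """P(PipeDamage = 1 | PumpFailure = pump_failed) by summing over the joint."""
    joint = {0: 0.0, 1: 0.0}
    for h, d in product((0, 1), (0, 1)):
        joint[d] += p_hazard[h] * p_pipe[h][d] * p_pump[h][pump_failed]
    return joint[1] / (joint[0] + joint[1])

print(round(posterior_pipe_damage(), 3))   # pipe damage becomes far more credible after a pump failure
```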
Procedia PDF Downloads 1677838 Dental Pathologies and Diet in Pre-hispanic Populations of the Equatorial Pacific Coast: Literature Review
Authors: Ricardo Andrés Márquez Ortiz
Abstract:
Objective. The objective of this literature review is to compile updated information from studies that have addressed the association between dental pathologies and diet in prehistoric populations of the equatorial Pacific coast. Materials and methods. The research corresponds to a documentary study with a retrospective ex post facto, historiographic, and bibliometric design. A bibliographic search was carried out in the libraries of the Colombian Institute of Anthropology and History (ICANH) and the National University of Colombia for books and articles on the archeology of the region. In addition, databases and the Internet were searched for books and articles on dental anthropology, archeology, and dentistry concerning the relationship between dental pathologies and diet in prehistoric and current populations from different parts of the world. Conclusions. The complex societies (500 BC - 300 AD) of the equatorial Pacific coast used an agricultural system based on intensive monoculture of corn (Zea mays). This form of subsistence was reflected in an intensification of dental pathologies such as dental caries, dental abscesses generated by cavities, and enamel hypoplasia, associated with a lower frequency of wear. The Upper Formative period (800 AD - 16th century AD) is characterized by the development of polyculture and slash-and-burn agriculture as an adaptive agricultural strategy in response to the ecological damage generated by the intensive economic activity of the complex societies. This shift led to a more varied diet, which produced better dental health.Keywords: dental pathologies, nutritional diet, equatorial pacific coast, dental anthropology
Procedia PDF Downloads 537837 Effect of Climate Change on the Genomics of Invasiveness of the Whitefly Bemisia tabaci Species Complex by Estimating the Effective Population Size via a Coalescent Method
Authors: Samia Elfekih, Wee Tek Tay, Karl Gordon, Paul De Barro
Abstract:
Invasive species represent an increasing threat to food biosecurity, causing significant economic losses in agricultural systems. An example is the sweet potato whitefly, Bemisia tabaci, which is a complex of morphologically indistinguishable species causing average annual global damage estimated at US$2.4 billion. The Bemisia complex represents an interesting model for evolutionary studies because of its extensive distribution and potential for invasiveness and population expansion. Within this complex, two species, Middle East-Asia Minor 1 (MEAM1) and Mediterranean (MED), have invaded well beyond their home ranges, whereas others, such as Indian Ocean (IO) and Australia (AUS), have not. In order to understand why some Bemisia species have become invasive, genome-wide sequence scans were used to estimate population dynamics over time and relate these to climate. The Bayesian Skyline Plot (BSP) method, as implemented in BEAST, was used to infer the historical effective population size. In order to overcome sampling bias, the populations were combined based on geographical origin. The datasets used for this analysis are genome-wide SNPs (single nucleotide polymorphisms) called separately in each of the following groups: Sub-Saharan Africa (Burkina Faso), Europe (Spain, France, Greece and Croatia), USA (Arizona), Mediterranean-Middle East (Israel, Italy), Middle East-Central Asia (Turkmenistan, Iran) and Reunion Island. The non-invasive ‘AUS’ species endemic to Australia was used as an outgroup. The main findings of this study show that the BSP for the Sub-Saharan African MED population is different from that observed in MED populations from the Mediterranean Basin, suggesting evolution under a different set of environmental conditions. For MED, the effective size of the African (Burkina Faso) population showed a rapid expansion ≈250,000-310,000 years ago (YA), preceded by a period of slower growth. The European MED populations (i.e., Spain, France, Croatia, and Greece) showed a single burst of expansion at ≈160,000-200,000 YA. The MEAM1 populations from Israel and Italy and those from Iran and Turkmenistan are similar, as both show the earlier expansion at ≈250,000-300,000 YA. The single IO population lacked the later expansion but had the earlier one. This pattern is shared with the Sub-Saharan African (Burkina Faso) MED, suggesting IO also faced a similar history of environmental change, which seems plausible given their relatively close geographical distributions. In conclusion, populations within the invasive species MED and MEAM1 exhibited signatures of population expansion during the Pleistocene, a geological epoch marked by repeated climatic oscillations with cycles of glacial and interglacial periods, that are lacking in the non-invasive species (IO and AUS). These expansions strongly suggest that features of some Bemisia species' genomes contribute to their adaptability and invasiveness.Keywords: whitefly, RADseq, invasive species, SNP, climate change
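For readers unfamiliar with skyline-type estimators of effective population size, the sketch below implements the classic (non-Bayesian) skyline estimate from coalescent interval lengths. The Bayesian skyline plot used in BEAST additionally smooths across intervals and integrates over genealogical uncertainty; the interval lengths shown here are purely illustrative.

```python
# Classic skyline estimator: for the interval during which a genealogy has
# k lineages, with observed length gamma_k (in generations), the
# method-of-moments estimate of effective size is Ne = k * (k - 1) * gamma_k / 2.

def classic_skyline(intervals):
    """intervals: list of (k, gamma_k) pairs.
    Returns one (k, Ne estimate) pair per coalescent interval."""
    return [(k, k * (k - 1) * gamma / 2.0) for k, gamma in intervals]

# Hypothetical coalescent intervals from a genealogy of 5 sampled genomes.
intervals = [(5, 120.0), (4, 310.0), (3, 800.0), (2, 2500.0)]
for k, ne in classic_skyline(intervals):
    print(f"{k} lineages -> Ne ~ {ne:.0f}")
```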
Procedia PDF Downloads 1287836 Pedestrian Areas, Development Stimulus in Urban Old Fabrics; Analyzing Stroget, Pedestrian Street in Copenhagen
Authors: Kiomars Habibi, Mostafa Behzadfar, Airin Jaberi
Abstract:
Designing appropriate places for the comfort of pedestrians is one of the most important aspects of modern urbanism and a stimulus for the renovation and rehabilitation of old urban fabrics. Cities designed for pedestrians, with complete networks of car-free streets, can be considered among the best places to live in the world, and the number of cities with networks of streets and squares designed around the beauty, enjoyment, and comfort of pedestrians is increasing worldwide; examples include Stockholm, Copenhagen, Munich, Frankfurt, Venice, and Rome. In this paper, we explain the factors behind the success of these cities by examining one of the most important pedestrian ways in the world: Strøget, a car-free zone in Copenhagen, Denmark. This popular tourist attraction in the center of town is the longest pedestrian shopping area in Europe. Analyses indicate that worldwide experience with the renovation and rehabilitation of old fabrics points to many advantages of exploiting the idea of the pedestrian way for regenerating such fabrics. Transforming streets into appropriate places for the comfort of pedestrians, expanding public spaces such as city squares, and decreasing building mass, together with the comfort and peace these measures bring, are the main reasons for the success of the Strøget pedestrian street within the old urban fabric of Copenhagen. Hypothesis: the Strøget pedestrian street has been the development stimulus in Copenhagen, and the development of the city's old urban fabric has followed as a result.Keywords: development, stimulus, pedestrian street, urban landscape, Stroget
Procedia PDF Downloads 1167835 Programming without Code: An Approach and Environment to Conditions-On-Data Programming
Authors: Philippe Larvet
Abstract:
This paper presents the concept of an object-based programming language in which tests (if... then... else) and control structures (while, repeat, for...) disappear and are replaced by conditions on data. Following the object paradigm, data remain embedded inside objects as variable-value couples, but object methods are expressed in the form of logical propositions (‘conditions on data’, or CODs). For instance: variable1 = value1 AND variable2 > value2 => variable3 = value3. To implement this approach, a central inference engine examines the objects one after another, collecting all the CODs of each object. CODs are treated as rules in a rule-based system: the left part of each proposition (the left side of the ‘=>’ sign) is the premise and the right part is the conclusion. Premises are evaluated and, when they hold, the corresponding conclusions are fired. A fired conclusion modifies the variable-value couples of the object, and the engine moves on to examine the next object. The paper develops the principles of writing CODs instead of complex algorithms. Through examples, it also presents several hints for implementing a simple mechanism able to process this ‘COD language’. The proposed approach can be used in the context of simulation, process control, industrial systems validation, etc. By writing simple and rigorous conditions on data, instead of using classical languages that take long to learn, engineers and specialists can easily simulate and validate the functioning of complex systems.Keywords: conditions on data, logical proposition, programming without code, object-oriented programming, system simulation, system validation
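As an illustration of the mechanism described above, the following sketch implements a tiny COD-style engine in Python. The rule representation (premise/conclusion callables) and the fixed-point loop are implementation choices made here for clarity, not the paper's own notation.

```python
# Minimal conditions-on-data (COD) engine: objects hold variable-value couples,
# each COD is a (premise, conclusion) pair, and the engine repeatedly evaluates
# premises and fires conclusions until no data changes.

class CodObject:
    def __init__(self, name, data, rules):
        self.name = name
        self.data = data    # variable-value couples
        self.rules = rules  # list of (premise, conclusion) callables

def run_engine(objects, max_passes=100):
    """Examine objects one after another, firing conclusions whose premises
    hold, until a full pass changes nothing (a simple fixed point)."""
    for _ in range(max_passes):
        changed = False
        for obj in objects:
            for premise, conclusion in obj.rules:
                if premise(obj.data):
                    before = dict(obj.data)
                    conclusion(obj.data)
                    changed = changed or obj.data != before
        if not changed:
            break

# Example COD mirroring "variable1 = value1 AND variable2 > value2 => variable3 = value3"
tank = CodObject(
    "tank",
    {"mode": "filling", "level": 95, "valve": "open"},
    [(
        lambda d: d["mode"] == "filling" and d["level"] > 90,  # premise
        lambda d: d.update({"valve": "closed"}),               # conclusion
    )],
)

run_engine([tank])
print(tank.data)  # {'mode': 'filling', 'level': 95, 'valve': 'closed'}
```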
Procedia PDF Downloads 226