Search results for: edge computing
1251 Numerical Study of Laminar Separation Bubble Over an Airfoil Using γ-ReθT SST Turbulence Model on Moderate Reynolds Number
Authors: Younes El Khchine
Abstract:
A parametric study has been conducted to analyse the flow around the S809 airfoil of a wind turbine in order to better understand the characteristics and effects of the laminar separation bubble (LSB) on aerodynamic design for maximizing wind turbine efficiency. Numerical simulations were performed at low Reynolds numbers by solving the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations on a C-type structured mesh and using the γ-Reθt turbulence model. A two-dimensional study was conducted for a chord Reynolds number of 1×10⁵ and angles of attack (AoA) between 0 and 20.15 degrees. The simulation results obtained for the aerodynamic coefficients at various angles of attack were compared with XFoil results. A sensitivity study was performed to examine the effects of Reynolds number and free-stream turbulence intensity on the location and length of the laminar separation bubble and the aerodynamic performance of wind turbines. The results show that increasing the Reynolds number leads to a delay in the laminar separation on the upper surface of the airfoil. The increase in Reynolds number leads to an accelerated transition process, and the turbulent reattachment point moves closer to the leading edge owing to an earlier reattachment of the turbulent shear layer. This leads to a considerable reduction in the length of the separation bubble as the Reynolds number is increased. An increase in the level of free-stream turbulence intensity leads to a decrease in separation bubble length and an increase in the lift coefficient while having negligible effects on the stall angle. When the AoA increased, the bubble on the suction surface of the airfoil was found to move upstream toward the leading edge, causing earlier laminar separation.
Keywords: laminar separation bubble, turbulence intensity, S809 airfoil, transition model, Reynolds number
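As a quick orientation to the operating point quoted above, the chord Reynolds number Re = U·c/ν fixes the free-stream velocity once a chord length and fluid are chosen. A minimal sketch in Python, assuming an illustrative chord length and the kinematic viscosity of air (neither value is given in the abstract):

```python
# Assumed values: the abstract gives Re and the AoA range but not the chord
# or the fluid properties.
NU_AIR = 1.48e-5   # kinematic viscosity of air [m^2/s] (assumed)
CHORD = 0.20       # chord length [m] (assumed)

def freestream_velocity(re_chord: float, chord: float = CHORD, nu: float = NU_AIR) -> float:
    """Velocity U giving the target chord Reynolds number Re = U * c / nu."""
    return re_chord * nu / chord

u_inf = freestream_velocity(1e5)                      # Re = 1e5, as in the study
aoa_sweep = [round(i * 0.65, 2) for i in range(32)]   # 0 .. 20.15 deg, illustrative steps
print(f"U_inf = {u_inf:.2f} m/s; AoA sweep from {aoa_sweep[0]} to {aoa_sweep[-1]} deg")
```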
Procedia PDF Downloads 70
1250 Numerical Study of Laminar Separation Bubble Over an Airfoil Using γ-ReθT SST Turbulence Model on Moderate Reynolds Number
Authors: Younes El Khchine, Mohammed Sriti
Abstract:
A parametric study has been conducted to analyse the flow around the S809 airfoil of a wind turbine in order to better understand the characteristics and effects of the laminar separation bubble (LSB) on aerodynamic design for maximizing wind turbine efficiency. Numerical simulations were performed at low Reynolds numbers by solving the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations on a C-type structured mesh and using the γ-Reθt turbulence model. A two-dimensional study was conducted for a chord Reynolds number of 1×10⁵ and angles of attack (AoA) between 0 and 20.15 degrees. The simulation results obtained for the aerodynamic coefficients at various angles of attack were compared with XFoil results. A sensitivity study was performed to examine the effects of Reynolds number and free-stream turbulence intensity on the location and length of the laminar separation bubble and the aerodynamic performance of wind turbines. The results show that increasing the Reynolds number leads to a delay in the laminar separation on the upper surface of the airfoil. The increase in Reynolds number leads to an accelerated transition process, and the turbulent reattachment point moves closer to the leading edge owing to an earlier reattachment of the turbulent shear layer. This leads to a considerable reduction in the length of the separation bubble as the Reynolds number is increased. An increase in the level of free-stream turbulence intensity leads to a decrease in separation bubble length and an increase in the lift coefficient while having negligible effects on the stall angle. When the AoA increased, the bubble on the suction surface of the airfoil was found to move upstream toward the leading edge, causing earlier laminar separation.
Keywords: laminar separation bubble, turbulence intensity, S809 airfoil, transition model, Reynolds number
Procedia PDF Downloads 87
1249 A Trends Analysis of Yacht Simulator
Authors: Jae-Neung Lee, Keun-Chang Kwak
Abstract:
This paper analyses international trends in yacht simulators and provides background on yachts. Techniques employed in yacht simulators include image processing for counting the total number of vehicles, edge/target detection, detection and evasion algorithms, image matching using SIFT (scale-invariant feature transform), and the application of median filtering and thresholding.
Keywords: yacht simulator, simulator, trends analysis, SIFT
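Two of the techniques named above, SIFT matching and median filtering, can be sketched with OpenCV. The snippet below is a minimal illustration, not code from the paper; the frame file names are hypothetical placeholders:

```python
import cv2

img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input frames
img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)
img1 = cv2.medianBlur(img1, 5)  # median filter, as mentioned in the abstract
img2 = cv2.medianBlur(img2, 5)

# SIFT keypoints and descriptors (requires opencv-python >= 4.4)
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to discard ambiguous matches
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
print(f"{len(good)} good SIFT matches")
```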
Procedia PDF Downloads 433
1248 Numerical Investigation of Dynamic Stall over a Wind Turbine Pitching Airfoil by Using OpenFOAM
Authors: Mahbod Seyednia, Shidvash Vakilipour, Mehran Masdari
Abstract:
Computations for two-dimensional flow past a stationary and a harmonically pitching wind turbine airfoil at a moderate Reynolds number (400,000) are carried out by progressively increasing the angle of attack for the stationary airfoil and at fixed pitching frequencies for the rotary one. The incompressible Navier-Stokes equations, in conjunction with the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations for turbulence modeling, are solved with the OpenFOAM package to investigate the aerodynamic phenomena occurring under stationary and pitching conditions on a NACA 6-series wind turbine airfoil. The aim of this study is to enhance the accuracy of numerical simulation in predicting the aerodynamic behavior of an oscillating airfoil in OpenFOAM. Hence, for turbulence modelling, k-ω SST with low-Reynolds correction is employed to capture the unsteady phenomena occurring in the stationary and oscillating motion of the airfoil. Using aerodynamic and pressure coefficients along with flow patterns, the unsteady aerodynamics in the pre-, near-, and post-static-stall regions are analyzed for the harmonically pitching airfoil, and the results are validated against the corresponding experimental data possessed by the authors. The results indicate that implementing the mentioned turbulence model leads to accurate prediction of the static-stall angle for the stationary airfoil, and of flow separation, the dynamic stall phenomenon, and reattachment of the flow on the surface of the airfoil for the pitching one. Due to the geometry of the studied 6-series airfoil, the vortex on the upper surface of the airfoil during upstrokes is formed at the trailing edge. Therefore, the flow pattern obtained by our numerical simulations represents the formation and change of the trailing-edge vortex in the near- and post-stall regions, where this process determines the dynamic stall phenomenon.
Keywords: CFD, moderate Reynolds number, OpenFOAM, pitching oscillation, unsteady aerodynamics, wind turbine
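The prescribed pitching motion in studies like this is typically a harmonic oscillation of the angle of attack; the abstract does not give the motion parameters, so the values below are illustrative assumptions:

```python
import numpy as np

# Assumed motion parameters: mean AoA and pitch amplitude [deg], pitch
# frequency [Hz], chord [m] and free-stream velocity [m/s].
alpha_mean, d_alpha = 10.0, 5.0
f, c, u_inf = 2.0, 0.25, 24.0

t = np.linspace(0.0, 2.0 / f, 200)                        # two pitching cycles
alpha = alpha_mean + d_alpha * np.sin(2 * np.pi * f * t)  # alpha(t) [deg]
k = np.pi * f * c / u_inf                                 # reduced frequency
print(f"k = {k:.3f}, alpha range = [{alpha.min():.1f}, {alpha.max():.1f}] deg")
```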
Procedia PDF Downloads 204
1247 The Impact of Artificial Intelligence on Food Nutrition
Authors: Antonyous Fawzy Boshra Girgis
Abstract:
Nutrition labels are diet-related health policies. They help individuals improve food-choice decisions and reduce intake of calories and unhealthy food elements, like cholesterol. However, many individuals do not pay attention to nutrition labels or fail to appropriately understand them. According to the literature, thinking and cognitive styles can have significant effects on attention to nutrition labels. According to the author's knowledge, the effect of global/local processing on attention to nutrition labels has not been previously studied. Global/local processing encourages individuals to attend to the whole/specific parts of an object and can have a significant impact on people's visual attention. In this study, this effect was examined with an experimental design using the eye-tracking technique. The research hypothesis was that individuals with local processing would pay more attention to nutrition labels, including nutrition tables and traffic lights. An experiment was designed with two conditions: global and local information processing. Forty participants were randomly assigned to either global or local conditions, and their processing style was manipulated accordingly. Results supported the hypothesis for nutrition tables but not for traffic lights.
Keywords: nutrition, public health, SA Harvest, food, eye-tracking, nutrition labelling, global/local information processing, individual differences, mobile computing, cloud computing, nutrition label use, nutrition management, barcode scanning
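The group comparison described above boils down to a two-sample test on an eye-tracking measure. A minimal sketch with fabricated dwell-time data (the study's real data are not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Fabricated dwell times [s] on the nutrition table, 20 participants per group
dwell_global = rng.normal(1.8, 0.6, 20)  # global-processing condition
dwell_local = rng.normal(2.5, 0.6, 20)   # local-processing condition

# Independent-samples t-test of the hypothesis that local processors attend more
t_stat, p_value = stats.ttest_ind(dwell_local, dwell_global)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```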
Procedia PDF Downloads 43
1246 Creating Energy Sustainability in an Enterprise
Authors: John Lamb, Robert Epstein, Vasundhara L. Bhupathi, Sanjeev Kumar Marimekala
Abstract:
As we enter the new era of Artificial Intelligence (AI) and Cloud Computing, we mostly rely on the machine learning and natural language processing capabilities of AI, and on energy-efficient hardware and software devices, in almost every industry sector. In these industry sectors, much emphasis is placed on developing new and innovative methods for producing and conserving energy and limiting the depletion of natural resources. The core pillars of sustainability are economic, environmental, and social, informally referred to as the 3 P's (People, Planet and Profits). The 3 P's play a vital role in creating a core sustainability model in the enterprise. Natural resources are continually being depleted, so there is more focus on, and growing demand for, renewable energy. With this growing demand, there is also a growing concern in many industries about how to reduce carbon emissions and conserve natural resources while adopting sustainability in corporate business models and policies. In this paper, we discuss the driving forces, such as climate change, natural disasters, pandemics, disruptive technologies, corporate policies, scaled business models, and emerging social media and AI platforms, that influence the three main pillars of sustainability (3 P's). Through this paper, we would like to bring an overall perspective on enterprise strategies, with a primary focus on the cultural shifts needed in adopting energy-efficient operational models. Overall, many industries across the globe are incorporating core sustainability principles such as reducing energy costs, reducing greenhouse gas (GHG) emissions, reducing waste and increasing recycling, adopting advanced monitoring and metering infrastructure, and reducing server footprint and compute resources (shared IT services, cloud computing, and application modernization), with the vision of a sustainable environment.
Keywords: climate change, pandemic, disruptive technology, government policies, business model, machine learning and natural language processing, AI, social media platform, cloud computing, advanced monitoring, metering infrastructure
Procedia PDF Downloads 112
1245 e-Learning Security: A Distributed Incident Response Generator
Authors: Bel G Raggad
Abstract:
An e-Learning setting is a distributed computing environment where information resources can be connected to any public network. Public networks are very insecure, which can compromise the reliability of an e-Learning environment. This study is only concerned with the intrusion detection aspect of e-Learning security and how incident responses are planned. The literature reports great advances in intrusion detection systems (IDS) but has neglected to study an important IDS weakness: suspected events are detected, but an intrusion is not determined because it is not defined in the IDS databases. We propose a distributed incident response generator (DIRG) that produces incident responses when the working IDS suspects an event that does not correspond to a known intrusion. Data involved in intrusion detection when ample uncertainty is present are often not suitable for formal statistical models, including Bayesian ones. We instead adopt Dempster-Shafer theory to process intrusion data for the unknown event. The DIRG engine transforms data into a belief structure using incident scenarios deduced by the security administrator. Belief values associated with various incident scenarios are then derived and evaluated to choose the most appropriate scenario, for which an automatic incident response is generated. This article provides a numerical example demonstrating the working of the DIRG system.
Keywords: decision support system, distributed computing, e-Learning security, incident response, intrusion detection, security risk, stateful inspection
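The belief-structure machinery the DIRG engine relies on is Dempster's rule of combination. A minimal sketch, with incident scenarios and mass values that are illustrative assumptions rather than the paper's numerical example:

```python
from itertools import product

def combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule: combine two mass functions keyed by frozensets."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict; sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

S = frozenset  # shorthand
scan, dos = S({"port_scan"}), S({"dos"})
theta = scan | dos  # frame of discernment (represents ignorance)

m_sensor = {scan: 0.6, dos: 0.1, theta: 0.3}  # evidence from the IDS sensor (assumed)
m_admin = {scan: 0.4, dos: 0.2, theta: 0.4}   # administrator's scenario beliefs (assumed)
print(combine(m_sensor, m_admin))
```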
Procedia PDF Downloads 438
1244 Innovation in PhD Training in the Interdisciplinary Research Institute
Authors: B. Shaw, K. Doherty
Abstract:
The Cultural Communication and Computing Research Institute (C3RI) is a diverse multidisciplinary research institute including art, design, media production, communication studies, computing and engineering. Across these disciplines it can seem like there are enormous differences in research practice and convention, including differing positions on objectivity and subjectivity, certainty and evidence, and different political and ethical parameters. These differences sit within often-unacknowledged histories, codes, and communication styles of specific disciplines, and it is all these aspects that can make understanding of research practice across disciplines difficult. To explore this, a one-day event was orchestrated, testing how a PhD community might communicate and share research in progress in a multi-disciplinary context. Instead of presenting results at a conference, research students were tasked to articulate their method of inquiry. A working party of students from across disciplines had to design a conference call, a visual identity and an event framework that would work for students across all disciplines. The process of establishing the shape and identity of the conference was revealing. Even finding a linguistic frame that would meet the expectations of different disciplines for the conference call was challenging. The first abstracts submitted either resorted to reporting findings or described method only briefly. It took several weeks of supported intervention for research students to get 'inside' their method and to understand their research practice as a process rich with philosophical and practical decisions and implications. In response to the abstracts, the conference committee generated key methodological categories for conference sessions, including sampling, capturing 'experience', 'making models', researcher identities, and 'constructing data'. Each session involved presentations by visual artists, communications students and computing researchers, with inter-disciplinary dialogue facilitated by alumni Chairs. The apparently simple focus on method illuminated the research process as a site of creativity, innovation and discovery, and also built epistemological awareness, drawing attention to what is being researched and how it can be known. It was surprisingly difficult to limit students to discussing method, and it was apparent that the vocabulary available for method is sometimes limited. However, by focusing on method rather than results, the genuine process of research, rather than one constructed for approval, could be captured. In unlocking the twists and turns of planning and implementing research, and the impact of circumstance and contingency, students had to reflect frankly on successes and failures. This level of self- and public-critique emphasised the degree of critical thinking and rigour required in executing research and demonstrated that honest reportage of research, faults and all, is good, valid research. The process also revealed the degree to which disciplines can learn from each other: the computing students gained insights from the sensitive social contextualizing generated by communications and art and design students, and art and design students gained understanding from the greater 'distance' and emphasis on application that computing students applied to their subjects.
Finding the means to develop dialogue across disciplines makes researchers better equipped to devise and tackle research problems across disciplines, potentially laying the ground for more effective collaboration.
Keywords: interdisciplinary, method, research student, training
Procedia PDF Downloads 207
1243 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach
Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini
Abstract:
Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists amongst accuracy, computing resources and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and identify phenotype relationships, which are important especially in explaining complex biological mechanisms.
Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing
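The pipeline described above can be sketched as: decompose each genome into k-mers, vectorize the counts, and fit a classifier. The toy sequences and the random-forest stand-in below are assumptions for illustration; the study evaluates its own models (see the AWD-LSTM keyword) on 104 MTB genomes:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

def kmerize(seq: str, k: int = 10) -> str:
    """Represent a sequence as a space-separated 'document' of k-mers."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

# Toy genomes and labels (e.g., drug-susceptible vs drug-resistant); placeholders
genomes = ["ACGTACGTACGTAAGG", "TTGCACGTAAGGTTGC", "ACGTTTGCACGTACGT", "AAGGTTGCAAGGACGT"]
phenotypes = [0, 1, 0, 1]

# Count each k-mer per genome, mirroring the k-mer representation in the text
vectorizer = CountVectorizer(analyzer="word", token_pattern=r"\S+")
X = vectorizer.fit_transform([kmerize(g) for g in genomes])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, phenotypes)
print(clf.predict(vectorizer.transform([kmerize("ACGTACGTAAGGTTGC")])))
```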
Procedia PDF Downloads 169
1242 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach
Authors: Darlington Mapiye, Mpho Mokoatle, James Mashiyane, Stephanie Muller, Gciniwe Dlamini
Abstract:
Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists amongst accuracy, computing resources and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and identify phenotype relationships, which are important especially in explaining complex biological mechanisms.
Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing
Procedia PDF Downloads 160
1241 Flow-Control Effectiveness of Convergent Surface Indentations on an Aerofoil at Low Reynolds Numbers
Authors: Neel K. Shah
Abstract:
Passive flow control on aerofoils has largely been achieved through the use of protrusions such as vane-type vortex generators. Consequently, innovative flow-control concepts should be explored in an effort to improve current component performance. Therefore, experimental research has been performed at The University of Manchester to evaluate the flow-control effectiveness of a vortex generator made in the form of a surface indentation. The surface indentation has a trapezoidal planform. A spanwise array of indentations has been applied in a convergent orientation around the maximum-thickness location of the upper surface of a NACA-0015 aerofoil. The aerofoil has been tested in a two-dimensional set-up in a low-speed wind tunnel at an angle of attack (AoA) of 3° and a chord-based Reynolds number (Re) of ~2.7 × 10⁵. The baseline model has been found to suffer from a laminar separation bubble at low AoA. The application of the indentations at 3° AoA has considerably shortened the separation bubble. The indentations achieve this by shedding up-flow pairs of streamwise vortices. Despite the considerable reduction in bubble length, the increase in leading-edge suction due to the shorter bubble is limited by the removal of surface curvature and blockage (increase in surface pressure) caused locally by the convergent indentations. Furthermore, the up-flow region of the vortices, which locally weakens the pressure recovery around the trailing edge of the aerofoil by thickening the boundary layer, also contributes to this limitation. Due to the conflicting effects of the indentations, the changes in the pressure-lift and pressure-drag coefficients, i.e., cl,p and cd,p, are small. Nevertheless, the indentations have improved cl,p and cd,p beyond the uncertainty range, i.e., by ~1.30% and ~0.30%, respectively, at 3° AoA. The wake measurements show that turbulence intensity and Reynolds stresses have considerably increased in the indented case, thus implying that the indentations increase the viscous drag on the model. In summary, the convergent indentations are able to reduce the size of the laminar separation bubble, but conversely, they are not highly effective in reducing cd,p at the tested Reynolds number.
Keywords: aerofoil flow control, laminar separation bubbles, low Reynolds-number flows, surface indentations
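For readers unfamiliar with the pressure-lift coefficient cl,p discussed above, it is obtained by integrating the surface-pressure coefficient over the chord. A minimal sketch with fabricated Cp distributions standing in for wind-tunnel tap data:

```python
import numpy as np

x_c = np.linspace(0.0, 1.0, 50)                 # chordwise tap positions x/c
cp_upper = -1.2 * np.exp(-4.0 * x_c) - 0.1      # fabricated suction-side Cp
cp_lower = 0.4 * np.exp(-6.0 * x_c) + 0.05      # fabricated pressure-side Cp

# cl,p ~ integral of (Cp_lower - Cp_upper) d(x/c), a small-AoA approximation;
# trapezoidal rule over the tap positions.
dcp = cp_lower - cp_upper
cl_p = float(np.sum(0.5 * (dcp[1:] + dcp[:-1]) * np.diff(x_c)))
print(f"cl,p = {cl_p:.3f}")
```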
Procedia PDF Downloads 227
1240 Robust Recognition of Locomotion Patterns via Data-Driven Machine Learning in the Cloud Environment
Authors: Shinoy Vengaramkode Bhaskaran, Kaushik Sathupadi, Sandesh Achar
Abstract:
Human locomotion recognition is important in a variety of sectors, such as robotics, security, healthcare, fitness tracking and cloud computing. With the increasing pervasiveness of peripheral devices, particularly Inertial Measurement Unit (IMU) sensors, researchers have attempted to exploit these advancements in order to precisely and efficiently identify and categorize human activities. This research paper introduces a state-of-the-art methodology for the recognition of human locomotion patterns in a cloud environment. The methodology is based on a publicly available benchmark dataset. The investigation implements a denoising and windowing strategy to deal with the unprocessed data. Next, feature extraction is adopted to abstract the main cues from the data. The SelectKBest strategy is used to select optimal features from the data. Furthermore, state-of-the-art ML classifiers, including logistic regression, random forest, gradient boosting and SVM, are investigated to evaluate the performance of the system and accomplish precise locomotion classification. Finally, a detailed comparative analysis of the results is presented to reveal the performance of the recognition models.
Keywords: artificial intelligence, cloud computing, IoT, human locomotion, gradient boosting, random forest, neural networks, body-worn sensors
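A minimal sketch of the stated pipeline (SelectKBest feature selection followed by the four listed classifiers), using synthetic stand-in features since the benchmark dataset is not reproduced here:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))    # 40 features extracted per IMU window (synthetic)
y = rng.integers(0, 4, size=500)  # 4 locomotion classes (toy labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
    "svm": SVC(),
}
for name, model in models.items():
    pipe = make_pipeline(SelectKBest(f_classif, k=15), model)  # keep 15 best features
    print(name, pipe.fit(X_tr, y_tr).score(X_te, y_te))
```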
Procedia PDF Downloads 13
1239 Mapping and Database on Mass Movements along the Eastern Edge of the East African Rift in Burundi
Authors: L. Nahimana
Abstract:
The eastern edge of the East African Rift in Burundi exhibits many mass movement phenomena, including landslides, mudflows, debris flows, spectacular erosion (mega-gullies), flash floods and alluvial deposits. These phenomena usually occur during the rainy season. Their extent and the consequent damage vary widely. To manage these phenomena, it is necessary to adopt a methodological approach to their mapping, with a structured database. The elements for this database are: the three-dimensional extent of the phenomenon, natural causes and conditions (geological lithology, slope, weathering depth and products, rainfall patterns, natural environment) and the anthropogenic factors corresponding to the various human activities. The extent of the area provides information about the possibilities and opportunities for mitigation techniques. The lithological nature helps in understanding the influence of the nature of the rock and its structure on the intensity of the weathering of rocks, as well as the geotechnical properties of the weathering products. The slope influences land stability. The intensity of annual, monthly and daily rainfall helps in understanding the water-saturation conditions of the terrain. Certain natural circumstances, such as the presence of streams and rivers, promote foot-slope erosion and thus the occurrence and activity of mass movements. The construction of some infrastructure, such as new roads and agglomerations, deeply modifies the flow of surface water and groundwater, which is followed by mass movements. Using geospatial data selected on the East African Rift in Burundi, cases of mass movements are presented, illustrating the nature, importance, various factors and the extent of the damage. An analysis of these elements for each hazard can guide the options for mitigating the phenomenon and its consequences.
Keywords: mass movement, landslide, mudflow, debris flow, spectacular erosion, mega-gully, flash flood, alluvial deposit, East African rift, Burundi
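The database elements enumerated above map naturally onto a structured record. A minimal sketch, where the field names and types are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class MassMovementRecord:
    # Field names/types are hypothetical; they follow the elements listed above
    phenomenon: str                  # landslide, mudflow, debris flow, mega-gully...
    extent_m3: float                 # three-dimensional extent of the phenomenon
    lithology: str                   # geological lithology at the site
    slope_deg: float                 # local slope
    weathering_depth_m: float        # depth of the weathering profile
    annual_rainfall_mm: float        # rainfall regime
    anthropogenic_factors: list[str] = field(default_factory=list)  # roads, settlements...

record = MassMovementRecord("landslide", 1.2e4, "weathered gneiss", 28.0, 6.5,
                            1400.0, ["new road cut"])
print(record)
```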
Procedia PDF Downloads 307
1238 Coordinated Multi-Point Scheme Based on Channel State Information in MIMO-OFDM System
Authors: Su-Hyun Jung, Chang-Bin Ha, Hyoung-Kyu Song
Abstract:
Recently, increasing the quality of experience (QoE) has become an important issue. Since performance degradation at the cell edge severely reduces the QoE, several techniques are defined in the LTE/LTE-A standards to remove inter-cell interference (ICI). However, the conventional techniques have a disadvantage in that there is a trade-off between resource allocation and reliable communication. The proposed scheme reduces the ICI more efficiently by using channel state information (CSI) smartly. It is shown that the proposed scheme can reduce the ICI with fewer resources.
Keywords: adaptive beamforming, CoMP, LTE-A, ICI reduction
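To make the CSI-based idea concrete, the sketch below computes transmit beamforming weights from a channel matrix using zero-forcing, a generic stand-in since the abstract does not specify the precoder:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_users = 4, 2
# Rayleigh-fading channel matrix H (n_users x n_tx), standing in for reported CSI
H = (rng.normal(size=(n_users, n_tx)) + 1j * rng.normal(size=(n_users, n_tx))) / np.sqrt(2)

# Zero-forcing precoder: W = H^H (H H^H)^-1, then normalise per-user columns
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W /= np.linalg.norm(W, axis=0, keepdims=True)

# A near-diagonal effective channel means inter-user interference is removed
print(np.round(np.abs(H @ W), 3))
```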
Procedia PDF Downloads 470
1237 Hybrid Genetic Approach for Solving Economic Dispatch Problems with Valve-Point Effect
Authors: Mohamed I. Mahrous, Mohamed G. Ashmawy
Abstract:
A hybrid genetic algorithm (HGA) is proposed in this paper to determine the economic scheduling of electric power generation over a fixed time period under various system and operational constraints. The proposed technique can outperform conventional genetic algorithms (CGAs) in the sense that the HGA makes it possible to improve the quality of the solution while reducing computing expense. In contrast, any carefully designed GA can only balance the exploration and the exploitation of the search effort, which means that an increase in the accuracy of a solution can only occur at the sacrifice of convergence speed, and vice versa. It is unlikely that both of them can be improved simultaneously. The proposed hybrid scheme is developed in such a way that a simple GA acts as a base-level search, making a quick decision to direct the search towards the optimal region, and a local search method (a pattern search technique) is then employed to do the fine-tuning. The aim of the strategy is to achieve cost reduction within a reasonable computing time. The effectiveness of the proposed hybrid technique is verified on two real public electricity supply systems with 13 and 40 generator units, respectively. The simulation results obtained with the HGA for the two real systems are very encouraging with regard to the computational expense and the cost reduction of power generation.
Keywords: genetic algorithms, economic dispatch, pattern search
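The two-stage idea, a GA for coarse exploration followed by a pattern search for fine-tuning, can be sketched for a single generator with a valve-point-effect cost curve. The cost coefficients and the simple operators below are textbook-style placeholders, not the paper's test systems:

```python
import math
import random

P_MIN, P_MAX = 100.0, 500.0  # generator limits [MW] (assumed)

def cost(p: float) -> float:
    # Quadratic fuel cost plus the rectified-sine valve-point term (coefficients assumed)
    return 500 + 5.3 * p + 0.004 * p**2 + abs(300 * math.sin(0.035 * (P_MIN - p)))

def ga_stage(pop_size: int = 30, gens: int = 60) -> float:
    """Crude GA: elitist selection plus Gaussian mutation, directing the search."""
    pop = [random.uniform(P_MIN, P_MAX) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]
        pop = elite + [min(max(p + random.gauss(0, 10), P_MIN), P_MAX) for p in elite]
    return min(pop, key=cost)

def pattern_search(p: float, step: float = 8.0, tol: float = 1e-4) -> float:
    """Compass-style pattern search: probe +/- step, halve the step on failure."""
    while step > tol:
        trial = min((p - step, p, p + step), key=cost)
        p, step = (trial, step) if trial != p else (p, step / 2)
    return p

random.seed(1)
best = pattern_search(ga_stage())
print(f"P* = {best:.2f} MW, cost = {cost(best):.2f} $/h")
```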
Procedia PDF Downloads 445
1236 An Analysis of Innovative Cloud Model as Bridging the Gap between Physical and Virtualized Business Environments: The Customer Perspective
Authors: Asim Majeed, Rehan Bhana, Mak Sharma, Rebecca Goode, Nizam Bolia, Mike Lloyd-Williams
Abstract:
This study aims to investigate and explore the underlying causes of the security concerns that emerged among customers when WHSmith transformed its physical system into a virtualized business model through NetSuite. NetSuite is essentially fully integrated software that helps transform a physical system into a virtualized business model. Modern organisations are moving away from traditional business models to cloud-based models, and consequently a better, more secure and innovative environment for customers is expected. A vital issue of the modern age is security when transforming to virtualized, cloud-based models, and designers of interactive systems often misunderstand, or even ignore, privacy, thus causing concerns for users. A content analysis approach was used to collect qualitative data from 120 online bloggers, including TRUSTPILOT. The results and findings provide useful new insights into the nature and form of the security concerns of online users after they have used the WHSmith services offered online through its website. The findings have theoretical as well as practical implications for the successful adoption of the cloud computing business-to-business model and similar systems.
Keywords: innovation, virtualization, cloud computing, organizational flexibility
Procedia PDF Downloads 384
1235 The Tectonic Transition from the Paleo-Asian Ocean to the Paleo-Pacific Ocean in Northeastern Eurasia: Constraints from the Early Mesozoic Volcanic-Sedimentary Rocks
Authors: Hong-Yan Wang, Jian-Bo Zhou, Simon A. Wilde
Abstract:
Since the Paleozoic, the tectonic evolution of northeastern Eurasia has been dominated by the Paleo-Asian Ocean and Paleo-Pacific Ocean tectonic domains. However, the spatiotemporal framework and the timing of the tectonic transition between these two oceanic domains remain enigmatic. To address this issue, this study investigated the petrological, geochronological, and geochemical characteristics of chlorite schist, andesite and sandstone along the convergent margin (Jilin-Yanji Suture) between the Northeast China terranes and the North China Craton in central Jilin Province, China. The results show that the protoliths of the chlorite schists and the andesite samples are high-Mg andesites formed in the Early Triassic (249 ± 3 Ma to 246 ± 4 Ma), and their magma source was a mantle wedge metasomatized by subducted slab-derived fluids in a continental island arc setting. The sandstones are immature graywackes with a maximum depositional age of Early Triassic (248 ± 1 Ma); they were formed in a sedimentary basin (most likely an intra-arc basin) intimately associated with one or more continental arcs along the northeastern edge of the North China Craton, and their sediments were largely derived from coeval magmatic rocks in a juvenile continental arc. Based on the new results and the field investigation, this study suggests that the continental arcs along the northeastern edge of the North China Craton were caused by the southwestward subduction of the Jilin-Heilongjiang Ocean during the early Mesozoic. There is a distinct temporal gap between the closure of the Paleo-Asian Ocean (ca. 260 Ma) and the onset of Paleo-Pacific plate subduction (234–220 Ma), which is essentially coeval with the southwestward subduction of the Jilin-Heilongjiang Ocean between 256 Ma and 239 Ma, meaning the latter is a key link that marks the transition between these two tectonic domains.
Keywords: intra-arc basin, high-Mg andesites, early Mesozoic, Jilin-Heilongjiang Ocean, tectonic transition in northeastern Eurasia
Procedia PDF Downloads 5
1234 The Possible Double-Edged Sword Effects of Online Learning on Academic Performance: A Quantitative Study of Preclinical Medical Students
Authors: Atiwit Sinyoo, Sekh Thanprasertsuk, Sithiporn Agthong, Pasakorn Watanatada, Shaun Peter Qureshi, Saknan Bongsebandhu-Phubhakdi
Abstract:
Background: Since the SARS-CoV-2 virus became extensively disseminated throughout the world, online learning has become one of the most hotly debated topics in educational reform. While some studies have already shown the advantages of online learning, there are still questions concerning how online learning affects students' learning behavior and academic achievement when each student learns in a different way. Hence, we aimed to develop a guide for preclinical medical students to avoid the drawbacks and gain the benefits of online learning, which is possibly a double-edged sword. Methods: We used a multiple-choice questionnaire to evaluate the learning behavior of second-year Thai medical students in the neuroscience course. All traditional face-to-face lecture classes were video-recorded and promptly posted to the online learning platform throughout this course. Students could pick and choose whatever classes they wanted to attend, and they could use online learning as often as they wished. Academic performance was evaluated as the summative score, spot-exam score and pre-post-test improvement. Results: The more frequently students used the online learning platform, the less they attended lecture classes (P = 0.035). High proactive online learners (High PO) who were irregular attendees (IrA) had significantly lower summative scores (P = 0.026), spot-exam scores (P = 0.012) and pre-post-test improvement (P = 0.036). Meanwhile, conditional attendees (CoA), who only attended classes with an attendance check, had significantly higher summative scores (P = 0.025) and spot-exam scores (P = 0.001) if they were in the High PO group. Conclusions: The benefit and drawback edges of using an online learning platform were demonstrated in our research. Based on this double-edged sword effect, we believe that online learning is a valuable learning strategy, but students must carefully plan their study schedule to gain the "benefit edge" while avoiding the "drawback edge".
Keywords: academic performance, assessment, attendance, online learning, preclinical medical students
Procedia PDF Downloads 161
1233 Design and Simulation of Low Threshold Nanowire Photonic Crystal Surface Emitting Lasers
Authors: Balthazar Temu, Zhao Yan, Bogdan-Petrin Ratiu, Sang Soon Oh, Qiang Li
Abstract:
Nanowire-based Photonic Crystal Surface Emitting Lasers (PCSELs) reported in the literature have been designed using triangular, square or honeycomb patterns. The triangular- and square-pattern PCSELs have limited degrees of freedom in tuning the design parameters, which hinders the ability to design high-quality-factor (Q-factor) devices. Nanowire-based PCSELs designed using triangular and square patterns have been reported with lasing thresholds of 130 kW/cm² and 7 kW/cm², respectively. On the other hand, the honeycomb pattern gives more degrees of freedom in tuning the design parameters, which allows one to design high-Q-factor devices. A deformed honeycomb-pattern device has been reported with a lasing threshold of 6.25 W/cm², corresponding to a simulated Q-factor of 5.84×10⁵. Despite this achievement, the design principles that can lead to the realization of even higher-Q-factor honeycomb-pattern PCSELs have not yet been investigated. In this work, we show that by deforming the honeycomb pattern and tuning the height and lattice constants of the nanowires, it is possible to achieve even higher-Q-factor devices. Considering three different band-edge modes, we investigate how the resonance wavelength changes as the device is deformed, which is useful in designing high-Q-factor devices in different wavelength bands. We eventually establish the design and simulation of honeycomb PCSELs operating around the wavelength of 960 nm and in the O and C bands, with Q-factors up to 7×10⁷. We also investigate the Q-factors of the undeformed device and establish that the mode at the band edge close to 960 nm attains the highest Q-factor of all the modes when the device is undeformed, and that the Q-factor degrades as the device is deformed. This work is a stepping stone towards the fabrication of very-high-Q-factor, nanowire-based honeycomb PCSELs, which are expected to have very low lasing thresholds.
Keywords: designing nanowire PCSEL, designing PCSEL on silicon substrates, low threshold nanowire laser, simulation of photonic crystal lasers
Procedia PDF Downloads 13
1232 Comparative Settlement Analysis on the under of Embankment with Empirical Formulas and Settlement Plate Measurement for Reducing Building Crack around of Embankments
Authors: Safitri Nur Wulandari, M. Ivan Adi Perdana, Prathisto L. Panuntun Unggul, R. Dary Wira Mahadika
Abstract:
In road construction on soft soil, a soil improvement method is needed to increase the bearing capacity of the subgrade so that the soil can withstand traffic loads. Most of the land in Indonesia consists of soft soil, where soft soil is a type of clay with a consistency of very soft to medium stiff, an undrained shear strength Cu < 0.25 kg/cm², or an estimated value of NSPT < 5 blows/ft. This study focuses on the analysis of the effect of the preloading load (embankment) on the settlement ratio under the embankment, which in turn affects building cracks around embankments. The method used in this research is a superposition method for the embankment distribution at 27 locations, with undisturbed soil samples from borehole points in Java and Kalimantan, Indonesia. The results were then correlated with settlement plate monitoring in the field using the Asaoka method. The settlement plate monitoring results were taken from an embankment at Ahmad Yani airport in Semarang at 32 points. The values of Cc (compression index) were based on laboratory test results, while untested Cc values were obtained from the empirical formula of Ardhana and Mochtar (1999). In this research, the field monitoring showed almost the same results as the empirical formulation, with a standard deviation of 4%, where the empirical result of this analysis is obtained by a linear formula. The empirical linear formula for the effect of compression of an embankment as high as 4.25 m is 3.1209x + y = 0.0026 for an embankment slope of 1:8, for the same analysis with the initial embankment height in the field. Note that at the edge of the embankment the settlement is not equal to 0: at a quarter of the embankment the settlement ratio averages 0.951, while at the edge the settlement ratio is 0.049. The influence area around the embankment is approximately 1 meter for a slope of 1:8 and 7 meters for a slope of 1:2. This settlement can thus cause building cracks, which should be accounted for to build in sustainable development.
Keywords: building cracks, influence area, settlement plate, soft soil, empirical formula, embankment
Procedia PDF Downloads 345
1231 Analyzing the Street Pattern Characteristics on Young People’s Choice to Walk or Not: A Study Based on Accelerometer and Global Positioning Systems Data
Authors: Ebru Cubukcu, Gozde Eksioglu Cetintahra, Burcin Hepguzel Hatip, Mert Cubukcu
Abstract:
Obesity and overweight cause serious health problems. Public and private organizations aim to encourage walking in various ways in order to cope with the problem of obesity and overweight. This study aims to understand how the spatial characteristics of an urban street pattern, its connectivity and complexity, influence young people's choice to walk or not. 185 public university students in Izmir, the third largest city in Turkey, participated in the study. Each participant wore an accelerometer and a global positioning system (GPS) device for a week. The accelerometer records data on the intensity of the participant's activity at a specified time interval, and the GPS device records the locations of those activities. Combining the two datasets, activity maps are derived. These maps are then used to differentiate the participants' walking trips from their motor-vehicle trips. From these, the frequencies of walking and motor-vehicle trips are calculated at the street-segment level, and the street segments are then categorized into two groups: 'preferred by pedestrians' and 'preferred by motor vehicles'. Graph Theory-based accessibility indices are calculated to quantify the spatial characteristics of the streets in the sample. Six different indices are used: (I) edge density, (II) edge sinuosity, (III) eta index, (IV) node density, (V) order of a node, and (VI) beta index. T-tests show that the index values for the 'preferred by pedestrians' and 'preferred by motor vehicles' segments are significantly different. The findings indicate that the spatial characteristics of the street network have a measurable effect on young people's choice to walk or not. Policy implications are discussed. This study is funded by the Scientific and Technological Research Council of Turkey, Project No: 116K358.
Keywords: graph theory, walkability, accessibility, street network
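Several of the listed indices follow directly from the graph representation; for example, the beta index is the ratio of edges to nodes and the eta index is the mean edge length. A minimal sketch with networkx on a toy street network (coordinates and study-area size are assumptions):

```python
import networkx as nx

G = nx.Graph()
# Toy street intersections keyed by id with (x, y) positions in metres
pos = {0: (0, 0), 1: (120, 0), 2: (120, 90), 3: (0, 90), 4: (220, 45)}
G.add_nodes_from(pos)
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0), (1, 4), (2, 4)]:
    dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
    G.add_edge(u, v, length=(dx * dx + dy * dy) ** 0.5)

area_km2 = 0.05                                        # assumed study-area size
total_len = sum(d["length"] for _, _, d in G.edges(data=True))
beta = G.number_of_edges() / G.number_of_nodes()       # beta index: edges per node
eta = total_len / G.number_of_edges()                  # eta index: mean edge length
edge_density = (total_len / 1000) / area_km2           # km of street per km^2
print(f"beta={beta:.2f}, eta={eta:.1f} m, edge density={edge_density:.1f} km/km^2")
```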
Procedia PDF Downloads 228
1230 Development of a Microfluidic Device for Low-Volume Sample Lysis
Authors: Abbas Ali Husseini, Ali Mohammad Yazdani, Fatemeh Ghadiri, Alper Şişman
Abstract:
We developed a microchip device that uses surface acoustic waves for the rapid lysis of low-volume cell samples. The device incorporates sharp-edged glass microparticles for improved performance. We optimized the lysis conditions for high efficiency and evaluated the device's feasibility for point-of-care applications. The microchip contains a 13-finger-pair interdigital transducer with a 30-degree focus angle. It generates high-intensity acoustic beams that converge 6 mm away. The microchip operates at a frequency of 16 MHz, exciting Rayleigh waves with a 250 µm wavelength on the LiNbO3 substrate. Cell lysis occurs when Candida albicans cells and glass particles are placed within the focal area. The high-intensity surface acoustic waves induce centrifugal forces on the cells and glass particles, resulting in cell lysis through lateral forces from the sharp-edged glass particles. We conducted 42 pilot cell lysis experiments to optimize the surface acoustic wave-induced streaming. We varied the electrical power, droplet volume, glass particle size, concentration, and lysis time. A regression machine-learning model determined the impact of each parameter on lysis efficiency. Based on these findings, we predicted the optimal conditions: an electrical signal of 2.5 W, a sample volume of 20 µl, a glass particle size below 10 µm, a concentration of 0.2 µg, and a 5-minute lysis period. Downstream analysis successfully amplified a DNA target fragment directly from the lysate. The study presents an efficient microchip-based cell lysis method employing acoustic streaming and microparticle collisions within microdroplets. Integration of the surface acoustic wave-based lysis chip with an isothermal amplification method enables swift point-of-care applications.
Keywords: cell lysis, surface acoustic wave, micro-glass particle, droplet
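The parameter-ranking step can be sketched with a regression model fitted to the five varied parameters. The 42-run dataset below is synthetic; only the parameter list follows the text, and the model choice is an assumption:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Columns: electrical power [W], droplet volume [ul], particle size [um],
# particle concentration [ug], lysis time [min] -- 42 synthetic pilot runs
X = rng.uniform([0.5, 5, 2, 0.05, 1], [3.0, 40, 20, 0.5, 10], size=(42, 5))
y = 0.4 * X[:, 0] - 0.02 * X[:, 2] + 0.1 * X[:, 4] + rng.normal(0, 0.1, 42)  # toy efficiency

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["power", "volume", "size", "conc", "time"],
                     model.feature_importances_):
    print(f"{name:>6}: {imp:.2f}")  # relative impact of each parameter
```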
Procedia PDF Downloads 79
1229 Optimizing the Location of Parking Areas Adapted for Dangerous Goods in the European Road Transport Network
Authors: María Dolores Caro, Eugenio M. Fedriani, Ángel F. Tenorio
Abstract:
The transportation of dangerous goods by lorries throughout Europe must be done using the roads conforming to the European Road Transport Network. In this network, there are several parking areas where lorry drivers can park to rest according to the regulations. According to the "European Agreement concerning the International Carriage of Dangerous Goods by Road", parking areas where lorries transporting dangerous goods can park to rest must follow several security stipulations to keep the rest of the road users safe. In this respect, these lorries must be parked in adapted areas with strict and permanent surveillance measures. Moreover, drivers must satisfy several restrictions on resting and driving time. Given these requirements, one might expect that there exist enough parking areas for the transport of this type of goods to obey the regulations prescribed by the European Union and its member countries. However, the already-existing parking areas are not sufficient to cover all the stops required by drivers transporting dangerous goods. Our main goal is, starting from the already-existing parking areas and the loading-and-unloading locations, to provide an optimal answer to the following question: how many additional parking areas must be built, and where must they be located, to ensure that lorry drivers can transport dangerous goods following all the stipulations about security and safety for their stops? The sense of the word "optimal" is that we give a global solution for the location of parking areas throughout the whole European Road Transport Network, keeping the number of additional areas as low as possible. To do so, we have modeled the problem using graph theory, since we are working with a road network. As nodes, we have considered the location of each already-existing parking area, each loading-and-unloading area, and each road bifurcation. Each road connecting two nodes is considered an edge in the graph, whose weight corresponds to the distance between the two nodes of the edge. By applying a new efficient algorithm, we have found the additional nodes for the network representing the new parking areas adapted for dangerous goods, subject to the constraint that the distance between two parking areas must be less than or equal to 400 km.
Keywords: trans-european transport network, dangerous goods, parking areas, graph-based modeling
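As an illustration of the graph formulation, the sketch below checks the 400 km constraint on a toy weighted road graph and greedily promotes uncovered nodes to parking areas; this greedy rule is our own stand-in, not the paper's algorithm:

```python
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([  # (node, node, distance in km), toy road network
    ("A", "B", 350), ("B", "C", 380), ("C", "D", 300),
    ("B", "E", 500), ("E", "F", 250), ("D", "F", 420),
])
parking = {"A"}          # already-existing adapted parking areas (assumed)
LIMIT = 400.0            # maximum distance between stops [km]

def uncovered(g, parking):
    """Nodes whose nearest adapted parking area is farther than LIMIT."""
    dist = {}
    for p in parking:
        for n, d in nx.single_source_dijkstra_path_length(g, p, weight="weight").items():
            dist[n] = min(dist.get(n, float("inf")), d)
    return [n for n in g if dist.get(n, float("inf")) > LIMIT]

while True:
    bad = uncovered(G, parking)
    if not bad:
        break
    parking.add(bad[0])  # greedy: promote the first uncovered node to a parking area
print("parking areas:", sorted(parking))
```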
Procedia PDF Downloads 281
1228 Protocol for Dynamic Load Distributed Low Latency Web-Based Augmented Reality and Virtual Reality
Authors: Rohit T. P., Sahil Athrij, Sasi Gopalan
Abstract:
Currently, the content entertainment industry is dominated by mobile devices. As the trends slowly shift towards Augmented/Virtual Reality applications, the computational demands on these devices are increasing exponentially, and we are already reaching the limits of hardware optimizations. This paper proposes a software solution to this problem. By leveraging the capabilities of cloud computing, we can offload the work from mobile devices to dedicated rendering servers that are far more powerful. But this introduces the problem of latency. This paper introduces a protocol that can achieve a high-performance, low-latency Augmented/Virtual Reality experience. There are two parts to the protocol. 1) In-flight compression: The main cause of latency in the system is the time required to transmit the camera frame from client to server. The round-trip time is directly proportional to the amount of data transmitted, and it can therefore be reduced by compressing the frames before sending. Using standard compression algorithms like JPEG results in only a minor size reduction. Since the images to be compressed are consecutive camera frames, there won't be many changes between two consecutive images, so inter-frame compression is preferred. Inter-frame compression can be implemented efficiently using WebGL, but the implementation of WebGL limits the precision of floating-point numbers to 16 bits on most devices. This can introduce noise to the image due to rounding errors, which will add up eventually. This can be solved using an improved inter-frame compression algorithm. The algorithm detects changes between frames and reuses unchanged pixels from the previous frame. This eliminates the need for floating-point subtraction, thereby cutting down on noise. The change detection is also improved drastically by taking the weighted average difference of pixels instead of the absolute difference. The kernel weights for this comparison can be fine-tuned to match the type of image to be compressed. 2) Dynamic load distribution: Conventional cloud computing architectures work by offloading as much work as possible to the servers, but this approach can take a toll on bandwidth and server costs. The most optimal solution is obtained when the device utilizes 100% of its resources and the rest is done by the server. The protocol balances the load between the server and the client by doing a fraction of the computing on the device, depending on the power of the device and network conditions. The protocol is responsible for dynamically partitioning the tasks. Special flags are used to communicate the workload fraction between the client and the server and are updated at a constant interval of time (or frames). The whole protocol is designed so that it can be client-agnostic. Flags are available to the client for resetting the frame, indicating latency, switching modes, etc. The server can react to client-side changes on the fly and adapt accordingly by switching to different pipelines. The server is designed to effectively spread the load and thereby scale horizontally. This is achieved by isolating client connections into different processes.
Keywords: 2D kernelling, augmented reality, cloud computing, dynamic load distribution, immersive experience, mobile computing, motion tracking, protocols, real-time systems, web-based augmented reality application
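A minimal sketch of the in-flight compression step: change detection via a neighbourhood-weighted frame difference, transmitting only the changed pixels and reusing the rest. The kernel weights and the threshold are tunable assumptions, as the text itself notes, and Python/NumPy stands in for the WebGL implementation:

```python
import numpy as np
from scipy.ndimage import convolve

KERNEL = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float)
KERNEL /= KERNEL.sum()          # kernel weights are tunable to the content type
THRESHOLD = 6.0                 # change threshold in grey levels (assumed)

def delta_frame(prev, curr):
    """Return a change mask and the changed pixel values to transmit."""
    diff = np.abs(curr.astype(float) - prev.astype(float))
    weighted = convolve(diff, KERNEL, mode="nearest")  # weighted-average difference
    mask = weighted > THRESHOLD
    return mask, curr[mask]

def reconstruct(prev, mask, values):
    out = prev.copy()            # unchanged pixels are reused from the previous frame
    out[mask] = values
    return out

rng = np.random.default_rng(0)
prev = rng.integers(0, 200, (120, 160)).astype(np.uint8)
curr = prev.copy()
curr[40:60, 50:90] += 40         # simulate a moving object
mask, vals = delta_frame(prev, curr)
rebuilt = reconstruct(prev, mask, vals)
print(f"transmitting {mask.mean():.1%} of pixels")
```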
Procedia PDF Downloads 76
1227 Calibration of Mini TEPC and Measurement of Lineal Energy in a Mixed Radiation Field Produced by Neutrons
Authors: I. C. Cho, W. H. Wen, H. Y. Tsai, T. C. Chao, C. J. Tung
Abstract:
A tissue-equivalent proportional counter (TEPC) is a useful instrument for measuring radiation single-event energy depositions in a subcellular target volume. The measured quantity is the microdosimetric lineal energy, which determines the relative biological effectiveness, RBE, for radiation therapy, or the radiation weighting factor, WR, for radiation protection. A TEPC is generally used in a mixed radiation field, where each component radiation has its own RBE or WR value. To reduce the pile-up effect during radiotherapy measurements, a miniature TEPC (mini TEPC) with a cavity size on the order of 1 mm may be required. In the present work, a homemade mini TEPC with a cylindrical cavity of 1 mm in both diameter and height was constructed to measure the lineal energy spectrum of a mixed radiation field with high- and low-LET radiations. Instead of using external radiation beams to penetrate the detector wall, mixed radiation fields were produced by the interactions of neutrons with the TEPC walls, which contained small plugs of different materials, i.e. Li, B, A150, Cd and N. In all measurements, the mini TEPC was placed at the beam port of the Tsing Hua Open-pool Reactor (THOR). Measurements were performed using a propane-based tissue-equivalent gas mixture, i.e. 55% C3H8, 39.6% CO2 and 5.4% N2 by partial pressures. A gas pressure of 422 torr was applied for the simulation of a 1 µm diameter biological site. The calibration of the mini TEPC was performed using two marking points in the lineal energy spectrum: the proton edge and the electron edge. The measured spectra revealed high-lineal-energy (> 100 keV/µm) peaks due to neutron-capture products, medium-lineal-energy (10–100 keV/µm) peaks from hydrogen-recoil protons, and low-lineal-energy (< 10 keV/µm) peaks from reactor photons. For the Li and B plugs, the high-lineal-energy peaks were quite prominent. The medium-lineal-energy peaks were in the decreasing order of Li, Cd, N, A150, and B. The low-lineal-energy peaks were small compared to the other peaks. This study demonstrated that internally produced mixed radiations from the interactions of neutrons with different plugs in the TEPC wall provide a useful approach for TEPC measurements of lineal energies.
Keywords: TEPC, lineal energy, microdosimetry, radiation quality
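The bookkeeping behind a lineal-energy measurement is y = ε / l̄, where l̄ = 4V/S is the mean chord length of the simulated site; for a cylinder with diameter equal to height, as used here, l̄ works out to 2d/3. A minimal sketch with an assumed single-event energy deposit:

```python
import math

d = 1.0                                            # simulated site diameter = height [um]
V = math.pi * (d / 2) ** 2 * d                     # cylinder volume
S = 2 * math.pi * (d / 2) ** 2 + math.pi * d * d   # cylinder surface area
l_bar = 4 * V / S                                  # mean chord length -> 2d/3 ~ 0.667 um

epsilon_keV = 45.0                                 # assumed single-event energy deposit [keV]
y = epsilon_keV / l_bar                            # lineal energy [keV/um]
print(f"l_bar = {l_bar:.3f} um, y = {y:.1f} keV/um")
```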
Procedia PDF Downloads 470
1226 Soft Computing Approach for Diagnosis of Lassa Fever
Authors: Roseline Oghogho Osaseri, Osaseri E. I.
Abstract:
Lassa fever is an epidemic hemorrhagic fever caused by the Lassa virus, an extremely virulent arenavirus. This highly fatal disorder kills 10% to 50% of its victims, but those who survive its early stages usually recover and acquire immunity to secondary attacks. One of the major challenges in giving proper treatment is the lack of fast and accurate diagnosis of the disease, owing to the multiplicity of symptoms associated with it, which can resemble other clinical conditions and make early diagnosis difficult. This paper proposes an Adaptive Neuro-Fuzzy Inference System (ANFIS) for the prediction of Lassa fever. In the design of the diagnostic system, four main attributes were considered as input parameters, with one output parameter for the system. The input parameters are Temperature on Admission (TA), White Blood Count (WBC), Proteinuria (P) and Abdominal Pain (AP). Sixty-one percent of the datasets were used in training the system, while the rest were used in testing. Experimental results from this study gave a reliable and accurate prediction of Lassa fever when compared with clinically confirmed cases. In this study, we have proposed a Lassa fever diagnostic system to aid surgeons and medical healthcare practitioners in healthcare facilities who do not have ready access to Polymerase Chain Reaction (PCR) diagnosis to predict possible Lassa fever infection.
Keywords: ANFIS, Lassa fever, medical diagnosis, soft computing
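The inference style underlying ANFIS is Takagi-Sugeno fuzzy reasoning; in a trained ANFIS, the membership and consequent parameters below would be learned from data, so the numbers here are purely illustrative placeholders over the four stated inputs:

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def tsk_predict(ta, wbc, p, ap):
    """Two-rule Sugeno inference over (TA, WBC, P, AP); parameters are assumed."""
    x = np.array([ta, wbc, p, ap])
    centres = np.array([[37.0, 6.0, 0.0, 0.0],    # rule 1: near-normal profile
                        [39.5, 3.0, 2.0, 1.0]])   # rule 2: Lassa-like profile
    sigmas = np.array([[0.8, 2.0, 0.8, 0.5],
                       [0.8, 2.0, 0.8, 0.5]])
    w = np.prod(gauss(x, centres, sigmas), axis=1)  # rule firing strengths
    consequents = np.array([0.05, 0.95])            # constant (zero-order) outputs
    return float(np.dot(w, consequents) / w.sum())  # weighted-average defuzzification

print(f"risk = {tsk_predict(39.8, 2.5, 2, 1):.2f}")
```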
Procedia PDF Downloads 271
1225 A Fast Parallel and Distributed Type-2 Fuzzy Algorithm Based on Cooperative Mobile Agents Model for High Performance Image Processing
Authors: Fatéma Zahra Benchara, Mohamed Youssfi, Omar Bouattane, Hassan Ouajji, Mohamed Ouadi Bensalah
Abstract:
The aim of this paper is to present a distributed implementation of the Type-2 Fuzzy algorithm in a parallel and distributed computing environment based on mobile agents. The proposed algorithm is implemented on an SPMD (Single Program Multiple Data) architecture based on cooperative mobile agents, following the AVPE (Agent Virtual Processing Element) model, in order to improve the processing resources needed for performing big-data image segmentation. In this work, we focus on the application of this algorithm to process a big-data MRI (Magnetic Resonance Imaging) image of size (n × m). The image is encapsulated in the mobile-agent team leader in order to be split into (m × n) pixels, one per AVPE. Each AVPE performs and exchanges the segmentation results and maintains asynchronous communication with its team leader until the convergence of the algorithm. Some interesting experimental results are obtained in terms of accuracy and efficiency analysis of the proposed implementation, thanks to the several interesting skills of mobile agents introduced in this distributed computational model.
Keywords: distributed type-2 fuzzy algorithm, image processing, mobile agents, parallel and distributed computing
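The SPMD decomposition can be sketched with worker processes standing in for the mobile-agent AVPEs: a "team leader" splits the image, each worker computes fuzzy membership degrees for its block, and the results are gathered. The two-cluster membership rule below is an illustrative type-1 simplification, not the paper's Type-2 algorithm:

```python
import numpy as np
from multiprocessing import Pool

CENTROIDS = np.array([60.0, 180.0])      # assumed grey-level cluster centres

def avpe_segment(block: np.ndarray) -> np.ndarray:
    """Fuzzy c-means-style membership of each pixel to the first cluster."""
    d = np.abs(block[..., None] - CENTROIDS) + 1e-9
    u = (1.0 / d**2) / (1.0 / d**2).sum(axis=-1, keepdims=True)
    return u[..., 0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, (256, 256)).astype(float)  # stand-in MRI slice
    blocks = np.array_split(image, 4, axis=0)               # leader splits the work
    with Pool(4) as pool:
        memberships = pool.map(avpe_segment, blocks)        # AVPEs work in parallel
    segmented = np.vstack(memberships) > 0.5                # leader assembles results
    print(segmented.shape, segmented.mean())
```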
Procedia PDF Downloads 429
1224 Integrated Teaching of Hardware Courses for the Undergraduates of Computer Science and Engineering to Attain Focused Outcomes
Authors: Namrata D. Hiremath, Mahalaxmi Bhille, P. G. Sunitha Hiremath
Abstract:
Computer systems play an integral role in all facets of the engineering profession. This calls for an understanding of the processor-level components of computer systems, their design and operation, and their impact on the overall performance of the systems. System users are always in need of faster, more powerful, yet cheaper computer systems. The focus of Computer Science and Engineering graduates is inclined towards software, yet to be an efficient programmer there is a need to understand the role of the hardware architecture underneath. It is essential for students of Computer Science and Engineering to know the basic building blocks of any computing device and how digital principles can be used to build them. Hence, two courses, Digital Electronics of 3 credits, with an associated lab of 1.5 credits, and Computer Organization of 5 credits, were introduced at the sophomore level. An activity was introduced with the objective of teaching hardware concepts to students of Computer Science and Engineering through a structured lab. The students were asked to design and implement a component of a computing device using the MultiSim simulation tool and to build the same using hardware components. The experience of the activity helped the students understand the real-time applications of SSI and MSI components. The impact of the activity was evaluated and the performance was measured. The paper explains the achievement of ABET outcomes a, c and k.
Keywords: digital, computer organization, ABET, structured enquiry, course activity
Procedia PDF Downloads 501
1223 Analysis of Plates with Varying Rigidities Using Finite Element Method
Authors: Karan Modi, Rajesh Kumar, Jyoti Katiyar, Shreya Thusoo
Abstract:
This paper presents the Finite Element Method (FEM) for analyzing the internal responses generated in thin rectangular plates with various edge conditions and rigidity conditions. A comparison has been made between the FEM (ANSYS software) results for displacement, stresses and moments generated with and without the consideration of a hole in the plate, and for different aspect ratios. Finally, the responses of plain and composite square plates are compared.
Keywords: ANSYS, finite element method, plates, static analysis
Procedia PDF Downloads 454
1222 Design of an Ensemble Learning Behavior Anomaly Detection Framework
Authors: Abdoulaye Diop, Nahid Emad, Thierry Winter, Mohamed Hilia
Abstract:
Data asset protection is a crucial issue in the cybersecurity field. Companies use logical access control tools to vault their information assets and protect them against external threats, but they lack solutions to counter insider threats. Nowadays, insider threats are the most significant concern of security analysts. They are mainly individuals with legitimate access to companies' information systems who use their rights with malicious intent. In several fields, behavior anomaly detection is the method used by cyber specialists to counter the threats of malicious user activities effectively. In this paper, we present a step toward the construction of a user and entity behavior analysis framework by proposing a behavior anomaly detection model. This model combines machine learning classification techniques and graph-based methods, relying on linear algebra and parallel computing techniques. We show the utility of an ensemble learning approach in this context. We present some detection method test results on a representative access control dataset. The use of some of the explored classifiers gives results up to 99% accuracy.
Keywords: cybersecurity, data protection, access control, insider threat, user behavior analysis, ensemble learning, high performance computing
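A minimal sketch of the ensemble-learning component: several classifiers combined by soft voting to flag anomalous behavior, with synthetic features standing in for a real access-control dataset (the paper's graph-based methods are not reproduced here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_normal = rng.normal(0.0, 1.0, size=(950, 8))   # ordinary access patterns (synthetic)
X_insider = rng.normal(2.5, 1.2, size=(50, 8))   # anomalous insider activity (synthetic)
X = np.vstack([X_normal, X_insider])
y = np.array([0] * 950 + [1] * 50)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Soft-voting ensemble over three base classifiers
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True))],
    voting="soft",
)
print("accuracy:", ensemble.fit(X_tr, y_tr).score(X_te, y_te))
```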
Procedia PDF Downloads 128