Search results for: secure online algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6610

160 Big Data Applications for Transportation Planning

Authors: Antonella Falanga, Armando Cartenì

Abstract:

"Big data" refers to extremely vast and complex sets of data, encompassing extraordinarily large and intricate datasets that require specific tools for meaningful analysis and processing. These datasets can stem from diverse origins like sensors, mobile devices, online transactions, social media platforms, and more. The utilization of big data is pivotal, offering the chance to leverage vast information for substantial advantages across diverse fields, thereby enhancing comprehension, decision-making, efficiency, and fostering innovation in various domains. Big data, distinguished by its remarkable attributes of enormous volume, high velocity, diverse variety, and significant value, represent a transformative force reshaping the industry worldwide. Their pervasive impact continues to unlock new possibilities, driving innovation and advancements in technology, decision-making processes, and societal progress in an increasingly data-centric world. The use of these technologies is becoming more widespread, facilitating and accelerating operations that were once much more complicated. In particular, big data impacts across multiple sectors such as business and commerce, healthcare and science, finance, education, geography, agriculture, media and entertainment and also mobility and logistics. Within the transportation sector, which is the focus of this study, big data applications encompass a wide variety, spanning across optimization in vehicle routing, real-time traffic management and monitoring, logistics efficiency, reduction of travel times and congestion, enhancement of the overall transportation systems, but also mitigation of pollutant emissions contributing to environmental sustainability. Meanwhile, in public administration and the development of smart cities, big data aids in improving public services, urban planning, and decision-making processes, leading to more efficient and sustainable urban environments. Access to vast data reservoirs enables deeper insights, revealing hidden patterns and facilitating more precise and timely decision-making. Additionally, advancements in cloud computing and artificial intelligence (AI) have further amplified the potential of big data, enabling more sophisticated and comprehensive analyses. Certainly, utilizing big data presents various advantages but also entails several challenges regarding data privacy and security, ensuring data quality, managing and storing large volumes of data effectively, integrating data from diverse sources, the need for specialized skills to interpret analysis results, ethical considerations in data use, and evaluating costs against benefits. Addressing these difficulties requires well-structured strategies and policies to balance the benefits of big data with privacy, security, and efficient data management concerns. Building upon these premises, the current research investigates the efficacy and influence of big data by conducting an overview of the primary and recent implementations of big data in transportation systems. Overall, this research allows us to conclude that big data better provide to enhance rational decision-making for mobility choices and is imperative for adeptly planning and allocating investments in transportation infrastructures and services.

Keywords: big data, public transport, sustainable mobility, transport demand, transportation planning

Procedia PDF Downloads 36
159 Learning Language through Story: Development of Storytelling Website Project for Amazighe Language Learning

Authors: Siham Boulaknadel

Abstract:

Every culture has its share of a rich history of storytelling in oral, visual, and textual form. The Amazigh language, like many languages, has its own, which has entertained and informed across centuries and cultures, and its instructional potential continues to serve teachers. According to many researchers, listening to stories draws attention to the sounds of language and helps children develop sensitivity to the way language works. Stories that include repetitive phrases, unique words, and enticing descriptions encourage students to join in actively to repeat, chant, sing, or even retell the story. This kind of practice is important to language learners' oral language development, which is believed to correlate closely with students' academic success. Today, with the advent of multimedia, digital storytelling can be a practical and powerful learning tool. It has the potential to transform traditional learning into a world of unlimited imaginary environments. This paper reports on a research project on the development of a multimedia storytelling Website using traditional Amazigh oral narratives, called "Tell me a story". It is a didactic tool created for the learning of good moral values in an interactive multimedia environment combining on-screen text, graphics, and audio in an enticing setting, enabling the positive values of stories to be projected. The Website developed in this study is based on various pedagogical approaches and learning theories deemed suitable for children aged 8 to 9 years. Its design and development were based on a well-researched conceptual framework enabling users to: (1) replay and share the stories in schools or at home, and (2) access the Website anytime and anywhere. Furthermore, the system stores the students' work and activities, allowing parents or teachers to monitor it and provide online feedback. The Website contains the following main feature modules. The Storytelling module incorporates a variety of media such as audio, text, and graphics in presenting the stories. It introduces children to various kinds of traditional Amazigh oral narratives. The focus of this module is to project the positive values and images of stories using digital storytelling techniques. Besides developing a good moral sense in children through projected positive images and moral values, it also allows children to practice their comprehension and listening skills. The Reading module is developed based on a multimedia material approach, which offers the potential for addressing the challenges of reading instruction. This module is able to stimulate children and develop reading practice indirectly thanks to the tutoring strategies of scaffolding, self-explanation, and hyperlinks it offers. The Word Enhancement module assists children in understanding the story and appreciating the good moral values more efficiently. Difficult words are linked to explanations, helping children understand the vocabulary better. In conclusion, we believe that interactive multimedia storytelling is an interesting and exciting tool for learning Amazigh. We plan to address some remaining learning issues, in particular the use of activities to test and evaluate children on their overall understanding of the stories and words presented in the learning modules.

Keywords: Amazigh language, e-learning, storytelling, language teaching

Procedia PDF Downloads 374
158 Development of a Bus Information Web System

Authors: Chiyoung Kim, Jaegeol Yim

Abstract:

Bus service is often either the main or the only public transportation available in cities. In metropolitan areas, both subways and buses are available, whereas in medium-sized cities buses are usually the only type of public transportation available. Bus Information Systems (BIS) provide users with the current locations of running buses, efficient routes from one place to another, points of interest around a given bus stop, the series of bus stops making up a given bus route, and so on. Thanks to BIS, people do not have to waste time at a bus stop waiting for a bus, because BIS provides exact information on bus arrival times at a given stop. Therefore, BIS does a lot to promote the use of buses, contributing to pollution reduction and saving natural resources. BIS implementation requires a large budget, as it calls for a lot of special equipment such as roadside equipment, automatic vehicle identification and location systems, trunked radio systems, and so on. Consequently, medium and small sized cities with a low budget cannot afford to install BIS, even though people in these cities need BIS service more desperately than people in metropolitan areas. It is possible to provide BIS service at virtually no cost under the assumption that everybody carries a smartphone and there is at least one person with a smartphone in a running bus who is willing to reveal his/her location details while sitting in the bus. This assumption is usually true in the real world: the smartphone penetration rate is greater than 100% in developed countries, and there is no reason for a bus driver to refuse to reveal his/her location details while driving. We have developed a mobile app that periodically reads sensor values, including GPS, and sends GPS data to the server when the bus stops or when the elapsed time since the last send attempt exceeds a threshold. This app detects the bus-stop state by inspecting the sensor values. The server that receives GPS data from this app has also been developed. Under the assumption that the current locations of all running buses collected by the mobile app are recorded in a database, we have also developed a web site that provides, through the Internet, all the kinds of information that most BISs provide to users. The development environment is: OS: Windows 7 64-bit; IDE: Eclipse Luna 4.4.1, Spring IDE 3.7.0; Database: MySQL 5.1.7; Web Server: Apache Tomcat 7.0; Programming Language: Java 1.7.0_79. Given a start and a destination bus stop, the system finds a shortest path from the start to the destination using Dijkstra's algorithm. Then, it finds a convenient route considering the number of transfers. For the user interface, we use Google Maps. Template classes used by the Controller, DAO, Service, and Utils classes include BUS, BusStop, BusListInfo, BusStopOrder, RouteResult, WalkingDist, Location, and so on. We are now integrating the mobile app system and the web app system.
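
To make the routing step concrete, the following is a minimal sketch of Dijkstra's algorithm over a bus-stop graph, kept compatible with Java 1.7 as in the stated development environment. The integer stop identifiers, edge weights in seconds, and class names are illustrative assumptions, not the authors' actual Controller/Service code.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Minimal Dijkstra sketch over a bus-stop graph (Java 1.7 compatible).
public class BusRouteFinder {

    // adj.get(u) holds {v, travelTimeSeconds} pairs for each edge u -> v
    static int[] dijkstra(List<List<int[]>> adj, int start) {
        int n = adj.size();
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[start] = 0;
        PriorityQueue<int[]> pq = new PriorityQueue<int[]>(16, new Comparator<int[]>() {
            public int compare(int[] a, int[] b) { return Integer.compare(a[1], b[1]); }
        });
        pq.add(new int[]{start, 0});
        while (!pq.isEmpty()) {
            int[] cur = pq.poll();
            int u = cur[0], d = cur[1];
            if (d > dist[u]) continue; // stale queue entry, already improved
            for (int[] e : adj.get(u)) {
                int v = e[0], nd = d + e[1];
                if (nd < dist[v]) {
                    dist[v] = nd;
                    pq.add(new int[]{v, nd});
                }
            }
        }
        return dist; // shortest travel time from start to every stop
    }

    public static void main(String[] args) {
        // toy network: stop 0 -> 1 (300 s), 1 -> 2 (240 s), 0 -> 2 (900 s)
        List<List<int[]>> adj = new ArrayList<List<int[]>>();
        for (int i = 0; i < 3; i++) adj.add(new ArrayList<int[]>());
        adj.get(0).add(new int[]{1, 300});
        adj.get(1).add(new int[]{2, 240});
        adj.get(0).add(new int[]{2, 900});
        System.out.println(Arrays.toString(dijkstra(adj, 0))); // [0, 300, 540]
    }
}
```

In the real system the edge weights would presumably come from the timetable and BusStopOrder data, with the transfer-count heuristic applied as a post-processing step on these shortest paths.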

Keywords: bus information system, GPS, mobile app, web site

Procedia PDF Downloads 194
157 Railway Ballast Volumes Automated Estimation Based on LiDAR Data

Authors: Bahar Salavati Vie Le Sage, Ismaïl Ben Hariz, Flavien Viguier, Sirine Noura Kahil, Audrey Jacquin, Maxime Convert

Abstract:

The ballast layer plays a key role in railroad maintenance and in the geometry of the track structure. Ballast also holds the track in place as trains roll over it. Track ballast is packed between the sleepers and on the sides of railway tracks. An imbalance in ballast volume on the tracks can lead to safety issues as well as rapid degradation of the overall quality of the railway segment. If there is a lack of ballast in the track bed during the summer, there is a risk that the rails will expand and buckle slightly due to the high temperatures. Furthermore, knowledge of the ballast quantities that will be excavated during renewal works is important for efficient ballast management. The volume of excavated ballast per meter of track can be calculated from the excavation depth, the excavation width, the volume of the track skeleton (sleepers and rail), and the sleeper spacing. Since 2012, SNCF has been collecting 3D point cloud data covering its entire railway network using 3D laser scanning technology (LiDAR). This vast amount of data represents a model of the entire railway infrastructure, allowing various simulations to be conducted for maintenance purposes. This paper presents an automated method for ballast volume estimation based on the processing of LiDAR data. The estimation of abnormal ballast volumes on the tracks is performed by analyzing the cross-section of the track. Further, since the amount of ballast required varies depending on the track configuration, knowledge of the ballast profile is required. Prior to track rehabilitation, excess ballast is often present in the ballast shoulders. Based on the 3D laser scans, a Digital Terrain Model (DTM) was generated, and automatic extraction of the ballast profiles from these data is carried out. The surplus in ballast is then estimated by comparing this empirically obtained ballast profile with a geometric model of the theoretical ballast profile thresholds dictated by maintenance standards. Ideally, this excess should be removed prior to renewal works and recycled to optimize the output of the ballast renewal machine. Based on these parameters, an application has been developed to allow the automatic measurement of ballast profiles. We evaluated the method on a 108-kilometer segment of railroad LiDAR scans, and the results show that the proposed algorithm detects a ballast surplus close to the total quantity of spoil ballast excavated.
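
To illustrate the comparison step, here is a minimal sketch of how a surplus area could be computed for one cross-section by integrating the positive difference between the measured profile and a theoretical template. The piecewise-linear template, its numbers, and the class names are illustrative assumptions, not SNCF's maintenance-standard geometry.

```java
// Sketch: surplus ballast area in one cross-section = trapezoidal integral of
// max(0, measured height - theoretical height) over the lateral offsets, then
// multiplied by the spacing between cross-sections to approximate a volume.
public class BallastSurplus {

    // hypothetical theoretical shoulder profile: flat top, then a side slope
    static double theoreticalHeight(double offsetM) {
        double shoulderEdge = 2.2, slope = 0.667; // placeholder standard values
        double x = Math.abs(offsetM);
        if (x <= shoulderEdge) return 0.5;
        return Math.max(0.0, 0.5 - slope * (x - shoulderEdge));
    }

    // trapezoidal integration of the positive (measured - theoretical) gap
    static double surplusArea(double[] offsets, double[] measured) {
        double area = 0.0;
        for (int i = 1; i < offsets.length; i++) {
            double d0 = Math.max(0, measured[i - 1] - theoreticalHeight(offsets[i - 1]));
            double d1 = Math.max(0, measured[i] - theoreticalHeight(offsets[i]));
            area += 0.5 * (d0 + d1) * (offsets[i] - offsets[i - 1]);
        }
        return area; // m^2 of excess ballast in this cross-section
    }

    public static void main(String[] args) {
        double[] x = {-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0}; // lateral offsets (m)
        double[] z = {0.10, 0.55, 0.55, 0.55, 0.55, 0.60, 0.15}; // measured heights (m)
        double sectionSpacing = 1.0; // one extracted profile per metre of track
        System.out.printf("surplus volume per metre: %.3f m^3%n",
                surplusArea(x, z) * sectionSpacing);
    }
}
```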

Keywords: ballast, railroad, LiDAR, point cloud, track ballast, 3D point

Procedia PDF Downloads 77
156 Stochastic Pi Calculus in Financial Markets: An Alternate Approach to High Frequency Trading

Authors: Jerome Joshi

Abstract:

The paper presents the modelling of financial markets using the Stochastic Pi Calculus model. The Stochastic Pi Calculus model is mainly used for biological applications; however, its features promote its use in financial markets, most prominently in high frequency trading. The trading system can be broadly classified into the exchange, market makers or intermediary traders, and fundamental traders. The exchange is where the action of the trade is executed, and the two types of traders act as market participants in the exchange. High frequency trading, with its complex networks and numerous market participants (intermediary and fundamental traders), poses a difficulty for modelling. It involves participants seeking the advantage of complex trading algorithms and high execution speeds to carry out large volumes of trades. To earn profits from each trade, the trader must be at the top of the order book quite frequently, executing or processing multiple trades simultaneously. This requires highly automated systems as well as the right sentiment to outperform other traders. However, always being at the top of the book is not best for the trader either, since it was the reason for the outbreak of the 'Hot-Potato Effect', which in turn demands a better and more efficient model. The characteristics of the model should be such that it is flexible and has diverse applications. Therefore, a model which has proven itself in a similar field characterized by such difficulty should be chosen. It should also be flexible in its simulation so that it can be further extended and adapted for future research, and be equipped with the right tools so that it can be used effectively in the field of finance. In this case, the Stochastic Pi Calculus model seems to be an ideal fit for financial applications, owing to its track record in the field of biology. It is an extension of the original Pi Calculus model and acts as a solution and an alternative to the previously flawed algorithm, provided the application of this model is suitably extended. This model focuses on solving the problem which led to the 'Flash Crash', namely the 'Hot-Potato Effect'. The model consists of small sub-systems which can be integrated to form a large system. It is designed in such a way that the behavior of 'noise traders' is considered as a random process, or noise, in the system. While modelling, to get a better understanding of the problem, a broader picture is taken into consideration involving the trader, the system, and the market participants. The paper goes on to explain trading in exchanges, types of traders, high frequency trading, the 'Flash Crash', the 'Hot-Potato Effect', evaluation of orders, and time delay in further detail. For the future, there is a need to focus on the calibration of the modules so that they interact perfectly with one another. This model, with its application extended, would provide a basis for further research in the fields of finance and computing.
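
Stochastic pi calculus tools used in biology (such as the SPiM simulator) typically execute channel interactions with Gillespie's stochastic simulation algorithm. The sketch below shows that execution mechanism on a toy market with two "channels", a noise trader hitting the bid or the ask at fixed rates; the rates and the inventory bookkeeping are illustrative assumptions, not the paper's calibrated model.

```java
import java.util.Random;

// Gillespie-style execution of two stochastic "channels": buy and sell order
// arrivals with exponential waiting times, as a stand-in for how stochastic
// pi calculus processes are simulated. Rates are hypothetical placeholders.
public class TinyMarketSSA {
    public static void main(String[] args) {
        Random rng = new Random(42);
        double t = 0.0, horizon = 10.0;        // simulated seconds
        int inventory = 0;                      // market maker's net position
        double buyRate = 3.0, sellRate = 3.0;   // events per second (hypothetical)
        while (t < horizon) {
            double total = buyRate + sellRate;
            t += -Math.log(rng.nextDouble()) / total; // exponential waiting time
            if (rng.nextDouble() < buyRate / total) inventory++; // buy order fires
            else inventory--;                                    // sell order fires
        }
        System.out.println("net inventory after " + horizon + " s: " + inventory);
    }
}
```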

Keywords: concurrent computing, high frequency trading, financial markets, stochastic pi calculus

Procedia PDF Downloads 55
155 HRCT of the Chest and the Role of Artificial Intelligence in the Evaluation of Patients with COVID-19

Authors: Parisa Mansour

Abstract:

Introduction: Early diagnosis of coronavirus disease (COVID-19) is extremely important in order to isolate and treat patients in time, thus preventing the spread of the disease, improving prognosis, and reducing mortality. High-resolution computed tomography (HRCT) chest imaging and artificial intelligence (AI)-based analysis of chest HRCT images can play a central role in the management of patients with COVID-19. Objective: To investigate the different chest HRCT findings at different stages of COVID-19 pneumonia and to evaluate the potential role of artificial intelligence in the quantitative assessment of lung parenchymal involvement in COVID-19 pneumonia. Materials and Methods: This retrospective observational study was conducted between May 1, 2020 and August 13, 2020. The study included 2169 patients with COVID-19 who underwent chest HRCT. HRCT images were assessed for the presence and distribution of lesions such as ground glass opacity (GGO) and consolidation, and for special patterns such as septal thickening, the inverted halo sign, etc. Chest HRCT findings were compared across the different stages of the disease (early: <5 days, intermediate: 6-10 days, and late stage: >10 days). A CT severity score (CTSS) was calculated based on the extent of lung involvement on HRCT, which was then correlated with clinical disease severity. An AI-based "CT Pneumonia Analysis" algorithm was used to quantify the extent of pulmonary involvement by calculating the percentage of opacity (PO) and the percentage of high opacity (PHO). Depending on the type of variables, statistical tests such as chi-square, analysis of variance (ANOVA), and post hoc tests were applied where appropriate. Results: Radiological findings were observed on chest HRCT in 1438 patients. A typical pattern of COVID-19 pneumonia, i.e., bilateral peripheral GGO with or without consolidation, was observed in 846 patients. A total of 294 asymptomatic patients were radiologically positive. Chest HRCT in the early stages of the disease mostly showed GGO. The late stage was indicated by features such as reticulation, septal thickening, and the presence of fibrous bands. Approximately 91.3% of cases with a CTSS ≤ 7 were asymptomatic or clinically mild, while 81.2% of cases with a score ≥ 15 were clinically severe. Mean PO and PHO (30.1 ± 28.0 and 8.4 ± 10.4, respectively) were significantly higher in the clinically severe categories. Conclusion: Because COVID-19 pneumonia progresses rapidly, radiologists and physicians should become familiar with the typical chest CT findings in order to treat patients early, ultimately improving prognosis and reducing mortality. Artificial intelligence can be a valuable tool in managing patients with COVID-19.
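
The abstract does not spell out its CTSS bands, so the following sketch uses one widely published 25-point scheme (five lung lobes, each scored 0-5 by percentage involvement) purely to illustrate how such a score is computed from segmentation output; the per-lobe percentages are made-up inputs.

```java
// Illustrative CT severity score: score each of the five lobes 0-5 by its
// percentage of involvement, then sum the lobe scores (maximum 25).
public class CtSeverityScore {

    static int lobeScore(double percentInvolved) {
        if (percentInvolved <= 0)  return 0;
        if (percentInvolved < 5)   return 1;
        if (percentInvolved <= 25) return 2;
        if (percentInvolved <= 49) return 3;
        if (percentInvolved <= 75) return 4;
        return 5;
    }

    public static void main(String[] args) {
        // hypothetical per-lobe involvement (%) from an AI segmentation output
        double[] lobes = {10, 30, 5, 60, 80};
        int ctss = 0;
        for (double p : lobes) ctss += lobeScore(p);
        System.out.println("CTSS = " + ctss + " / 25"); // CTSS = 16 / 25
        // thresholds reported in the abstract: <= 7 mostly mild, >= 15 mostly severe
    }
}
```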

Keywords: chest, HRCT, COVID-19, artificial intelligence, chest HRCT

Procedia PDF Downloads 38
154 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes

Authors: Angela U. Makolo

Abstract:

Protein-coding and Non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that Non-coding regions are important in disease progression and clinical diagnosis. Existing bioinformatics tools have been targeted towards Protein-coding regions alone; therefore, there are challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both Protein-coding and Non-coding regions. Alignment-free techniques can overcome this limitation. Therefore, this study was designed to develop an efficient sequence alignment-free model for identifying both Protein-coding and Non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in the 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the Protein-coding and Non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was measured in terms of F1 score, accuracy, sensitivity, and specificity, and its average generalization performance was determined using a benchmark of multi-species organisms. The generalization error for identifying Protein-coding and Non-coding regions decreased from 0.514 to 0.508 and then to 0.378 over three iterations. The cost (the difference between the predicted and the actual outcome) also decreased, from 1.446 to 0.842 and then to 0.718 for the first, second, and third iterations, respectively. The iterations terminated at the 390th epoch, with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an area under the ROC curve of 0.97, indicating an improved predictive ability. The PNRI identified both Protein-coding and Non-coding regions with an F1 score of 0.970, an accuracy of 0.969, a sensitivity of 0.966, and a specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying Protein-coding and Non-coding regions in transcriptomes. The developed Protein-coding and Non-coding region identifier model efficiently identified both kinds of transcriptomic regions and could be used in genome annotation and in the analysis of transcriptomes.
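
For readers unfamiliar with the core classifier, here is a minimal sketch of logistic regression with a sigmoid activation trained by iterative gradient steps on six features, as named above. The synthetic data, learning rate, and epoch count are placeholders, not the paper's settings, and the dynamic thresholding step is reduced to a naive 0.5 cutoff.

```java
import java.util.Random;

// Minimal logistic-regression sketch: sigmoid activation, batch gradient descent.
public class PnriSketch {

    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    // one gradient step over the whole dataset
    static void step(double[][] x, int[] y, double[] w, double lr) {
        int n = x.length, d = w.length;
        double[] grad = new double[d];
        for (int i = 0; i < n; i++) {
            double z = 0;
            for (int j = 0; j < d; j++) z += w[j] * x[i][j];
            double err = sigmoid(z) - y[i];            // gradient of the log-loss
            for (int j = 0; j < d; j++) grad[j] += err * x[i][j] / n;
        }
        for (int j = 0; j < d; j++) w[j] -= lr * grad[j];
    }

    public static void main(String[] args) {
        Random rng = new Random(1);
        int n = 200, d = 6;
        double[][] x = new double[n][d];
        int[] y = new int[n];
        for (int i = 0; i < n; i++) {                  // synthetic 6-feature data
            for (int j = 0; j < d; j++) x[i][j] = rng.nextGaussian();
            y[i] = (x[i][0] + 0.5 * x[i][1] > 0) ? 1 : 0;
        }
        double[] w = new double[d];
        for (int epoch = 0; epoch < 390; epoch++) step(x, y, w, 0.5);
        int correct = 0;
        for (int i = 0; i < n; i++) {
            double z = 0;
            for (int j = 0; j < d; j++) z += w[j] * x[i][j];
            // a dynamic-thresholding scheme would tune this cutoff; 0.5 is the default
            if ((sigmoid(z) >= 0.5 ? 1 : 0) == y[i]) correct++;
        }
        System.out.println("training accuracy: " + (double) correct / n);
    }
}
```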

Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation

Procedia PDF Downloads 39
153 The Validation of RadCalc for Clinical Use: An Independent Monitor Unit Verification Software

Authors: Junior Akunzi

Abstract:

For patient treatment planning quality assurance in 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT or RapidArc), the independent monitor unit verification calculation (MUVC) is an indispensable part of the process. For 3D-CRT treatment planning, the MUVC can be performed manually by applying the standard ESTRO formalism. However, due to the complex shapes and the number of beams in advanced treatment planning techniques such as RapidArc, manual independent MUVC is inadequate. Therefore, commercially available software such as RadCalc can be used to perform the MUVC for complex treatment plans. Indeed, RadCalc (version 6.3, LifeLine Inc.) uses a simplified Clarkson algorithm to compute the dose contribution of individual RapidArc fields to the isocenter. The purpose of this project is the validation of RadCalc in 3D-CRT and RapidArc for treatment planning dosimetry quality assurance at the Centre Antoine Lacassagne (Nice, France). Firstly, the interfaces between RadCalc and our treatment planning systems (TPS), Isogray (version 4.2) and Eclipse (version 13.6), were checked for data transfer accuracy. Secondly, we created test plans in both Isogray and Eclipse featuring open fields, wedged fields, and irregular MLC fields. These test plans were transferred from the TPSs, according to the DICOM RT radiotherapy protocol, to RadCalc and to the linac via Mosaiq (version 2.5). Measurements were performed in a water phantom using a PTW cylindrical Semiflex ionisation chamber (0.3 cm³, type 31010) and compared with the TPS and RadCalc calculations. Finally, 30 3D-CRT plans and 40 RapidArc plans created on patients' CT scans were recalculated using the CT scan of a solid PMMA water-equivalent phantom for 3D-CRT and the CT scan of the Octavius II phantom (PTW) for RapidArc. Next, we measured the doses delivered to these phantoms for each plan with a 0.3 cm³ PTW 31010 cylindrical Semiflex ionisation chamber (3D-CRT) and a 0.015 cm³ PTW PinPoint ionisation chamber (RapidArc). For our test plans, good agreement was found between calculation (RadCalc and TPSs) and measurement (mean: 1.3%; standard deviation: ± 0.8%). Regarding the patient plans, the measured doses were compared to the calculations in RadCalc and in our TPSs, and the RadCalc calculations were also compared to the Isogray and Eclipse ones. Agreement better than (2.8%; ± 1.2%) was found between RadCalc and the TPSs. As for the comparison between calculation and measurement, the agreement for all of our plans was better than (2.3%; ± 1.1%). The independent MU verification calculation software RadCalc has thus been validated for clinical use for both the 3D-CRT and RapidArc techniques. The perspectives of this project include the validation of RadCalc for the Tomotherapy machine installed at the Centre Antoine Lacassagne.
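
For orientation only, here is a very simplified sketch of the sector-integration idea behind a Clarkson-style calculation: the field around the calculation point is split into angular sectors, each sector contributes a scatter term depending on its radius, and the contributions are averaged. The radial scatter function below is a made-up placeholder, not RadCalc's commissioned beam data.

```java
// Toy Clarkson-style sector integration for an irregular field.
public class ClarksonSketch {

    // placeholder radial scatter function S(r); a real system interpolates
    // measured scatter-air-ratio tables here
    static double scatterTerm(double radiusCm) {
        return 0.30 * (1.0 - Math.exp(-radiusCm / 8.0));
    }

    // average the scatter contribution over n sectors, given each sector's
    // field-edge radius as seen from the calculation point
    static double sectorIntegratedScatter(double[] sectorRadiiCm) {
        double sum = 0.0;
        for (double r : sectorRadiiCm) sum += scatterTerm(r);
        return sum / sectorRadiiCm.length;
    }

    public static void main(String[] args) {
        // 36 sectors of 10 degrees for a roughly 5 cm radius irregular field
        double[] radii = new double[36];
        for (int i = 0; i < 36; i++)
            radii[i] = 5.0 + Math.sin(Math.toRadians(10 * i)); // irregular edge
        System.out.printf("mean scatter contribution: %.4f%n",
                sectorIntegratedScatter(radii));
    }
}
```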

Keywords: 3D conformal radiotherapy, intensity modulated radiotherapy, monitor unit calculation, dosimetry quality assurance

Procedia PDF Downloads 190
152 Measures of Reliability and Transportation Quality on an Urban Rail Transit Network in Case of Links’ Capacities Loss

Authors: Jie Liu, Jinqu Cheng, Qiyuan Peng, Yong Yin

Abstract:

Urban rail transit (URT) plays a significant role in dealing with traffic congestion and environmental problems in cities. However, equipment failure and the obstruction of links often lead to a loss of link capacity in daily operation, which seriously affects the reliability and transport service quality of the URT network. In order to measure the influence of links' capacity loss on the reliability and transport service quality of a URT network, passengers are divided into three categories. Passengers in category 1 are little affected by the loss of links' capacities: their travel is reliable, since their travel quality is not significantly reduced. Passengers in category 2 are heavily affected: their travel is not reliable, since their travel quality is seriously reduced, although they can still travel on the URT. Passengers in category 3 cannot travel on the URT at all, because the passenger flow on their travel paths exceeds the available capacity; their travel is not reliable. Thus, the proportion of passengers in category 1, whose travel is reliable, is defined as the reliability indicator of the URT network. The transport service quality of a URT network is related to passengers' travel time, the number of transfers, and whether seats are available. The generalized travel cost is a comprehensive reflection of travel time, transfer count, and travel comfort; therefore, passengers' average generalized travel cost is used as the transport service quality indicator of the URT network. The impact of links' capacity loss on transport service quality is measured by comparing passengers' average generalized travel cost with and without the capacity loss. The proportion of passengers affected by a link and the betweenness of links are used to determine the important links in the URT network. A stochastic user equilibrium distribution model based on an improved logit model is used to determine the passenger categories and to calculate passengers' generalized travel cost in case of links' capacity loss; it is solved with the method of successive weighted averages. The reliability and transport service quality indicators of the URT network are then calculated from the solution. Taking the Wuhan Metro as a case, the reliability and transport service quality of the network are measured with the indicators and method proposed in this paper. The results show that the proportion of passengers affected by a link effectively identifies important links, which have a great influence on the reliability and transport service quality of the URT network; the important links are mostly connected to transfer stations, and their passenger flow is high. With an increasing number of failed links and a growing proportion of capacity loss, the reliability of the network keeps decreasing, the proportion of passengers in category 3 keeps increasing, and the proportion of passengers in category 2 increases at first and then decreases. When the number of failed links and the proportion of capacity loss increase beyond a certain level, the decline in transport service quality slows down.
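
To show the solution method named above in miniature, the sketch below applies the method of successive (weighted) averages to a toy two-route logit assignment: at each iteration an auxiliary flow is computed from current costs and blended in with a shrinking step size. The network, BPR-type cost function, and dispersion parameter are illustrative assumptions, not the paper's Wuhan Metro model.

```java
// Method-of-successive-averages sketch for a stochastic user equilibrium
// on a toy network with two parallel routes.
public class MsaSketch {

    static double cost(double flow, double freeFlow, double capacity) {
        return freeFlow * (1.0 + 0.15 * Math.pow(flow / capacity, 4)); // BPR form
    }

    public static void main(String[] args) {
        double demand = 1000.0, theta = 0.1;      // trips/hour, logit dispersion
        double f1 = demand / 2, f2 = demand / 2;  // initial 50/50 split
        for (int k = 1; k <= 100; k++) {
            double c1 = cost(f1, 10, 600), c2 = cost(f2, 12, 800);
            double p1 = 1.0 / (1.0 + Math.exp(-theta * (c2 - c1))); // logit share
            double y1 = demand * p1, y2 = demand - y1;              // auxiliary flows
            double step = 1.0 / k;                                  // MSA step size
            f1 += step * (y1 - f1);
            f2 += step * (y2 - f2);
        }
        System.out.printf("equilibrium flows: %.1f / %.1f%n", f1, f2);
    }
}
```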

Keywords: urban rail transit network, reliability, transport service quality, links’ capacities loss, important links

Procedia PDF Downloads 108
151 A Two-Step, Temperature-Staged, Direct Coal Liquefaction Process

Authors: Reyna Singh, David Lokhat, Milan Carsky

Abstract:

The world crude oil demand is projected to rise to 108.5 million bbl/d by the year 2035. With reserves estimated at 869 billion tonnes worldwide, coal is an abundant resource. This work was aimed at producing a high-value hydrocarbon liquid product from the Direct Coal Liquefaction (DCL) process at comparatively mild operating conditions. A temperature-staged hydrogenation approach was investigated. In a two-reactor lab-scale pilot plant facility, the objectives were to maximise thermal dissolution of the coal in the presence of a hydrogen donor solvent in the first stage, and subsequently to promote hydrogen saturation and hydrodesulphurization (HDS) performance in the second. The feed slurry consisted of high-grade, pulverized bituminous coal on a moisture-free basis with a size fraction of <100 μm, and Tetralin mixed in 2:1 and 3:1 solvent/coal ratios. Magnetite (Fe3O4) at 0.25 wt% of the dry coal feed was added for the catalysed runs. For both stages, hydrogen gas was used to maintain a system pressure of 100 barg. In the first stage, temperatures of 250℃ and 300℃ and reaction times of 30 and 60 minutes were investigated in an agitated batch reactor. The first-stage liquid product was pumped into the second-stage vertical reactor, which was designed to contact the hydrogen-rich gas stream and the incoming liquid flow counter-currently in a fixed catalyst bed. Two commercial hydrotreating catalysts, Cobalt-Molybdenum (CoMo) and Nickel-Molybdenum (NiMo), were compared in terms of their conversion, selectivity, and HDS performance at temperatures 50℃ higher than the respective first-stage tests. The catalysts were activated at 300°C with a hydrogen flow rate of approximately 10 ml/min prior to testing. A gas-liquid separator at the outlet of the reactor ensured that the gas was exhausted to the online VARIOplus gas analyser. The liquid was collected and sampled for analysis using Gas Chromatography-Mass Spectrometry (GC-MS). Internal-standard quantification of the sulphur content, the BTX (benzene, toluene, and xylene) and alkene quality, and the alkane and polycyclic aromatic hydrocarbon (PAH) compounds in the liquid products was guided by ASTM standards of practice for hydrocarbon analysis. In the first stage, using a 2:1 solvent/coal ratio, increased coal-to-liquid conversion was favoured by the lower operating temperature of 250℃, a 60-minute reaction time, and a system catalysed by magnetite. Tetralin functioned effectively as the hydrogen donor solvent. A 3:1 ratio favoured increased concentrations of the long-chain alkanes undecane and dodecane, the unsaturated alkenes octene and nonene, and PAH compounds such as indene. The second-stage product distribution showed an increase in the BTX quality of the liquid product and in branched-chain alkanes, and a reduction in the sulphur concentration. In terms of HDS performance and selectivity towards long and branched-chain alkanes, NiMo performed better than CoMo; CoMo was more selective towards cyclohexane. Over 16 days on stream each, NiMo showed higher activity than CoMo. The potential of this process to help cover the demand for low-sulphur crude diesel and solvents through the production of a high-value hydrocarbon liquid is thus demonstrated.

Keywords: catalyst, coal, liquefaction, temperature-staged

Procedia PDF Downloads 624
150 Impact of Six-Minute Walk or Rest Break during Extended GamePlay on Executive Function in First Person Shooter Esport Players

Authors: Joanne DiFrancisco-Donoghue, Seth E. Jenny, Peter C. Douris, Sophia Ahmad, Kyle Yuen, Hillary Gan, Kenney Abraham, Amber Sousa

Abstract:

Background: Guidelines for the maintenance of health of esports players and the cognitive changes that accompany competitive gaming are understudied. Executive functioning is an important cognitive skill for an esports player. The relationship between executive functions and physical exercise has been well established; however, the effects of prolonged sitting, regardless of physical activity level, have not been. Prolonged uninterrupted sitting reduces cerebral blood flow, and reduced cerebral blood flow is associated with lower cognitive function and fatigue. This decrease in cerebral blood flow has been shown to be offset by frequent, short walking breaks. These breaks can be as little as 2 minutes at the 30-minute mark, or 6 minutes following 60 minutes of prolonged sitting; the rationale is the increase in blood flow and its positive effects on metabolic responses. The primary purpose of this study was to evaluate executive function changes following mid-session 6-minute bouts of walking or complete rest, compared to no break, during prolonged gameplay in competitive first-person shooter (FPS) esports players. Methods: This study was conducted virtually due to the COVID-19 pandemic and was approved by the New York Institute of Technology IRB. Twelve competitive FPS participants signed written consent to participate in this randomized pilot study. All participants held a gold ranking or higher. Participants were asked to play for 2 hours on three separate days. The outcome measures used to test executive function were the Color Stroop and the Tower of London tests, which were administered online each day prior to gaming and at the completion of gaming. All participants completed the tests beforehand for familiarization. One day of testing consisted of a 6-minute walk break after 60-75 minutes of play; the Rating of Perceived Exertion (RPE) was recorded, and the participant then played for another 60-75 minutes and completed the tests again. On another day, the participants repeated the same protocol, replacing the 6-minute walk with lying down and resting for 6 minutes. On the last day, the participant played continuously for 2 hours with no break and repeated the outcome tests pre and post play. A Latin square was used to randomize the treatment order. Results: Using descriptive statistics, the largest change in mean reaction time on correct congruent trials from pre to post play was seen following the 6-minute walk (662.0 (609.6) ms pre to 602.8 (539.2) ms post), followed by the 6-minute rest condition (681.7 (618.1) ms pre to 666.3 (607.9) ms post), with minimal change in the continuous condition (594.0 (534.1) ms pre to 589.6 (552.9) ms post). The mean solution time was fastest in the resting condition (7774.6 (6302.8) ms), followed by the walk condition (7929.4 (5992.8) ms), with the continuous condition being slowest (9337.3 (7228.7) ms). Conclusion: Short walking breaks improve blood flow and reduce the risk of venous thromboembolism during prolonged sitting. This pilot study demonstrated that a low-intensity 6-minute walk break, following 60 minutes of play, may also improve executive function in FPS gamers.

Keywords: executive function, FPS, physical activity, prolonged sitting

Procedia PDF Downloads 201
149 Role of Artificial Intelligence in Nano Proteomics

Authors: Mehrnaz Mostafavi

Abstract:

Recent advances in single-molecule protein identification (ID) and quantification techniques are poised to revolutionize proteomics, enabling researchers to delve into single-cell proteomics and identify the low-abundance proteins crucial for biomedical and clinical research. This paper introduces a different approach to single-molecule protein ID and quantification using tri-color amino acid tags and a plasmonic nanopore device. A comprehensive simulator incorporating various physical phenomena was designed to predict and model the device's behavior under diverse experimental conditions, providing insights into its feasibility and limitations. The study employs a whole-proteome single-molecule identification algorithm based on convolutional neural networks, achieving high accuracies (>90%), even reaching 95-97% in challenging conditions. To address potential challenges in clinical samples, where post-translational modifications may affect labeling efficiency, the paper evaluates protein identification accuracy under partial labeling conditions. Solid-state nanopores, capable of processing tens of individual proteins per second, are explored as a platform for this method. Unlike techniques relying solely on ion-current measurements, this approach enables parallel readout using high-density nanopore arrays and multi-pixel single-photon sensors. Convolutional neural networks contribute to the method's versatility and robustness, simplifying calibration procedures and potentially allowing protein ID based on partial reads. The study also discusses the efficacy of the approach under realistic experimental conditions, resolving functionally similar proteins. The theoretical analysis, the protein labeler program, the finite-difference time-domain calculation of plasmonic fields, and the simulation of nanopore-based optical sensing are detailed in the methods section. The study anticipates further exploration of the temporal distributions of protein translocation dwell-times and their impact on convolutional neural network identification accuracy. Overall, the research presents a promising avenue for advancing single-molecule protein identification and quantification, with broad applications in proteomics research. The contributions made in methodology, accuracy, robustness, and technological exploration collectively position this work at the forefront of transformative developments in the field.

Keywords: nano proteomics, nanopore-based optical sensing, deep learning, artificial intelligence

Procedia PDF Downloads 45
148 Event Data Representation Based on Time Stamp for Pedestrian Detection

Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita

Abstract:

In association with the wave of electric vehicles (EVs), low-energy-consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as a high temporal resolution, achieving up to 1 Mframe/s, and a high dynamic range (120 dB). However, the property that contributes most to low energy consumption is its sparsity; to be more specific, this sensor only captures pixels whose intensity changes. In other words, there is no signal in areas without any intensity change. That is to say, this sensor is more energy efficient than conventional sensors such as RGB cameras because redundant data can be removed. On the other hand, the data are difficult to handle because the format is completely different from an RGB image: the acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1), and a timestamp; it does not include intensity such as RGB values. Therefore, existing algorithms cannot be used straightforwardly, and a new processing algorithm has to be designed to cope with DVS data. To overcome the difficulties caused by the data format differences, most prior work builds frame data and feeds it to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition purposes. However, even when the data can be fed in this way, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity in place of an RGB pixel value, polarity information is clearly not rich enough. In this context, we propose to use the timestamp information as the data representation fed to deep learning. Concretely, we first build frame data divided by a certain time period, then assign each pixel an intensity value derived from the timestamps in that frame; for example, a high value is given to a recent signal. We expected this data representation to capture the features of moving objects in particular, because timestamps encode movement direction and speed. Using this proposed method, we built our own dataset with a DVS fixed on a parked car, in order to develop an application for a surveillance system that can detect persons around the car. We think the DVS is an ideal sensor for surveillance purposes because it can run for a long time with low energy consumption in mostly static scenes. For comparison purposes, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way as ours but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and of our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
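
The encoding described above can be stated compactly in code: within one accumulation window, each pixel takes the normalized recency of its latest event, so recent activity appears bright and old activity dark. The Event record, frame size, and window length below are illustrative assumptions, not the authors' exact implementation.

```java
// Timestamp-based frame encoding for an event stream (x, y, polarity, timestamp).
public class TimestampFrame {

    static class Event {
        int x, y, polarity;   // polarity in {-1, +1}; unused by this encoding
        double tMicros;       // timestamp in microseconds
        Event(int x, int y, int p, double t) { this.x = x; this.y = y; polarity = p; tMicros = t; }
    }

    // build one frame from events in [tStart, tStart + window)
    static float[][] encode(Event[] events, int w, int h, double tStart, double window) {
        float[][] frame = new float[h][w];
        for (Event e : events) {
            if (e.tMicros < tStart || e.tMicros >= tStart + window) continue;
            // linear recency in [0, 1]; the latest event at a pixel wins
            float v = (float) ((e.tMicros - tStart) / window);
            if (v > frame[e.y][e.x]) frame[e.y][e.x] = v;
        }
        return frame;
    }

    public static void main(String[] args) {
        Event[] evs = {
            new Event(1, 1, +1, 100), new Event(1, 1, -1, 900), new Event(2, 0, +1, 400)
        };
        float[][] f = encode(evs, 4, 3, 0, 1000);
        System.out.printf("pixel(1,1)=%.2f pixel(2,0)=%.2f%n", f[1][1], f[0][2]); // 0.90 0.40
    }
}
```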

Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption

Procedia PDF Downloads 71
147 Simulation of the Flow in a Circular Vertical Spillway Using a Numerical Model

Authors: Mohammad Zamani, Ramin Mansouri

Abstract:

Spillways are among the most important hydraulic structures of dams, ensuring the stability of the dam and downstream areas at times of flood. A circular vertical spillway with various inlet forms is very effective when there is not enough space for other spillway types. Hydraulic flow in a vertical circular spillway falls into three regimes: free, orifice, and under pressure (submerged). In this research, the hydraulic flow characteristics of a circular vertical spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for the velocity-pressure coupling. The most commonly used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power-law scheme was used for the discretization of the momentum, k, ε, and ω equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. In this study, three computational grids (coarse, intermediate, and fine) were used to discretize the simulation domain. In order to simulate the flow, the k-ε (Standard, RNG, Realizable) and k-ω (Standard and SST) models were used. Also, in order to find the best wall treatment, two types, the standard wall function and the non-equilibrium wall function, were investigated. The laminar model did not produce satisfactory flow depths and velocities along the Morning-Glory spillway. The results of the most commonly used two-equation turbulence models (k-ε and k-ω) were nearly identical, and the standard wall function produced better results than the non-equilibrium wall function. Thus, for the remaining simulations, the standard k-ε model with the standard wall function was preferred. The comparison criterion in this study is the trajectory profile of the water jet. The results show that the fine computational grid, a velocity-inlet condition at the flow inlet boundary, and a pressure-outlet condition at the boundaries in contact with air provide the best possible results, with the standard wall function as the wall treatment and the Standard k-ε turbulence model giving the results most consistent with the experiments. As the jet approaches the end of the basin, the difference between the numerical and experimental results increases. Overall, the mesh with 10602 nodes, the Standard k-ε turbulence model, and the standard wall function provide the best results for modeling the flow in a vertical circular spillway. There was good agreement between numerical and experimental results for the upper and lower nappe profiles. Regarding the water level over the crest and the discharge, at low water levels the numerical results are in good agreement with the experimental ones, but as the water level increases, the difference between the numerical and experimental discharge grows. Regarding the flow coefficient, as the P/R ratio decreases, the difference between the numerical and experimental results increases.
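
As a small aside on the discretization named above, the power-law scheme (Patankar, 1980) damps the diffusive conductance across a face by A(|P|) = max(0, (1 - 0.1|P|)^5), where P is the cell Peclet number, and adds upwinded convection. The sketch below evaluates that standard coefficient formula for a 1-D convection-diffusion face; the flux and conductance values are illustrative.

```java
// Power-law scheme coefficient for a 1-D convection-diffusion face:
// aW = D * max(0, (1 - 0.1|P|)^5) + max(F, 0), with Peclet number P = F / D.
public class PowerLawScheme {

    static double faceCoefficient(double flux, double conductance) {
        double peclet = flux / conductance;
        double a = Math.pow(1.0 - 0.1 * Math.abs(peclet), 5);
        return conductance * Math.max(0.0, a) + Math.max(flux, 0.0);
    }

    public static void main(String[] args) {
        double d = 1.0; // diffusive conductance Gamma * A / dx (illustrative)
        for (double f : new double[]{0.0, 1.0, 5.0, 20.0}) // convective flux rho*u*A
            System.out.printf("F=%5.1f  aW=%.4f%n", f, faceCoefficient(f, d));
        // at high Peclet number the scheme reduces to pure upwinding (aW -> F)
    }
}
```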

Keywords: circular vertical, spillway, numerical model, boundary conditions

Procedia PDF Downloads 57
146 Fueling Efficient Reporting And Decision-Making In Public Health With Large Data Automation In Remote Areas, Neno Malawi

Authors: Wiseman Emmanuel Nkhomah, Chiyembekezo Kachimanga, Julia Huggins, Fabien Munyaneza

Abstract:

Background: Partners In Health – Malawi introduced one of Operational Researches called Primary Health Care (PHC) Surveys in 2020, which seeks to assess progress of delivery of care in the district. The study consists of 5 long surveys, namely; Facility assessment, General Patient, Provider, Sick Child, Antenatal Care (ANC), primarily conducted in 4 health facilities in Neno district. These facilities include Neno district hospital, Dambe health centre, Chifunga and Matope. Usually, these annual surveys are conducted from January, and the target is to present final report by June. Once data is collected and analyzed, there are a series of reviews that take place before reaching final report. In the first place, the manual process took over 9 months to present final report. Initial findings reported about 76.9% of the data that added up when cross-checked with paper-based sources. Purpose: The aim of this approach is to run away from manually pulling the data, do fresh analysis, and reporting often associated not only with delays in reporting inconsistencies but also with poor quality of data if not done carefully. This automation approach was meant to utilize features of new technologies to create visualizations, reports, and dashboards in Power BI that are directly fished from the data source – CommCare hence only require a single click of a ‘refresh’ button to have the updated information populated in visualizations, reports, and dashboards at once. Methodology: We transformed paper-based questionnaires into electronic using CommCare mobile application. We further connected CommCare Mobile App directly to Power BI using Application Program Interface (API) connection as data pipeline. This provided chance to create visualizations, reports, and dashboards in Power BI. Contrary to the process of manually collecting data in paper-based questionnaires, entering them in ordinary spreadsheets, and conducting analysis every time when preparing for reporting, the team utilized CommCare and Microsoft Power BI technologies. We utilized validations and logics in CommCare to capture data with less errors. We utilized Power BI features to host the reports online by publishing them as cloud-computing process. We switched from sharing ordinary report files to sharing the link to potential recipients hence giving them freedom to dig deep into extra findings within Power BI dashboards and also freedom to export to any formats of their choice. Results: This data automation approach reduced research timelines from the initial 9 months’ duration to 5. It also improved the quality of the data findings from the original 76.9% to 98.9%. This brought confidence to draw conclusions from the findings that help in decision-making and gave opportunities for further researches. Conclusion: These results suggest that automating the research data process has the potential of reducing overall amount of time spent and improving the quality of the data. On this basis, the concept of data automation should be taken into serious consideration when conducting operational research for efficiency and decision-making.
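
As a rough sketch of the pipeline idea, Power BI normally consumes a CommCare data feed directly, but the same feed can be fetched programmatically for checks or archiving. The domain, feed path, and credentials below are hypothetical placeholders, not documented endpoints of this project.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Fetch a (hypothetical) CommCare OData feed with basic authentication,
// the same feed a Power BI dataset would refresh from. Requires Java 11+.
public class CommCareFeedPull {
    public static void main(String[] args) throws Exception {
        // placeholder URL and credentials; substitute a real project domain,
        // feed identifier, and API key
        String url = "https://www.commcarehq.org/a/example-domain/api/odata/forms/v1/feed-id";
        String auth = Base64.getEncoder()
                .encodeToString("user@example.org:api-key".getBytes());
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("HTTP " + response.statusCode());
        String body = response.body();
        System.out.println(body.substring(0, Math.min(200, body.length())));
    }
}
```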

Keywords: reporting, decision-making, Power BI, CommCare, data automation, visualizations, dashboards

Procedia PDF Downloads 93
145 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution

Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino

Abstract:

This paper presents a design methodology in which stakeholders are assisted in exploring a so-called negotiation space, aiming at the maximization of both the group's social welfare and each single stakeholder's perceived utility. The outcome is fewer design iterations needed for design convergence, together with higher solution effectiveness. During the early stage of a space project, not only the knowledge about the system but also the decision outcomes are often unknown. The scenario is exacerbated by the fact that decisions taken at this stage carry delayed costs. Hence, it is necessary to have a clear definition of the problem under analysis, especially in the initial definition, and this can be obtained through a robust generation and exploration of design alternatives. This process must take into account that design usually involves various individuals who take decisions affecting one another; effective coordination among these decision-makers is therefore critical, and finding a mutually agreed solution reduces the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology which aims to speed up the maturation of the mission concept. This speed-up is obtained through a guided exploration of the negotiation space, which autonomously explores and optimizes trade opportunities among stakeholders via artificial intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method infused with game theory and multi-attribute utility theory. In particular, game theory is able to model the negotiation process so as to reach equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process in efficiently and rapidly searching for the Pareto equilibria among stakeholders. Finally, the concept of utility constitutes the mechanism that bridges the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders' needs while guaranteeing the effectiveness of the selected mission concept, thanks to its robustness under changing requirements. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
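
One building block of the Pareto search named above can be illustrated compactly: extracting the non-dominated set from candidate designs scored by per-stakeholder utilities (higher is better). The evolutionary search and game-theoretic negotiation would sit on top of a filter like this; the utility values below are illustrative, not the CubeSat study's data.

```java
import java.util.ArrayList;
import java.util.List;

// Pareto-front filter over candidate designs with per-stakeholder utilities.
public class ParetoFilter {

    // a dominates b if it is no worse on every utility and strictly better on one
    static boolean dominates(double[] a, double[] b) {
        boolean strictlyBetter = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] < b[i]) return false;
            if (a[i] > b[i]) strictlyBetter = true;
        }
        return strictlyBetter;
    }

    static List<double[]> paretoFront(List<double[]> candidates) {
        List<double[]> front = new ArrayList<>();
        for (double[] c : candidates) {
            boolean dominated = false;
            for (double[] other : candidates)
                if (other != c && dominates(other, c)) { dominated = true; break; }
            if (!dominated) front.add(c);
        }
        return front;
    }

    public static void main(String[] args) {
        List<double[]> designs = new ArrayList<>();
        designs.add(new double[]{0.9, 0.2}); // favors stakeholder 1 only
        designs.add(new double[]{0.5, 0.6}); // balanced
        designs.add(new double[]{0.4, 0.5}); // dominated by the balanced design
        designs.add(new double[]{0.1, 0.9}); // favors stakeholder 2 only
        System.out.println(paretoFront(designs).size() + " non-dominated designs"); // 3
    }
}
```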

Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization

Procedia PDF Downloads 109
144 Design and Construction of a Home-Based, Patient-Led, Therapeutic, Post-Stroke Recovery System Using Iterative Learning Control

Authors: Marco Frieslaar, Bing Chu, Eric Rogers

Abstract:

Stroke is a devastating illness that is the second biggest cause of death in the world (after heart disease). Where it does not kill, it leaves survivors with debilitating sensory and physical impairments that not only seriously harm their quality of life but also cause a high incidence of severe depression. It is widely accepted that early intervention is essential for recovery, but current rehabilitation techniques largely favor hospital-based therapies, which have restricted access, require expensive and specialist equipment, and tend to sidestep the emotional challenges. In addition, there is insufficient funding available to provide the long-term assistance required. As a consequence, recovery rates are poor. The relatively unexplored solution is to develop therapies that can be harnessed in the home and are built from technologies that already exist in everyday life. This would empower individuals to take control of their own improvement and provide choice in terms of when and where they feel best able to undertake their own healing. This research seeks to identify how effective post-stroke rehabilitation therapy can be applied to upper limb mobility within the physical context of a home rather than a hospital. This is being achieved through the design and construction of an automation scheme, based on iterative learning control and the Riener muscle model, that is able to adapt to the user, react to their level of fatigue, and provide tangible physical recovery. It utilizes a smartphone and a laptop to construct an iterative learning control (ILC) system that monitors upper arm movement in three dimensions as a series of exercises is undertaken. The equipment generates functional electrical stimulation to assist in muscle activation and thus improve directional accuracy. In addition, it monitors speed, accuracy, areas of motion weakness, and similar parameters to create a performance index that can be compared over time and extrapolated to establish an independent and objective assessment scheme, plus an approximate estimate of the predicted final outcome. To further extend its assessment capabilities, nerve conduction velocity readings are taken by the software between the shoulder and hand muscles. These are used to measure the speed of neuron signal transfer along the arm, so that over time an online indication of regeneration levels can be obtained. This will show whether or not sufficient training intensity is being achieved even before perceivable movement dexterity is observed. The device also provides the option to connect to other users via the internet, so that the patient can avoid feelings of isolation and can undertake movement exercises together with others in a similar position. This should create benefits not only in encouraging participation in rehabilitation but also in the potential for an emotional support network. It is intended that this approach will extend the availability of stroke recovery options, enable ease of access at low cost, reduce susceptibility to depression, and, through these endeavors, enhance the overall recovery success rate.
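
The core of iterative learning control is a trial-to-trial correction: over repeated attempts at the same exercise, the stimulation input at each time step is adjusted by the tracking error observed on the previous trial, u_{k+1}(t) = u_k(t) + L * e_k(t+1). The sketch below applies this law to a toy first-order plant; the plant parameters and learning gain are illustrative assumptions, not the Riener muscle model itself.

```java
// Iterative learning control on a toy discrete plant x(t+1) = a*x(t) + b*u(t):
// after each trial, the whole input trajectory is corrected by the trial's error.
public class IlcSketch {

    static double[] simulate(double[] u) {
        double a = 0.8, b = 0.5, x = 0.0;       // illustrative "arm" dynamics
        double[] y = new double[u.length + 1];
        for (int t = 0; t < u.length; t++) { x = a * x + b * u[t]; y[t + 1] = x; }
        return y;
    }

    public static void main(String[] args) {
        int steps = 20;
        double[] ref = new double[steps + 1];
        for (int t = 0; t <= steps; t++) ref[t] = 1.0; // reach and hold a target
        double[] u = new double[steps];                // first trial: zero input
        double gain = 0.9;                             // learning gain L
        for (int trial = 0; trial < 30; trial++) {
            double[] y = simulate(u);
            double maxErr = 0.0;
            for (int t = 0; t < steps; t++) {
                double e = ref[t + 1] - y[t + 1];
                u[t] += gain * e;                      // ILC update law
                maxErr = Math.max(maxErr, Math.abs(e));
            }
            if (trial % 10 == 0) System.out.printf("trial %d, max error %.4f%n", trial, maxErr);
        }
    }
}
```

Run repeatedly, the maximum tracking error contracts from trial to trial, which is the property the rehabilitation scheme exploits as the user's performance is re-measured on every exercise repetition.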

Keywords: home-based therapy, iterative learning control, Riener muscle model, smartphone, stroke rehabilitation

Procedia PDF Downloads 243
143 Shared Versus Pooled Automated Vehicles: Exploring Behavioral Intentions Towards On-Demand Automated Vehicles

Authors: Samira Hamiditehrani

Abstract:

Automated vehicles (AVs) are emerging technologies that could potentially offer a wide range of opportunities and challenges for the transportation sector. The advent of AV technology has also resulted in new business models in shared mobility services, where many ride-hailing and car-sharing companies are developing on-demand AVs, including shared automated vehicles (SAVs) and pooled automated vehicles (Pooled AVs). SAVs and Pooled AVs could provide alternative shared mobility services that encourage sustainable transport systems, mitigate traffic congestion, and reduce automobile dependency. However, the success of on-demand AVs in addressing major transportation policy issues depends on whether and how the public adopts them as regular travel modes. To identify the conditions under which individuals may adopt on-demand AVs, previous studies have applied human behavior and technology acceptance theories, among which the Theory of Planned Behavior (TPB) has been validated and is among the most tested in on-demand AV research. In this respect, this study has three objectives: (a) to propose and validate a theoretical model of behavioral intention to use SAVs and Pooled AVs by extending the original TPB model; (b) to identify the characteristics of early adopters of SAVs, who prefer a shorter, private ride, versus prospective users of Pooled AVs, who choose more affordable but longer, shared trips; and (c) to investigate Canadians' intention to adopt on-demand AVs for regular trips. Toward this end, this study uses data from an online survey (n = 3,622) of workers and adult students (18 to 75 years old) conducted in October and November 2021 in six major Canadian metropolitan areas: Toronto, Vancouver, Ottawa, Montreal, Calgary, and Hamilton. To accomplish the goals of this study, a base bivariate ordered probit model, in which both SAV and Pooled AV adoption are estimated as ordered dependent variables, is estimated alongside a full structural equation modeling (SEM) system. The findings of this study indicate that affective motivations, such as attitude towards AV technology, perceived privacy, and subjective norms, matter more than sociodemographic and travel behavior characteristics in the adoption of on-demand AVs. The results for the second objective also provide evidence that, although a few affective motivations, such as subjective norms and having ample knowledge, are common between early adopters of SAVs and Pooled AVs, many of the examined motivations differ between SAV and Pooled AV adoption factors. In other words, the motivations influencing the intention to use on-demand AVs differ between the service types. Likewise, the sociodemographic characteristics of early adopters differ significantly depending on the type of on-demand AV. In general, the findings paint a complex picture with respect to the application of constructs from common technology adoption models to the study of on-demand AVs. The findings for the final objective suggest that policymakers, planners, the vehicle and technology industries, and the public at large should moderate their expectations that on-demand AVs will suddenly transform the entire transportation sector. Instead, this study suggests that SAVs and Pooled AVs (when they enter the Canadian market) are likely to be adopted as supplementary mobility tools rather than as substitutes for current travel modes.

Keywords: automated vehicles, Canadian perception, theory of planned behavior, on-demand AVs

Procedia PDF Downloads 42
142 Web and Smart Phone-based Platform Combining Artificial Intelligence and Satellite Remote Sensing Data to Geoenable Villages for Crop Health Monitoring

Authors: Siddhartha Khare, Nitish Kr Boro, Omm Animesh Mishra

Abstract:

Recent food price hikes may signal the end of an era of predictable global grain crop plenty, owing to climate change, population expansion, and dietary changes. Food consumption is expected to treble in 20 years, requiring enormous production expenditures. Rainfall and seasonal cycles have shifted over the past decade, changing climatic and atmospheric conditions, and India's tropical agriculture relies on evapotranspiration and monsoons. In places with limited resources, global environmental change affects agricultural productivity and farmers' capacity to adjust to changing moisture patterns. Motivated by these difficulties, satellite remote sensing may be combined with near-surface imaging data (smartphones, UAVs, and PhenoCams) to enable phenological monitoring and fast evaluations of the field-level consequences of extreme weather events on smallholder agricultural output. To accomplish this, all communities' agricultural field boundaries and crop types must be digitally mapped. With the improvement of satellite remote sensing technologies, a geo-referenced database may be created for rural Indian agricultural fields, and, using AI, digital agricultural solutions can be designed for individual farms. The main objective is to geo-enable each farm, along with its seasonal crop information, by combining artificial intelligence (AI) with satellite and near-surface data, and then to support long-term crop monitoring through in-depth field analysis and scanning of fields with satellite-derived vegetation indices. We developed an AI-based algorithm to understand time-lapse-based vegetation growth using PhenoCam or smartphone images, and an Android application through which users can collect images of their fields. These images are sent to our local server, where further AI-based processing is performed. We are creating digital boundaries for individual farms and connecting these farms with our smartphone application to collect information about farmers and their crops in each season. We extract satellite-based information for each farm from Google Earth Engine APIs and merge it with the tested-crop data from our app according to farm locations, creating a database that provides crop quality data by location.
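
As an illustration of the satellite side of such a pipeline, the following is a minimal sketch, assuming the Earth Engine Python API and an authenticated account, of pulling a per-field NDVI time series from Sentinel-2. The field polygon, date range, and cloud threshold are hypothetical; the authors' production pipeline is not shown.

```python
# Sketch: per-field NDVI time series from Sentinel-2 via Earth Engine.
import ee

ee.Initialize()  # assumes ee.Authenticate() has been run once

# Hypothetical farm boundary polygon (lon, lat pairs).
field = ee.Geometry.Polygon([[
    [77.580, 28.610], [77.585, 28.610],
    [77.585, 28.615], [77.580, 28.615],
]])

def add_ndvi(img):
    # NDVI = (NIR - Red) / (NIR + Red); bands B8/B4 for Sentinel-2.
    return img.addBands(img.normalizedDifference(["B8", "B4"]).rename("NDVI"))

ndvi = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
        .filterBounds(field)
        .filterDate("2022-06-01", "2022-10-31")
        .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
        .map(add_ndvi))

def mean_over_field(img):
    stat = img.reduceRegion(reducer=ee.Reducer.mean(),
                            geometry=field, scale=10)
    return ee.Feature(None, {"date": img.date().format("YYYY-MM-dd"),
                             "ndvi": stat.get("NDVI")})

series = ee.FeatureCollection(ndvi.map(mean_over_field))
print(series.getInfo())
```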

Keywords: artificial intelligence, satellite remote sensing, crop monitoring, android and web application

Procedia PDF Downloads 70
141 The Underground Ecosystem of Credit Card Frauds

Authors: Abhinav Singh

Abstract:

Point-of-sale (POS) malware has been stealing the limelight this year. It has been the elemental factor in some of the biggest breaches uncovered in the past couple of years. Some of them include: • Target: the retail giant reported close to 40 million credit card records stolen • Home Depot: a home products retailer reported a breach of close to 50 million credit records • Kmart: a US retailer recently announced a breach of 800 thousand credit card details. In 2014 alone, there have been reports of over 15 major breaches of payment systems around the globe. Memory scraping malware infecting point-of-sale devices has been the lethal weapon used in these attacks. This malware is capable of reading payment information from the payment device's memory before it is encrypted, and then sends the stolen details to its parent server. It can record all the critical payment information, such as the card number, security code, and owner name, delivered in raw format. This talk will cover what happens after these details have been sent to the malware authors. The entire ecosystem of credit card fraud can be broadly classified into three steps: • purchase of raw details and dumps • converting them to plastic cash/cards • shop! shop! shop! The focus of this talk will be on the above-mentioned points and how they form an organized network of cyber-crime. The first step involves the buying and selling of stolen details. The key points to emphasize are: • how this raw information is sold in the underground market • the buyer and seller anatomy • building your shopping cart and preferences • the importance of reputation and vouches • customer support and replacements/refunds. These are some of the key points that will be discussed. But the story doesn't end here. At this point the buyer only has the raw card information; how will it be converted into plastic cash? Here enters the second part of this underground economy, wherein the raw details are converted into actual cards. There are well-organized services running underground that can help convert these details into plastic cards, and we will discuss this technique in detail. At last, the final step involves shopping with the stolen cards. The cards generated from the stolen details can easily be used to swipe-and-pay for goods at different retail shops. Usually these purchases are of expensive items with good resale value. Apart from using the cards at stores, there are underground services that let you deliver online orders to their dummy addresses; once a package is received, it is forwarded to the original buyer. These services charge based on the value of the item being delivered. The overall underground ecosystem of credit card fraud works in a bulletproof way and involves people working in close groups and making heavy profits. This is a brief summary of what I plan to present at the talk. I have done extensive research and have collected a good deal of material to present as samples, including: • a list of underground forums • credit card dumps • IRC chats among these groups • personal chats with big card sellers • an inside view of these forum owners. The talk will conclude by throwing light on how these breaches are tracked during investigation: how credit card breaches are tracked down and what steps financial institutions can take to build an incident response around them.

Keywords: POS malware, credit card frauds, enterprise security, underground ecosystem

Procedia PDF Downloads 411
140 Invisible to Invaluable - How Social Media is Helping Tackle Stigma and Discrimination Against Informal Waste Pickers of Bengaluru

Authors: Varinder Kaur Gambhir, Neema Gupta, Sonal Tickoo Chaudhuri

Abstract:

Bengaluru, a rapidly growing metropolis in India with a population of 12.5 million, generates 5,757 metric tonnes of solid waste per day. Despite their invaluable contribution to waste management, society and the economy, waste pickers face significant stigma, suspicion and contempt and are left with a sense of shame about their work. In this context, BBC Media Action was funded by the H&M Foundation to develop a 3-year, multi-phase social media campaign to shift perceptions of waste picking and informal waste pickers amongst the Bengaluru population. Research has been used to inform project strategy and adaptation at all stages. Formative research to inform campaign strategy used mixed methods – 14 focus group discussions followed by 406 online surveys – to explore people’s knowledge of, and attitudes towards, waste pickers and to identify potential barriers and motivators to changing perceptions. The use of qualitative techniques like metaphor maps (using a bank of pictures rather than direct questions to understand mindsets) helped establish the invisibility of informal waste pickers, and the quantitative research enabled audience segmentation based on attitudes towards informal waste pickers. To pretest the campaign idea, eight I-GDs (individual interactions followed by group discussions) were conducted to allow interviewees to first freely express their feelings individually before discussing them in a group. Robert Plutchik’s ‘wheel of emotions’ was used to understand the audience’s emotional response to the content. A robust monitoring and evaluation exercise is being conducted (the baseline and first phase of monitoring are already complete) using a rotating longitudinal panel of 1,800 social media users (exposed and unexposed to the campaign), recruited face to face and representative of the social media universe of Bengaluru city. In addition, qualitative in-depth interviews are being conducted after each phase to better understand change drivers. The research methodology and ethical protocols for impact evaluation have been independently reviewed by an Institutional Review Board. Formative research revealed that while waste on the streets is visible and of concern to the public, informal waste pickers are virtually ‘invisible’ to most people in Bengaluru. Pretesting research revealed that the creative outputs evoked emotions like acceptance and gratitude towards waste pickers, suggesting that the content had the potential to encourage attitudinal change. After the first phase of the campaign, social media analytics show that #Invaluables content reached at least 2.6 million unique people (21% of the Bengaluru population) through Facebook and Instagram. Further, impact monitoring results show significant improvements in spontaneous awareness of different segments of informal waste pickers (such as sorters at scrap shops or dry waste collection centres – from 10% at baseline to 16% amongst those exposed, with no change amongst the unexposed), in recognition that informal waste pickers help the environment (from 71% at baseline to 77% among those exposed, with no change among the unexposed), and in discussion about informal waste pickers among those exposed (60%) as against those not exposed (49%). Using the insights from this research, the planned social media intervention is designed to increase the visibility of, and appreciation for, the work of waste pickers in Bengaluru, supporting a more inclusive society.

Keywords: awareness, discussion, discrimination, informal waste pickers, invisibility, social media campaign, waste management

Procedia PDF Downloads 72
139 Estimation of Soil Nutrient Content Using Google Earth and Pleiades Satellite Imagery for Small Farms

Authors: Lucas Barbosa Da Silva, Jun Okamoto Jr.

Abstract:

Precision agriculture has long benefited from aerial imagery of crop fields. This important tool has allowed the identification of patterns in crop fields, generating useful information for production management. Reflectance intensity data in different ranges of the electromagnetic spectrum may indicate the presence or absence of nutrients in the soil of an area, and relations between the different light bands may yield even more detailed information. Knowledge of the nutrient content in the soil, or in the crop during its growth, is a valuable asset to the farmer who seeks to optimize yield. However, small farmers in Brazil often lack the resources to access this kind of information, and even when they do, it is not presented in a comprehensive and/or objective way. So the challenges of implementing this technology range from the sampling of the imagery using aerial platforms, building a mosaic with the images to cover the entire crop field, and extracting the reflectance information and analyzing its relationship with the parameters of interest, to displaying the results in a manner that lets the farmer take the necessary decisions more objectively. In this work, an analysis of soil nutrient content based on image processing of satellite imagery is proposed, comparing its outputs with a commercial laboratory’s chemical analysis. Also, sources of satellite imagery are compared to assess the feasibility of using Google Earth data in this application, and the impacts of doing so, versus imagery from satellites like Landsat-8 and Pleiades. Furthermore, an algorithm for building mosaics is implemented using Google Earth imagery, and finally, the possibility of using unmanned aerial vehicles is analyzed. From the data obtained, some soil parameters are estimated, namely the content of potassium, phosphorus, boron, and manganese, among others. The suitability of Google Earth imagery for this application is verified within a reasonable margin when compared to Pleiades satellite imagery and to the current commercial model. It is also verified that the mosaic construction method has little or no influence on the estimation results. Variability maps are created over the covered area, and the impacts of image resolution and sample time frame are discussed, allowing easy assessment of the results. The final results show that easier and cheaper remote sensing and analysis methods are possible and feasible alternatives for the small farmer, with little access to technological and/or financial resources, to make more accurate decisions about soil nutrient management.
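
To illustrate the kind of band-based analysis described, the sketch below computes NDVI and a simple NIR/red ratio from a multiband GeoTIFF and regresses them against lab-measured potassium. The file names, band order, and CSV layout are assumptions for the example, not the authors' actual data.

```python
# Sketch: spectral features per sampling point vs. lab-measured potassium.
import numpy as np
import pandas as pd
import rasterio
from sklearn.linear_model import LinearRegression

with rasterio.open("field_mosaic.tif") as src:          # hypothetical file
    red = src.read(3).astype(float)                     # assumed order: B,G,R,NIR
    nir = src.read(4).astype(float)
    ndvi = (nir - red) / (nir + red + 1e-9)
    # Sampling points, assumed in the raster's CRS; columns: x, y, K_ppm.
    samples = pd.read_csv("lab_samples.csv")
    rows_cols = [src.index(x, y) for x, y in zip(samples.x, samples.y)]

features = np.array([[ndvi[r, c], nir[r, c] / (red[r, c] + 1e-9)]
                     for r, c in rows_cols])
model = LinearRegression().fit(features, samples.K_ppm)
print("R^2 vs. lab potassium:", model.score(features, samples.K_ppm))
```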

Keywords: remote sensing, precision agriculture, mosaic, soil, nutrient content, satellite imagery, aerial imagery

Procedia PDF Downloads 149
138 Introducing Transport Engineering through Blended Learning Initiatives

Authors: Kasun P. Wijayaratna, Lauren Gardner, Taha Hossein Rashidi

Abstract:

Undergraduate students entering university over the last two to three years tend to have been born in the mid-1990s. This generation of students has been exposed to the internet, and to a desire for and dependency on technology, since childhood. Brains develop based on environmental influences, and technology has wired this generation of students to be attuned to sophisticated, complex visual imagery, indicating that visual forms of learning may be more effective than traditional lecture or discussion formats. Furthermore, post-millennials’ perspectives on careers are not focused solely on stability and income but are strongly driven by interest, entrepreneurship and innovation. Accordingly, it is important for educators to acknowledge the generational shift and tailor the delivery of learning material to meet the expectations of students and the needs of industry. In the context of transport engineering, effectively teaching undergraduate students the basic principles of transport planning, traffic engineering and highway design is fundamental to the progression of the profession from both practice and research perspectives. Recent developments in technology have transformed the discipline as practitioners and researchers move away from the traditional “pen and paper” approach to methods involving computer programs and simulation. Further, enhanced accessibility of technology for students has changed the way they understand and learn material delivered at tertiary education institutions. As a consequence, blended learning approaches, which aim to integrate face-to-face teaching with flexible self-paced learning resources, have become prevalent as a way to provide scalable education that satisfies the expectations of students. This research study involved the development of a series of blended learning initiatives implemented within an introductory transport planning and geometric design course, CVEN2401: Sustainable Transport and Highway Engineering, taught at the University of New South Wales, Australia. CVEN2401 was modified by conducting interactive polling exercises during lectures, including weekly online quizzes, offering a series of supplementary learning videos, and implementing a realistic design project that students needed to complete using modelling software that is widely used in practice. These activities and resources aimed to improve the learning environment for a large class of more than 450 students and to ensure that practical, industry-valued skills were introduced. The case study compared the 2016 and 2017 student cohorts based on their performance across assessment tasks as well as their reception of the material, revealed through student feedback surveys. The initiatives were well received, with a number of students commenting on the ability to complete self-paced learning and an appreciation of the exposure to a realistic design project. From an educator’s perspective, blending the course made it feasible to interact and engage with students. Personalised learning opportunities were made available whilst delivering a considerable volume of complex content essential for all undergraduate Civil and Environmental Engineering students. Overall, this case study highlights the value of blended learning initiatives, especially in the context of large university courses.

Keywords: blended learning, highway design, teaching, transport planning

Procedia PDF Downloads 125
137 The Dynamic Nexus of Public Health and Journalism in Informed Societies

Authors: Ali Raza

Abstract:

The dynamic landscape of communication has brought about significant advancements that intersect with the realms of public health and journalism. This abstract explores the evolving synergy between these fields, highlighting how their intersection has contributed to informed societies and improved public health outcomes. In the digital age, communication plays a pivotal role in shaping public perception, policy formulation, and collective action. Public health, concerned with safeguarding and improving community well-being, relies on effective communication to disseminate information, encourage healthy behaviors, and mitigate health risks. Simultaneously, journalism, with its commitment to accurate and timely reporting, serves as the conduit through which health information reaches the masses. Advancements in communication technologies have revolutionized the ways in which public health information is both generated and shared. The advent of social media platforms, mobile applications, and online forums has democratized the dissemination of health-related news and insights. This democratization, however, brings challenges, such as the rapid spread of misinformation and the need for nuanced strategies to engage diverse audiences. Effective collaboration between public health professionals and journalists is essential in countering these challenges, ensuring that accurate information prevails. The synergy between public health and journalism is most evident during public health crises. The COVID-19 pandemic underscored the critical role of journalism in providing accurate and up-to-date information to the public. However, it also highlighted the importance of responsible reporting, as sensationalism and misinformation could exacerbate a crisis. Collaborative efforts between public health experts and journalists led to the amplification of preventive measures, the debunking of myths, and the promotion of evidence-based interventions. Moreover, the accessibility of information in the digital era necessitates a strategic approach to health communication. Behavioral economics and data analytics offer insights into human decision-making and allow tailored health messages to resonate more effectively with specific audiences. This approach, when integrated into journalism, enables the crafting of narratives that not only inform but also influence positive health behaviors. Ethical considerations emerge prominently in this alliance. The responsibility to balance the public's right to know with the potential consequences of sensational reporting underscores the significance of ethical journalism. Health journalists must meticulously source information from reputable experts and institutions to maintain credibility, thus fortifying the bridge between public health and the public. As both public health and journalism undergo transformative shifts, fostering collaboration between these domains becomes essential. Training programs that familiarize journalists with public health concepts and practices can enhance their capacity to report accurately and comprehensively on health issues. Likewise, public health professionals can gain insights into effective communication strategies from seasoned journalists, ensuring that health information reaches a wider audience. In conclusion, the convergence of public health and journalism, facilitated by communication advancements, is a cornerstone of informed societies. Effective communication strategies, driven by collaboration, ensure the accurate dissemination of health information and foster positive behavior change. As the world navigates complex health challenges, the continued evolution of this synergy holds the promise of healthier communities and a more engaged and educated public.

Keywords: public awareness, journalism ethics, health promotion, media influence, health literacy

Procedia PDF Downloads 45
136 Impact of Interdisciplinary Therapy Allied to Online Health Education on Cardiometabolic Parameters and Inflammation Factor Rating in Obese Adolescents

Authors: Yasmin A. M. Ferreira, Ana C. K. Pelissari, Sofia De C. F. Vicente, Raquel M. Da S. Campos, Deborah C. L. Masquio, Lian Tock, Lila M. Oyama, Flavia C. Corgosinho, Valter T. Boldarine, Ana R. Dâmaso

Abstract:

The prevalence of overweight and obesity is growing around the world and is currently considered a global epidemic. Food and nutrition are essential requirements for promoting health and protecting against non-communicable chronic diseases, such as obesity and cardiovascular disease. Specific dietary components may modulate inflammation and oxidative stress in obese individuals. Few studies have investigated the dietary Inflammation Factor Rating (IFR) in obese adolescents. The IFR was developed to characterize an individual's diet on an anti- to pro-inflammatory score, and it helps in investigating the effects of an inflammatory diet on the metabolic profile under several individual conditions. Objectives: The present study aims to investigate the effects of a multidisciplinary weight loss therapy on the inflammation factor rating and cardiometabolic risk in obese adolescents. Methods: A total of 26 volunteers (14-19 y.o.) were recruited and submitted to a 20-week interdisciplinary therapy allied to the health education website Ciclo do Emagrecimento®, including clinical, nutritional and psychological counseling and exercise training. Body weight was monitored weekly by self-report and photo. The adolescents answered a test to evaluate their knowledge of the topics covered in the videos. A 24-h dietary record was applied at baseline and after 20 weeks to assess food intake and to calculate the IFR. A negative IFR suggests that the diet may have inflammatory effects, while a positive IFR indicates an anti-inflammatory effect. Statistical analysis was performed using STATISTICA version 12.5 for Windows, with the significance level set at α ≤ 5%. Data normality was verified with the Kolmogorov-Smirnov test, and data are expressed as mean±SD. The effects of the intervention were analyzed with the t-test, and Pearson's correlation test was performed. Results: After 20 weeks of treatment, body mass index (BMI), body weight, body fat (kg and %), and abdominal and waist circumferences decreased significantly. Mean high-density lipoprotein cholesterol (HDL-c) increased after the therapy. Moreover, the inflammation factor rating improved from -427.27±322.47 to -297.15±240.01, suggesting beneficial effects of nutritional counselling. The correlation analysis found that a pro-inflammatory diet is associated with increases in BMI, very low-density lipoprotein cholesterol (VLDL), triglycerides, insulin, and the insulin resistance index (HOMA-IR), while an anti-inflammatory diet is associated with improvements in HDL-c and the Quantitative Insulin Sensitivity Check Index (QUICKI). Conclusion: The 20-week blended multidisciplinary therapy was effective in reducing body weight and anthropometric circumferences and in improving inflammatory markers in obese adolescents. In addition, our results showed that a more inflammatory diet profile is associated with worse cardiometabolic parameters, suggesting the relevance of stimulating anti-inflammatory dietary habits as an effective strategy to treat and control obesity and related comorbidities. Financial Support: FAPESP (2017/07372-1) and CNPq (409943/2016-9).
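
For readers who wish to reproduce the style of analysis, a minimal sketch follows, assuming SciPy and synthetic data: a paired t-test on pre/post IFR scores and a Pearson correlation between IFR change and a marker change. Only the baseline mean and SD are taken from the text; everything else is hypothetical.

```python
# Sketch: paired t-test and Pearson correlation on synthetic IFR data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 26  # matches the study's sample size
ifr_before = rng.normal(-427.27, 322.47, n)       # baseline mean±SD from text
ifr_after = ifr_before + rng.normal(130, 80, n)   # hypothetical improvement

t, p = stats.ttest_rel(ifr_after, ifr_before)
print(f"paired t-test: t = {t:.2f}, p = {p:.4f}")

# Hypothetical HDL-c change, weakly coupled to the IFR change.
hdl_change = 0.01 * (ifr_after - ifr_before) + rng.normal(0, 2, n)
r, p_r = stats.pearsonr(ifr_after - ifr_before, hdl_change)
print(f"Pearson r = {r:.2f}, p = {p_r:.4f}")
```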

Keywords: cardiometabolic risk, inflammatory diet, multidisciplinary therapy, obesity

Procedia PDF Downloads 174
135 Teaching about Justice With Justice: How Using Experiential, Learner Centered Literacy Methodology Enhances Learning of Justice Related Competencies for Young Children

Authors: Bruna Azzari Puga, Richard Roe, Andre Pagani de Souza

Abstract:

This abstract outlines a proposed study to examine how, and to what extent, interactive, experiential, learner-centered methodology develops learning of basic civic and democratic competencies among young children. It stems from the Literacy and Law course taught at Georgetown University Law Center in Washington, DC, since 1998. Law students, trained in best literacy practices and in legal cases affecting literacy development, read “law related” children’s books and engage in interactive and extension activities with emerging readers. The law students write a monthly journal describing their experiences and a final paper: a conventional paper or a children’s book illuminating some aspect of literacy and law. This proposal is based on the recent adaptation of Literacy and Law to Brazil, at Mackenzie Presbyterian University in São Paulo, in three forms: first, a course similar to the US model, often conducted jointly online with Brazilian and US law students; second, a similar course that combines readings of children’s literature with activity-based learning, with law students from a satellite Mackenzie campus, for young children from a vulnerable community near the city; and third, a course taught by law students at the main Mackenzie campus for 4th grade students at the Mackenzie elementary school that is wholly activity and discourse based. The workings and outcomes of these courses are well documented by photographs, reports, lesson plans, and law student journals. The authors, faculty who teach the above courses at Mackenzie and Georgetown, observe that literacy, broadly defined as cognitive and expressive development through reading and discourse-based activities, can be influential in developing democratic civic skills, identifiable by explicit civic competencies. For example, children experience justice in the classroom through cooperation, creativity, diversity, fairness, systemic thinking, and appreciation for rules and their purposes. Moreover, the learning of civic skills, as well as of literacy skills, is enhanced through interactive, learner-centered practices in which the learners experience literacy and civic development. This study will develop rubrics for individual and classroom teaching and supervision by examining 1) the children’s books and diaries of participating law students, 2) the collection of photos and videos of classroom activities, and 3) faculty and supervisor observations and reports. These rubrics, and the lesson plans and activities employed to advance higher levels of performance outcomes, will be useful in training and supervision and in further replication and promotion of this form of teaching and learning. Examples of outcomes include helping, cooperating and participating; appreciation of viewpoint diversity; knowledge and utilization of democratic processes, including due process, advocacy, individual and shared decision making, consensus building, and voting; and establishing and valuing appropriate rules and a reasoned approach to conflict resolution. In conclusion, further development and replication of the learner-centered literacy and law practices outlined here can lead to improved qualities of democratic teaching and learning supporting mutual respect, positivity, deep learning, and the common good – foundational qualities of a sustainable world.

Keywords: democracy, law, learner-centered, literacy

Procedia PDF Downloads 93
134 Video Analytics on Pedagogy Using Big Data

Authors: Jamuna Loganath

Abstract:

Education is the key to the development of any individual’s personality, and today’s students will be tomorrow’s citizens of the global society. The education of the student is the edifice on which his or her future will be built, so schools should provide an all-round development of students to foster a healthy society. The behaviors and attitudes of students in school play an essential role in the success of the education process. Frequent reports of misbehaviors such as clowning, harassing classmates, and verbal insults are becoming common in schools today. If this issue is left unattended, it may foster negative attitudes and increase delinquent behavior, so the need of the hour is to find a solution to this problem. To solve this issue, it is important to monitor students’ behaviors in school, give necessary feedback, and mentor them to develop a positive attitude and become successful adults. Nevertheless, measuring students’ behavior and attitude is extremely challenging: no present technology has proven effective in this measurement process, because the actions, reactions, interactions and responses of students are rarely captured as data due to their complexity. The purpose of this proposal is to recommend an effective supervising system, after carrying out a feasibility study, by measuring the behavior of students. This can be achieved by equipping schools with CCTV cameras. CCTV cameras installed in various schools around the world capture the facial expressions and interactions of students inside and outside the classroom. The real-time raw videos captured by the CCTV cameras can be uploaded to the cloud over a network. The video feeds are distributed across various nodes, in the same rack or on different racks in the same cluster, in Hadoop HDFS. The video feeds are converted into small frames and analyzed using various pattern recognition algorithms and the MapReduce paradigm. The video frames are then compared with a benchmarking database of good behavior. When misbehavior is detected, an alert message can be sent to the counseling department, which helps in mentoring the students. This will help in improving the effectiveness of the education process. As video feeds come from multiple geographical areas (schools from different parts of the world), big data helps in real-time analysis, as it computationally reveals patterns, trends, and associations, especially those relating to human behavior and interactions. It also handles data that cannot be analyzed by traditional software applications such as RDBMSs and OODBMSs, and it has proven successful in handling human reactions with ease. Therefore, big data could certainly play a vital role in handling this issue, and the effectiveness of the education process can be enhanced with the help of video analytics using the latest big data technology.
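
As a toy illustration of the frame-level comparison stage, the sketch below uses OpenCV to split a clip into frames and flag those that deviate strongly from a benchmark reference frame. The file names and threshold are hypothetical, and the proposed system's pattern recognition and MapReduce stages are not shown.

```python
# Sketch: extract frames from a CCTV clip and flag large deviations
# from a "benchmark" reference frame as a crude anomaly score.
import cv2
import numpy as np

benchmark = cv2.imread("benchmark_classroom.png", cv2.IMREAD_GRAYSCALE)
cap = cv2.VideoCapture("cctv_clip.mp4")  # hypothetical clip
flagged, idx = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (benchmark.shape[1], benchmark.shape[0]))
    # Mean absolute pixel difference as a naive deviation score.
    score = np.mean(cv2.absdiff(gray, benchmark))
    if score > 40:  # hypothetical alert threshold
        flagged.append(idx)
    idx += 1
cap.release()
print(f"{len(flagged)} of {idx} frames flagged for review")
```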

Keywords: big data, cloud, CCTV, education process

Procedia PDF Downloads 220
133 Topological Language for Classifying Linear Chord Diagrams via Intersection Graphs

Authors: Michela Quadrini

Abstract:

Chord diagrams occur across mathematics, from the study of RNA to knot theory. They are widely used in the theory of knots and links for studying finite type invariants, whereas in molecular biology one important motivation for studying chord diagrams is the problem of RNA structure prediction. An RNA molecule is a linear polymer, referred to as the backbone, that consists of four types of nucleotides. Each nucleotide is represented by a point, whereas each chord of the diagram stands for a Watson-Crick base pair interaction between two nonconsecutive nucleotides. A chord diagram is an oriented circle with a set of n pairs of distinct points, considered up to orientation-preserving diffeomorphisms of the circle. A linear chord diagram (LCD) is a special kind of graph obtained by cutting the oriented circle of a chord diagram. It consists of a line segment, called its backbone, to which a number of chords with distinct endpoints are attached. There is a natural fattening on any linear chord diagram: the backbone lies on the real axis, while all the chords are in the upper half-plane, and each linear chord diagram has a natural genus for its associated surface. To each chord diagram and linear chord diagram it is possible to associate the intersection graph: a graph whose vertices correspond to the chords of the diagram, and whose edges represent chord intersections. Such an intersection graph carries a lot of information about the diagram. Our goal is to define an LCD equivalence class in terms of identity of intersection graphs, on which many chord diagram invariants depend. For studying these invariants, we introduce a new representation of linear chord diagrams based on a set of appropriate topological operators that permits modeling LCDs in terms of the relations among chords. The set is composed of crossing, nesting, and concatenation. The crossing operator is able to generate the whole space of linear chord diagrams, and a multiple context-free grammar is defined that uniquely generates each LCD, starting from a linear chord diagram and adding a chord for each production of the grammar. In other words, it allows a unique algebraic term to be associated with each linear chord diagram, while the remaining operators allow the term to be rewritten through a set of appropriate rewriting rules. These rules define an LCD equivalence class in terms of the identity of intersection graphs. Starting from a modelled RNA molecule and the linear chord diagram, some authors have proposed a topological classification and folding. Our LCD equivalence class could contribute to the RNA folding problem, leading to the definition of an algorithm that calculates the free energy of the molecule more accurately than existing ones. The LCD equivalence class could also be useful for obtaining a more accurate estimate of the link between the crossing number and the topological genus, and for studying the relations among other invariants.
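
To make the intersection graph concrete, the sketch below builds it for a small linear chord diagram: chords are given as endpoint pairs on the backbone, and two chords are adjacent in the graph exactly when they cross, i.e., when their endpoints interleave. The example diagram is arbitrary.

```python
# Sketch: intersection graph of a linear chord diagram.
import networkx as nx

def intersection_graph(chords):
    """chords: list of (a, b) endpoint pairs with a < b on the backbone."""
    g = nx.Graph()
    g.add_nodes_from(range(len(chords)))
    for i, (a1, b1) in enumerate(chords):
        for j, (a2, b2) in enumerate(chords):
            # Chords cross iff their endpoints interleave along the backbone.
            if i < j and ((a1 < a2 < b1 < b2) or (a2 < a1 < b2 < b1)):
                g.add_edge(i, j)
    return g

# Example: chord 0 crosses chord 1, chord 2 nests inside chord 0,
# and chord 2 crosses chord 1 -> edges (0, 1) and (1, 2).
g = intersection_graph([(0, 4), (2, 6), (1, 3)])
print(sorted(g.edges()))  # [(0, 1), (1, 2)]
```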

Keywords: chord diagrams, linear chord diagram, equivalence class, topological language

Procedia PDF Downloads 179
132 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are interesting to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of agile and accurate methodologies to characterize the material – that is, to determine its composition, shape, size, and the number of layers and crystals. To address this, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM images of graphene oxide. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of crystal area and perimeter. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, over the test set. Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since its performance holds under significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-performance measurement for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared: this will minimize crystal overlap during SEM image acquisition and guarantee lower measurement error without greater effort in data handling. All in all, the method developed is a significant time saver with high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
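
As an illustration of the measurement-extraction step, a minimal sketch assuming scikit-image follows: given a binary segmentation mask (the U-net itself is not shown), it labels each crystal and records position, area, perimeter, and lateral extents. The mask file name is hypothetical.

```python
# Sketch: measure segmented crystals from a binary U-net output mask.
from skimage import io, measure

mask = io.imread("unet_mask.png", as_gray=True) > 0   # binary mask
labels = measure.label(mask)                          # delimit each crystal
props = measure.regionprops(labels)

rows = []
for p in props:
    minr, minc, maxr, maxc = p.bbox
    rows.append({
        "centroid": p.centroid,        # position (row, col)
        "area_px": p.area,
        "perimeter_px": p.perimeter,
        "height_px": maxr - minr,      # lateral measures from bounding box
        "width_px": maxc - minc,
    })
print(f"measured {len(rows)} crystals")
```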

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 137
131 Bi-Directional Impulse Turbine for Thermo-Acoustic Generator

Authors: A. I. Dovgjallo, A. B. Tsapkova, A. A. Shimanov

Abstract:

The paper is devoted to one type of engine with external heating – the thermoacoustic engine. In a thermoacoustic engine, heat energy is converted to acoustic energy. This acoustic energy of the oscillating gas flow must then be converted to mechanical energy, which in turn must be converted to electric energy. The most widely used way of transforming acoustic energy into electric energy is the application of a linear generator or a conventional generator with a crank mechanism. In both cases a piston is used, whose main disadvantages are friction losses, lubrication problems, and working fluid pollution, which decrease engine power and ecological efficiency. The use of a bidirectional impulse turbine as an energy converter is suggested instead. The distinctive feature of this kind of turbine is that the shock wave of the oscillating gas flow passing through the turbine is reflected and passes through the turbine again in the opposite direction, while the direction of turbine rotation does not change in the process. Different types of bidirectional impulse turbines for thermoacoustic engines are analyzed. The Wells turbine is the simplest and least efficient of them; a radial impulse turbine has a more complicated design and is more efficient than the Wells turbine. The most appropriate type was chosen: an axial impulse turbine, which has a simpler design than the radial turbine and similar efficiency. The peculiarities of the method for calculating an impulse turbine are discussed, including changes in gas pressure and velocity as functions of time during the generation of shock waves by the oscillating gas flow in a thermoacoustic system. In a thermoacoustic system, pressure constantly changes according to a certain law due to acoustic wave generation; the peak pressure values are the amplitude, which determines the acoustic power. Gas flowing in a thermoacoustic system periodically changes direction, and its mean velocity is zero, but its peak values can be used to rotate a bidirectional turbine. In contrast with a conventional feed turbine, the described turbine operates on unsteady oscillating flows with direction changes, which significantly influences the algorithm of its calculation. The calculated power output is 150 W at a rotational speed of 12,000 r/min and a pressure amplitude of 1.7 kPa. Then, 3D modeling and a numerical study of the impulse turbine were carried out, yielding the main parameters of the working fluid in the turbine. On the basis of the theoretical and numerical data, a model of the impulse turbine was made on a 3D printer, and an experimental unit was designed to verify the numerical modeling results, with an acoustic speaker used as the acoustic wave generator. Analysis of the acquired data shows that the use of the bidirectional impulse turbine is advisable: by its characteristics as a converter, it is comparable with linear electric generators, but its service life will be longer and the engine itself smaller, thanks to the turbine's rotary motion.
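
As a back-of-the-envelope check of the quantities involved (not the authors' calculation), the sketch below estimates the time-averaged acoustic power of an ideal traveling plane wave, W = ½·p_a·u_a·A with u_a = p_a/(ρc), using the paper's 1.7 kPa amplitude and a hypothetical duct area. Matching the reported 150 W output depends on the actual duct geometry, working gas, and phase between pressure and velocity.

```python
# Sketch: acoustic power of an ideal traveling plane wave.
rho = 1.2      # assumed working gas density (air), kg/m^3
c = 343.0      # assumed speed of sound, m/s
p_a = 1.7e3    # pressure amplitude from the paper, Pa
A = 0.01       # hypothetical duct cross-section, m^2

u_a = p_a / (rho * c)      # velocity amplitude for a plane wave, m/s
W = 0.5 * p_a * u_a * A    # time-averaged acoustic power, W
print(f"u_a = {u_a:.2f} m/s, acoustic power ~ {W:.0f} W")
```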

Keywords: acoustic power, bi-directional impulse turbine, linear alternator, thermoacoustic generator

Procedia PDF Downloads 349