Search results for: maximal data sets
25464 Energy Consumption and GHG Production in Railway and Road Passenger Regional Transport
Authors: Martin Kendra, Tomas Skrucany, Jozef Gnap, Jan Ponicky
Abstract:
This paper deals with the modeling and simulation of energy consumption and GHG production of two different modes of regional passenger transport – road and railway. These two transport modes use the same type of fuel – diesel. Modeling and simulation of the energy consumption in transport are often used due to their satisfactory accuracy and cost efficiency. The paper presents a calculation based on EN standards, technical information from vehicle producers, and the characteristics of the tracks. The calculation includes the maximal theoretical capacity of the bus and train as well as real passenger counts measured in operation. The final energy consumption and GHG production are calculated using software simulation. The simulation is evaluated using the 'well-to-wheel' system.
Keywords: bus, consumption energy, GHG, production, simulation, train
Procedia PDF Downloads 443
25463 Training Volume and Myoelectric Responses of Lower Body Muscles with Differing Foam Rolling Periods
Authors: Humberto Miranda, Haroldo G. Santana, Gabriel A. Paz, Vicente P. Lima, Jeffrey M. Willardson
Abstract:
Foam rolling is a practice that has increased in popularity before and after strength training. The purpose of this study was to compare the acute effects of different foam rolling periods for the lower body muscles on subsequent performance (total repetitions and training volume), myoelectric activity, and rating of perceived exertion in trained men. Fourteen trained men (26.2 ± 3.2 years, 178 ± 0.04 cm height, 82.2 ± 10 kg weight, and body mass index 25.9 ± 3.3 kg/m2) volunteered for this study. Four repetition maximum (4-RM) loads were determined for the hexagonal bar deadlift and 45º angled leg press during test and retest sessions over two nonconsecutive days. Five experimental protocols were applied in a randomized design, which included: a traditional protocol (control) - a resistance training session without prior foam rolling; or resistance training sessions performed following one (P1), two (P2), three (P3), or four (P4) sets of 30 sec. foam rolling for the lower extremity musculature. Subjects were asked to roll over the medial and lateral aspects of each muscle group with as much pressure as possible. All foam rolling was completed at a cadence of 50 bpm. These procedures were performed unilaterally on each side within the following regions. Quadriceps: between the apex of the patella and the ASIS; Hamstring: between the gluteal fold and popliteal fossa; Triceps surae: between the popliteal fossa and calcaneus tendon. The resistance training consisted of five sets with 4-RM loads and two-minute rest intervals between sets, and a four-minute rest interval between the hexagonal bar deadlift and the 45º angled leg press. The number of repetitions completed and the myoelectric activity of the vastus lateralis (VL), vastus medialis oblique (VMO), semitendinosus (SM) and medial gastrocnemius (GM) were recorded, as well as the rating of perceived exertion for each protocol.
There were no differences between the protocols in the total repetitions for the hexagonal bar deadlift (Control - 16.2 ± 5.9; P1 - 16.9 ± 5.5; P2 - 19.2 ± 5.7; P3 - 19.4 ± 5.2; P4 - 17.2 ± 8.2) (p > 0.05) and 45º angled leg press (Control - 23.3 ± 9.7; P1 - 25.9 ± 9.5; P2 - 29.1 ± 13.8; P3 - 28.0 ± 11.7; P4 - 30.2 ± 11.2) exercises. Similar results between protocols were also noted for myoelectric activity (p > 0.05) and rating of perceived exertion (p > 0.05). Therefore, the results of the present study indicated no deleterious effects on performance, myoelectric activity and rating of perceived exertion responses during lower body resistance training.
Keywords: self myofascial release, foam rolling, electromyography, resistance training
Procedia PDF Downloads 225
25462 Scrutiny and Solving Analytically Nonlinear Differential at Engineering Field of Fluids, Heat, Mass and Wave by New Method AGM
Authors: Mohammadreza Akbari, Sara Akbari, Davood Domiri Ganji, Pooya Solimani, Reza Khalili
Abstract:
As experts know, most engineering system behavior in practice is nonlinear (especially in heat, fluid, and mass problems), and solving these problems analytically (rather than numerically) is difficult, complex, and sometimes impossible; for example, fluid and gas wave problems cannot be solved numerically when no boundary conditions are available. Accordingly, in this paper we present an innovative approach, which we have named Akbari-Ganji's Method (AGM), that can solve sets of coupled nonlinear differential equations (ODEs and PDEs) with high accuracy and a simple solution procedure. This claim is supported by comparing the achieved solutions with the numerical method (Runge-Kutta 4th order), with other methods such as HPM and ADM, and with exact solutions. We argue that AGM can bring substantial benefits to researchers, professors, and students in engineering and basic science worldwide, because with the AGM coding system all complicated linear and nonlinear differential equations can be solved analytically, so that there is no difficulty in solving nonlinear differential equations (ODE and PDE). In this paper, we investigate and solve four types of nonlinear differential equations with the AGM method: (1) heat and fluid, (2) unsteady-state nonlinear partial differential equations, (3) coupled nonlinear partial differential wave equations, and (4) nonlinear integro-differential equations.
Keywords: new method AGM, sets of coupled nonlinear equations at engineering field, waves equations, integro-differential, fluid and thermal
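For readers who want a concrete reference point, the abstract names the Runge-Kutta 4th-order scheme as its numerical comparison baseline. A minimal sketch of that integrator follows; the test problem y' = y is an illustrative choice, not one of the paper's equations, and the AGM method itself is not reproduced here.

```python
import math

def rk4(f, y0, t0, t1, n):
    """Classic 4th-order Runge-Kutta integrator for y' = f(t, y)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# y' = y with y(0) = 1 has the exact solution e^t; compare at t = 1.
approx = rk4(lambda t, y: y, 1.0, 0.0, 1.0, 100)
print(abs(approx - math.e))  # global error well below 1e-8
```

Such a baseline is what analytical candidate solutions (from AGM or any other semi-analytical method) would be checked against.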
Procedia PDF Downloads 546
25461 Development and Validation of First Derivative Method and Artificial Neural Network for Simultaneous Spectrophotometric Determination of Two Closely Related Antioxidant Nutraceuticals in Their Binary Mixture
Authors: Mohamed Korany, Azza Gazy, Essam Khamis, Marwa Adel, Miranda Fawzy
Abstract:
Background: Two new, simple and specific methods were developed and validated in accordance with ICH guidelines: first, a zero-crossing first-derivative technique, and second, a chemometric-assisted spectrophotometric artificial neural network (ANN). Both methods were used for the simultaneous estimation of two closely related antioxidant nutraceuticals: Coenzyme Q10 (Q), also known as Ubidecarenone or Ubiquinone-10, and Vitamin E (E), alpha-tocopherol acetate, in their pharmaceutical binary mixture. Results: In the first method, by applying the first derivative, Q and E were alternately determined, each at the zero-crossing of the other. The D1 amplitudes of Q and E, at 285 nm and 235 nm respectively, were recorded and correlated to their concentrations. The calibration curves are linear over the concentration ranges of 10-60 and 5.6-70 μg mL-1 for Q and E, respectively. In the second method, an ANN (as a multivariate calibration method) was developed and applied for the simultaneous determination of both analytes. A training set (or a concentration set) of 90 different synthetic mixtures containing Q and E, in wide concentration ranges between 0-100 µg/mL and 0-556 µg/mL respectively, was prepared in ethanol. The absorption spectra of the training sets were recorded in the spectral region of 230–300 nm. A gradient-descent back-propagation ANN chemometric calibration was computed by relating the concentration sets (x-block) to their corresponding absorption data (y-block). Another set of 45 synthetic mixtures of the two drugs, in the defined range, was used to validate the proposed network. Neither chemical separation, a preparation stage, nor mathematical graphical treatment was required. Conclusions: The proposed methods were successfully applied for the assay of Q and E in laboratory-prepared mixtures and a combined pharmaceutical tablet with excellent recoveries.
The ANN method was superior to the derivative technique, as the former determined both drugs under non-linear experimental conditions. It also offers rapidity, high accuracy, and savings in effort and money, and requires no analyst intervention. Although the ANN technique needed a large training set, it is the method of choice for the routine analysis of Q and E tablets. No interference was observed from common pharmaceutical additives. The results of the two methods were compared with each other.
Keywords: coenzyme Q10, vitamin E, chemometry, quantitative analysis, first derivative spectrophotometry, artificial neural network
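The zero-crossing first-derivative idea can be illustrated with synthetic spectra. The Gaussian band shapes and band centers below are invented for illustration and are not the measured spectra of Q and E; the point is only that, at the wavelength where one component's derivative crosses zero, the mixture's D1 amplitude is linear in the other component's concentration.

```python
import numpy as np

wl = np.linspace(230, 300, 701)  # wavelength grid, nm

def band(center, width):
    """Synthetic Gaussian absorption band (stand-in spectrum)."""
    return np.exp(-((wl - center) / width) ** 2)

# Hypothetical band shapes for the two analytes (illustrative only).
spec_Q = band(275, 12)
spec_E = band(245, 10)

def d1(spectrum):
    """First-derivative spectrum dA/d(lambda)."""
    return np.gradient(spectrum, wl)

# For a symmetric band, the D1 zero-crossing sits at the absorbance
# maximum; at that wavelength the mixture's D1 depends on Q alone.
zc = int(np.argmax(spec_E))
amps = []
for c_q in (10.0, 20.0, 40.0):
    mixture = c_q * spec_Q + 30.0 * spec_E  # fixed E, varying Q
    amps.append(d1(mixture)[zc])
print([a / amps[0] for a in amps])  # ≈ [1.0, 2.0, 4.0], i.e. linear in Q
```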
Procedia PDF Downloads 446
25460 The Intensity of Load Experienced by Female Basketball Players during Competitive Games
Authors: Tomas Vencurik, Jiri Nykodym
Abstract:
This study compares the intensity of game load among player positions and between the 1st and the 2nd half of the games. Two guards, three forwards, and three centers (female basketball players) participated in this study. The heart rate (HR) and its development were monitored during two competitive games. Statistically insignificant differences in the intensity of game load were recorded between guards, forwards, and centers below and above 85% of the maximal heart rate (HRmax) and in the mean HR as % of HRmax (87.81±3.79%, 87.02±4.37%, and 88.76±3.54%, respectively). Moreover, when the 1st and the 2nd half of the games were compared in the mean HR (87.89±4.18% vs. 88.14±3.63% of HRmax), no statistical significance was recorded. This information can be useful for coaching staff to manage and precisely plan the training process.
Keywords: game load, heart rate, player positions, the 1st, the 2nd half of the games
Procedia PDF Downloads 569
25459 OILU Tag: A Projective Invariant Fiducial System
Authors: Youssef Chahir, Messaoud Mostefai, Salah Khodja
Abstract:
This paper presents the development of a 2D visual marker derived from recent patented work in the field of numbering systems. The proposed fiducial uses a group of projective invariant straight-line patterns that are easily detectable and remotely recognizable. Based on an efficient data coding scheme, the developed marker enables producing a large panel of unique real-time identifiers with highly distinguishable patterns. The proposed marker incorporates both decimal and binary information simultaneously, making it readable by humans and machines alike. This important feature opens up new opportunities for the development of efficient visual human-machine communication and monitoring protocols. Extensive experimental tests validate the robustness of the marker against acquisition and geometric distortions.
Keywords: visual markers, projective invariants, distance map, level sets
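Although the patented OILU coding scheme itself is not public, the property that straight-line patterns rely on can be illustrated with the classical cross-ratio, the standard projective invariant of four collinear points; the homography coefficients below are arbitrary illustrative values.

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (a,b;c,d) of four collinear points: invariant under
    any projective transformation of the line."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def homography(x, m=(2.0, 1.0, 0.5, 3.0)):
    """1D projective transformation x -> (p*x + q) / (r*x + s)."""
    p, q, r, s = m
    return (p * x + q) / (r * x + s)

pts = [0.0, 1.0, 2.0, 5.0]
before = cross_ratio(*pts)
after = cross_ratio(*[homography(x) for x in pts])
print(before, after)  # identical up to floating-point error
```

This is why such patterns stay recognizable under the perspective (acquisition) distortions mentioned in the abstract.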
Procedia PDF Downloads 163
25458 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging
Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland
Abstract:
A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types for a given site to improve the results of subsurface imaging. Regardless of the chosen survey methods, it is often a challenge to process the massive amount of survey data. The currently available software applications are generally based on the one-dimensional assumption for a desktop personal computer. Hence, they are usually incapable of imaging three-dimensional (3D) processes/variables in the subsurface at reasonable spatial scales; the maximum amount of data that can be inverted simultaneously is often very small due to the capability limitations of personal computers. Presently, high-performance, integrated software that enables real-time integration of multi-process geophysical methods is needed. E4D-MP enables the integration and inversion of time-lapsed large-scale data surveys from geophysical methods. Using supercomputing capability and a parallel computation algorithm, E4D-MP is capable of processing data across vast spatiotemporal scales and in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for successful control of environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.
Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography
Procedia PDF Downloads 156
25457 Massachusetts Homeschool Policy: An Interpretive Analysis of Homeschool Regulation and Oversight
Authors: Lauren Freed
Abstract:
This research proposal outlines an examination of homeschool oversight in the Massachusetts educational system amid the backdrop of ideological differences between various parties with contributing interests. This mixed-methodology study will follow an interpretive policy research approach, involving the use of existing data, surveys, and focus groups. The aim is to capture distinct sets of meanings, values, feelings, and beliefs held by principal stakeholders, while exploring the ways in which each interacts with, interprets, and implements the homeschool guidelines set forth by the Massachusetts Supreme Judicial Court decision Care and Protection of Charles (1987). This analysis will identify and contextualize the attitudes, administrative choices, financial implications, and educational impacts that result from the process and practice of enacting current homeschool oversight policy in Massachusetts. The following question will guide this study: How do districts, homeschooling parents, and the Massachusetts Department of Elementary and Secondary Education (DESE) regulate, fund, collect, interpret, implement and report Massachusetts homeschool oversight policy? The resulting analysis will produce a unique and original baseline snapshot of qualitative and quantifiable point-in-time data based on the registered homeschool population in the state of Massachusetts.
Keywords: alternative education, homeschooling, home education, home schooling policy
Procedia PDF Downloads 187
25456 Preparation of Papers - Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments
Authors: Skyler Kim
Abstract:
An early diagnosis of leukemia has always been a challenge to doctors and hematologists. On a worldwide basis, it was reported that there were approximately 350,000 new cases in 2012, and diagnosing leukemia was time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnosis tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods was the AI approach. This approach has become a major trend in recent years, and several research groups have been working on developing these diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger sets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity at different ages. We decided to select acute lymphocytic leukemia to develop our diagnostic system, since acute lymphocytic leukemia is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia. The results from this development work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, which consists of 15135 total images; 8491 of these are images of abnormal cells, and 5398 images are normal. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger sets. The proposed diagnostic system has the function of detecting and classifying leukemia. Different from other AI approaches, we explore hybrid architectures to improve the current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50.
Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, the features fused from specific abstraction layers can be treated as auxiliary features and lead to further improvement of the classification accuracy. In this approach, features extracted from the lower levels are combined into higher-dimension feature maps to help improve the discriminative capability of intermediate features and also overcome the problem of network gradients vanishing or exploding. By comparing VGG19, ResNet50, and the proposed hybrid model, we concluded that the hybrid model had a significant advantage in accuracy. The detailed results of each model’s performance and their pros and cons will be presented at the conference.
Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning
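The fusion step described above can be sketched schematically. The random projections below are stand-ins for the truncated VGG19/ResNet50 backbones, so this illustrates feature concatenation only, not the authors' actual network or weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "backbones": in the paper these would be VGG19 and ResNet50
# truncated at chosen abstraction layers; here they are random projections.
W_a = rng.normal(size=(64, 32))   # branch A: 64-dim input -> 32-dim features
W_b = rng.normal(size=(64, 48))   # branch B: 64-dim input -> 48-dim features

def extract_features(x):
    """Late fusion: run both extractors and concatenate their outputs."""
    feats_a = np.maximum(x @ W_a, 0.0)  # ReLU features from branch A
    feats_b = np.maximum(x @ W_b, 0.0)  # ReLU features from branch B
    return np.concatenate([feats_a, feats_b], axis=-1)

batch = rng.normal(size=(8, 64))        # 8 fake "images" as flat vectors
fused = extract_features(batch)
print(fused.shape)  # (8, 80): 32 + 48 fused feature dimensions
```

A classifier head would then be trained on the fused 80-dimensional vectors.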
Procedia PDF Downloads 187
25455 Simulation of X-Ray Tissue Contrast and Dose Optimisation in Radiological Physics to Improve Medical Imaging Students’ Skills
Authors: Peter J. Riley
Abstract:
Medical Imaging students must understand the roles of Photo-electric Absorption (PE) and Compton Scatter (CS) interactions in patients to enable optimal X-ray imaging in clinical practice. A simulator has been developed that shows relative interaction probabilities, color bars for patient dose from PE, % penetration to the detector, and obscuring CS as the Peak Kilovoltage (kVp) changes. Additionally, an anthropomorphic chest X-ray image shows the relative tissue contrasts and overlying CS-fog at that kVp, which determine the detectability of a lesion in the image. A series of interactive exercises with MCQs evaluates the student's understanding; the simulation has improved student perception of the need to acquire "sufficient" rather than maximal contrast to enable patient dose reduction at higher kVp.
Keywords: patient dose optimization, radiological physics, simulation, tissue contrast
Procedia PDF Downloads 95
25454 Variations in the Frequency-Magnitude Distribution with Depth in Kalabsha Area, Aswan, South Egypt
Authors: Ezzat Mohamed El-Amin
Abstract:
Mapping the earthquake-size distribution in various tectonic regimes on a local to regional scale reveals statistically significant variations in the range of at least 0.4 to 2.0 for the b-value in the frequency-magnitude distribution. We map the earthquake frequency–magnitude distribution (b value) as a function of depth in the Reservoir Triggered Seismicity (RTS) region of Kalabsha, south Egypt. About 1680 well-located events recorded during 1981–2014 in the Kalabsha region were selected for the analysis. The earthquake data sets were separated into 5 km zones from 0 to 25 km depth. The result shows a systematic decrease in b value down to 12 km, followed by an increase. The increase in b value is interpreted to be caused by the presence of fluids. We also investigate the spatial distribution of the b value with depth. Significant variations in the b value are detected, with b ranging from 0.7 to 1.19. Low b value areas at 5 km depth indicate localized high stresses, which are favorable for future rupture.
Keywords: seismicity, frequency-magnitude, b-value, earthquake
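A standard way to obtain the b value used throughout this abstract is the Aki/Utsu maximum-likelihood estimator for a Gutenberg-Richter catalog. The sketch below runs it on a synthetic catalog; the Kalabsha data themselves are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

def estimate_b(mags, mc):
    """Aki/Utsu maximum-likelihood b-value for a catalog complete above mc:
    b = log10(e) / (mean(M) - mc)."""
    return np.log10(np.e) / (np.mean(mags) - mc)

# Synthetic Gutenberg-Richter catalog: log10 N(>M) ~ -b*M with true b = 1.0,
# i.e. magnitudes above the completeness threshold are exponential.
b_true, mc = 1.0, 2.0
mags = mc + rng.exponential(scale=1.0 / (b_true * np.log(10)), size=100_000)
print(round(estimate_b(mags, mc), 3))  # close to the true value 1.0
```

Binning such estimates by 5 km depth zones, as the abstract describes, yields the b-versus-depth profile.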
Procedia PDF Downloads 556
25453 A Support Vector Machine Learning Prediction Model of Evapotranspiration Using Real-Time Sensor Node Data
Authors: Waqas Ahmed Khan Afridi, Subhas Chandra Mukhopadhyay, Bandita Mainali
Abstract:
The research paper presents a unique approach to evapotranspiration (ET) prediction using a Support Vector Machine (SVM) learning algorithm. The study leverages real-time sensor node data to develop an accurate and adaptable prediction model, addressing the inherent challenges of traditional ET estimation methods. The integration of the SVM algorithm with real-time sensor node data offers great potential to improve spatial and temporal resolution in ET predictions. In the model development, key input features are measured and computed using mathematical equations such as Penman-Monteith (FAO56) and the soil water balance (SWB); these include soil-environmental parameters such as solar radiation (Rs), air temperature (T), atmospheric pressure (P), relative humidity (RH), wind speed (u2), rain (R), deep percolation (DP), soil temperature (ST), and change in soil moisture (∆SM). The one-year field data are split into three proportions, i.e., train, test, and validation sets, while kernel functions with tuned hyperparameters are used to train and improve the accuracy of the prediction model over multiple iterations. This paper also outlines the existing methods and machine learning techniques for determining evapotranspiration, data collection and preprocessing, model construction, and evaluation metrics, highlighting the significance of SVM in advancing the field of ET prediction. The results demonstrate the robustness and high predictability of the developed model on the basis of performance evaluation metrics (R2, RMSE, MAE). The effectiveness of the proposed model in capturing complex relationships within soil and environmental parameters provides insights into its potential applications for water resource management and the hydrological ecosystem.
Keywords: evapotranspiration, FAO56, KNIME, machine learning, RStudio, SVM, sensors
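The FAO56 Penman-Monteith computation that supplies the reference ET feature can be sketched directly from the standard published formula. The input values below are arbitrary illustrative weather readings, not the study's sensor data.

```python
import math

def fao56_et0(t, u2, rn, g, rh, p=101.3):
    """FAO-56 Penman-Monteith reference evapotranspiration (mm/day).

    t: mean air temperature (deg C), u2: wind speed at 2 m (m/s),
    rn: net radiation (MJ/m2/day), g: soil heat flux (MJ/m2/day),
    rh: relative humidity (%), p: atmospheric pressure (kPa)."""
    es = 0.6108 * math.exp(17.27 * t / (t + 237.3))  # saturation vapour pressure
    ea = es * rh / 100.0                             # actual vapour pressure
    delta = 4098.0 * es / (t + 237.3) ** 2           # slope of the es curve
    gamma = 0.665e-3 * p                             # psychrometric constant
    num = 0.408 * delta * (rn - g) + gamma * 900.0 / (t + 273.0) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))

et0 = fao56_et0(t=25.0, u2=2.0, rn=15.0, g=0.0, rh=60.0)
print(round(et0, 2))  # ≈ 5.53 mm/day for a warm, moderately dry day
```

In the paper's pipeline, daily values like this (plus the soil water balance terms) form the feature/target set that the SVM is trained on.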
Procedia PDF Downloads 69
25452 The Application of Sequence Stratigraphy to the Sajau (Pliocene) Coal Distribution in Berau Basin, Northeast Kalimantan, Indonesia
Authors: Ahmad Helman Hamdani, Diana Putri Hamdiana
Abstract:
The Sajau coal measures of the Berau Basin, northeastern Kalimantan, were deposited within a range of facies associations spanning a spectrum of settings from fluvial to marine. The transitional to terrestrial coal measures are dominated by siliciclastics, but they also contain three laterally extensive marine bands (mudstone). These bands act as marker horizons that enable correlation between fully marine and terrestrial facies. Examination of this range of facies and their sedimentology has enabled the development of a high-resolution sequence stratigraphic framework. Set against the established backdrop of third-order Sajau transgression, nine fourth-order sequences are recognized. Results show that, in the composite sequences, peat accumulation predominantly correlates in transitional areas with early transgressive sequence sets (TSS) and highstand sequence sets (HSS), while in more landward areas it correlates with the middle TSS to late HSS. Differences in peat accumulation regimes within the sequence stratigraphic framework are attributed to variations in subsidence and background siliciclastic input rates in different depositional settings, with these combining to produce differences in the rate of accommodation change. The preservation of coal resources in the middle to late HSS in this area was most likely related to the rise of the regional base level throughout the Sajau.
Keywords: sequence stratigraphy, coal, Pliocene, Berau basin
Procedia PDF Downloads 466
25451 Identifying Families in C-SPAN’s: U.S. Presidential Ratings: 2000, 2009, and 2017
Authors: Alexander Cramer, Kenneth Cramer
Abstract:
Since the inauguration of President George Washington in 1789, the United States of America has seen the governance of some 44 individual presidents. Although such presidents share a variety of attributes, they still differ from one another on many others. Significantly, these traits may be used to construct distinct sets of 'families' of presidents throughout American history. By comparatively analyzing data from experts on the U.S. presidency – in this case, the C-SPAN Presidential Historians Surveys from 2000, 2009, and 2017 – this article identifies a consistent set of six presidential families: the All Stars; the Conservative Visionaries; the Postwar Progressives; the Average Joes; the Forgettables; and the Regrettables. In situating these categories in history, this article argues that U.S. presidents can be accurately organized into cohesive, like-performing families whose constituents share a common set of criteria.
Keywords: C-SPAN, POTUS presidential performance, presidential ranking, presidential studies, presidential surveys, United States
Procedia PDF Downloads 195
25450 Relation of Electromyography, Strength and Fatigue During Ramp Isometric Contractions
Authors: Cesar Ferreira Amorim, Tamotsu Hirata, Runer Augusto Marson
Abstract:
The purpose of this study was to determine the effect of a strength ramp isometric contraction on changes in the surface electromyography (sEMG) signal characteristics of the hamstring muscles. All measurements were obtained from 20 healthy, well-trained adults (age 19.5 ± 0.8 yrs, body mass 63.4 ± 1.5 kg, height 1.65 ± 0.05 m). Subjects had to perform isometric ramp contractions in knee flexion with the force gradually increasing from 0 to 40% of the maximal voluntary contraction (MVC) over a 20 s period. The root mean square (RMS) amplitude of the sEMG signals obtained from the biceps femoris (caput longum) was calculated at four different strength levels (10, 20, 30, and 40% MVC) from the ramp isometric contractions (5 s windows within the 20 s task). The main result was a pronounced non-linear increase in sEMG-RMS amplitude for the muscles. The protocol described here may provide a useful index for measuring strength and neuromuscular fatigue.
Keywords: biosignal, surface electromyography, ramp contractions, strength
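The windowed RMS computation underlying this kind of sEMG analysis can be sketched as follows; the synthetic ramp signal is a crude stand-in for a real recording, and the sampling rate and window length are illustrative choices.

```python
import numpy as np

def windowed_rms(signal, fs, win_s=0.5):
    """RMS amplitude of an sEMG signal over non-overlapping windows."""
    n = int(fs * win_s)
    usable = len(signal) - len(signal) % n
    frames = signal[:usable].reshape(-1, n)
    return np.sqrt(np.mean(frames ** 2, axis=1))

# Synthetic ramp: noise whose amplitude grows linearly over 20 s at 1 kHz,
# a crude stand-in for sEMG during a 0-40% MVC ramp contraction.
fs = 1000
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)
emg = (0.1 + 0.9 * t / 20) * rng.standard_normal(t.size)

rms = windowed_rms(emg, fs)
print(rms[0] < rms[-1])  # True: RMS amplitude rises along the ramp
```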
Procedia PDF Downloads 483
25449 JavaScript Object Notation Data against eXtensible Markup Language Data in Software Applications a Software Testing Approach
Authors: Theertha Chandroth
Abstract:
This paper presents a comparative study on how to check JSON (JavaScript Object Notation) data against XML (eXtensible Markup Language) data from a software testing point of view. JSON and XML are widely used data interchange formats, each with its unique syntax and structure. The objective is to explore various techniques and methodologies for validating, comparing, and integrating JSON data with XML data and vice versa. By understanding the process of checking JSON data against XML data, testers, developers, and data practitioners can ensure accurate data representation, seamless data interchange, and effective data validation.
Keywords: XML, JSON, data comparison, integration testing, Python, SQL
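A minimal sketch of one such check, using only the Python standard library, converts a simple XML document to a nested structure and compares it with the parsed JSON. Real-world data would also need handling of attributes, repeated elements, and type coercion, which are omitted here.

```python
import json
import xml.etree.ElementTree as ET

def xml_to_dict(element):
    """Flatten a simple XML element (no attributes, no repeats) into a dict."""
    children = list(element)
    if not children:
        return element.text
    return {child.tag: xml_to_dict(child) for child in children}

# The same hypothetical payload in both interchange formats.
json_doc = '{"user": {"name": "Ada", "role": "admin"}}'
xml_doc = "<user><name>Ada</name><role>admin</role></user>"

root = ET.fromstring(xml_doc)
from_xml = {root.tag: xml_to_dict(root)}
from_json = json.loads(json_doc)

print(from_json == from_xml)  # True: both documents carry the same data
```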
Procedia PDF Downloads 140
25448 Using Machine Learning Techniques to Extract Useful Information from Dark Data
Authors: Nigar Hussain
Abstract:
Dark data is a subset of big data: data that we fail to use for future decisions. There are many open issues in existing work, and powerful tools are needed for utilizing dark data: sufficient techniques that enable users to exploit its excellence, adaptability, speed, reduced time consumption, execution, and accessibility. Another issue is how to utilize dark data to extract helpful information on which to base better choices. In this paper, we propose upgrade strategies to remove the dark side from dark data. Using a supervised model and machine learning techniques, we utilized dark data and achieved an F1 score of 89.48%.
Keywords: big data, dark data, machine learning, heatmap, random forest
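The F1 metric reported above can be computed from first principles. The sketch below shows the metric itself on a tiny invented label set; it does not reproduce the paper's model or data.

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive == p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive != p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Invented ground truth and predictions for illustration.
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]
print(round(f1_score(y_true, y_pred), 3))  # 0.8
```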
Procedia PDF Downloads 28
25447 Solving Flowshop Scheduling Problems with Ant Colony Optimization Heuristic
Authors: Arshad Mehmood Ch, Riaz Ahmad, Imran Ali Ch, Waqas Durrani
Abstract:
This study deals with the application of the Ant Colony Optimization (ACO) approach to solve the no-wait flowshop scheduling problem (NW-FSSP). The ACO algorithm so developed has been coded in Matlab. The paper covers detailed steps to apply ACO and focuses on judging the strength of ACO in relation to other solution techniques previously applied to solve the no-wait flowshop problem. The general-purpose approach was able to find reasonably accurate solutions for almost all the problems under consideration and was able to handle a fairly large spectrum of problems with far reduced CPU effort. Careful scrutiny of the results reveals that the algorithm presented results better than other approaches, such as genetic algorithms and tabu search heuristics, earlier applied to solve NW-FSSP data sets.
Keywords: no-wait, flowshop, scheduling, ant colony optimization (ACO), makespan
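Any metaheuristic for the NW-FSSP, ACO included, needs a fast makespan evaluation at its core. A minimal sketch of the no-wait makespan follows, with an exhaustive search on a toy instance standing in for the pheromone-guided search; the processing times are invented.

```python
from itertools import accumulate, permutations

def nowait_makespan(order, proc):
    """Makespan of a job order in a no-wait flow shop.

    proc[j][m] is the processing time of job j on machine m; each job must
    pass through all machines with no waiting, so consecutive jobs are
    separated by a fixed minimum start-time shift."""
    prefix = {j: [0] + list(accumulate(proc[j])) for j in order}
    start = 0
    for prev, job in zip(order, order[1:]):
        # smallest shift keeping `job` off every machine until `prev` clears it
        start += max(prefix[prev][m + 1] - prefix[job][m]
                     for m in range(len(proc[prev])))
    return start + prefix[order[-1]][-1]

# Tiny instance: 3 jobs x 2 machines. An ACO would sample orders via
# pheromone trails; at this size we can simply enumerate all of them.
proc = {0: [2, 3], 1: [1, 4], 2: [3, 1]}
best = min(permutations(proc), key=lambda o: nowait_makespan(o, proc))
print(best, nowait_makespan(best, proc))  # (1, 0, 2) with makespan 9
```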
Procedia PDF Downloads 433
25446 Distributional and Developmental Analysis of PM2.5 in Beijing, China
Authors: Alexander K. Guo
Abstract:
PM2.5 poses a large threat to people’s health and the environment and is an issue of large concern in Beijing, brought to the attention of the government by the media. In addition, both the United States Embassy in Beijing and the government of China have increased monitoring of PM2.5 in recent years and have made real-time data available to the public. This report utilizes hourly historical data (2008-2016) from the U.S. Embassy in Beijing for the first time. The first objective was to attempt to fit probability distributions to the data to better predict the number of days exceeding the standard, and the second was to uncover any yearly, seasonal, monthly, daily, and hourly patterns and trends that may arise to better understand air control policy. In these data, 66,650 hours and 2687 days provided valid data. Lognormal, gamma, and Weibull distributions were fit to the data through parameter estimation. The Chi-squared test was employed to compare the actual data with the fitted distributions. The data were used to uncover trends, patterns, and improvements in PM2.5 concentration over the period with valid data, in addition to specific periods of time that received large amounts of media attention, analyzed to gain a better understanding of the causes of air pollution. The data show a clear indication that Beijing’s air quality is unhealthy, with an average of 94.07 µg/m3 across all 66,650 hours with valid data. It was found that no distribution fit the entire dataset of all 2687 days well, but each of the three above distribution types was optimal in at least one of the yearly data sets, with the lognormal distribution found to fit recent years better. An improvement in air quality beginning in 2014 was discovered, with the first five months of 2016 reporting an average PM2.5 concentration that is 23.8% lower than the average of the same period in all years, perhaps the result of various new pollution-control policies.
It was also found that the winter and fall months contained more days in both the good and extremely polluted categories, leading to a higher average but a comparable median in these months. Additionally, the evening hours, especially in the winter, reported much higher PM2.5 concentrations than the afternoon hours, possibly due to the prohibition of trucks in the city in the daytime and the increased use of coal for heating in the colder months when residents are home in the evening. Lastly, through analysis of special intervals that attracted media attention for either unnaturally good or bad air quality, the government’s temporary pollution control measures, such as more intensive road-space rationing and factory closures, are shown to be effective. In summary, air quality in Beijing is improving steadily and does follow standard probability distributions to an extent, but it still needs improvement. The analysis will be updated when new data become available.
Keywords: Beijing, distribution, patterns, pm2.5, trends
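The lognormal fitting step can be sketched with a simple log-space maximum-likelihood fit. The synthetic sample below stands in for the hourly PM2.5 series, which is not reproduced here; the median of 70 µg/m3 and the 150 µg/m3 threshold are illustrative values only.

```python
import numpy as np

rng = np.random.default_rng(7)

def fit_lognormal(samples):
    """MLE of a lognormal: mean and std of the log-samples give mu, sigma."""
    logs = np.log(samples)
    return logs.mean(), logs.std()

def exceedance_prob(threshold, mu, sigma, n=1_000_000):
    """P(X > threshold) under the fitted lognormal, via Monte Carlo."""
    return (rng.lognormal(mu, sigma, n) > threshold).mean()

# Synthetic "hourly PM2.5" drawn from a lognormal with median 70 µg/m3.
data = rng.lognormal(mean=np.log(70), sigma=0.8, size=50_000)
mu, sigma = fit_lognormal(data)
print(round(np.exp(mu), 1))  # recovered median, close to 70
print(round(exceedance_prob(150, mu, sigma), 2))  # share of hours above 150, ≈ 0.17
```

The same fit, followed by a chi-squared comparison of observed and expected bin counts, is what the report applies year by year.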
Procedia PDF Downloads 245
25445 Upon One Smoothing Problem in Project Management
Authors: Dimitri Golenko-Ginzburg
Abstract:
A CPM network project with deterministic activity durations, in which activities require homogeneous resources with fixed capacities, is considered. The problem is to determine the optimal schedule of starting times for all network activities within their maximal allowable limits (in order not to exceed the network's critical time) so as to minimize the maximum resources required by the project at any point in time. In the case when a non-critical activity may start only at discrete moments with a pregiven time span, the problem becomes NP-complete and an optimal solution may be obtained via a look-over algorithm. For the case when a look-over requires too much computational time, an approximate algorithm is suggested. The algorithm's performance ratio, i.e., the relative accuracy error, is determined. Experimentation has been undertaken to verify the suggested algorithm.Keywords: resource smoothing problem, CPM network, look-over algorithm, lexicographical order, approximate algorithm, accuracy estimate
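The smoothing objective can be illustrated with a toy exhaustive search over discrete start times; the activities, floats, and numbers below are invented, and this brute force is only a stand-in for the paper's look-over algorithm.

```python
from itertools import product

# Hypothetical project: (duration, resource demand, allowed start times).
# The discrete allowed starts model each non-critical activity's float
# within the critical time; all values are illustrative only.
activities = [
    (3, 2, [0]),             # critical activity: start fixed at 0
    (2, 3, [0, 1, 2, 3]),    # non-critical activity with float
    (2, 2, [1, 2, 3, 4]),    # non-critical activity with float
]
HORIZON = 6  # the network's critical time, in time units

def peak_usage(starts):
    """Maximum resource usage over the horizon for a given schedule."""
    usage = [0] * HORIZON
    for (duration, demand, _), start in zip(activities, starts):
        for t in range(start, start + duration):
            usage[t] += demand
    return max(usage)

# Exhaustive enumeration of all combinations of discrete start times.
best = min(product(*(allowed for _, _, allowed in activities)), key=peak_usage)
print(best, peak_usage(best))  # → (0, 3, 1) 4
```

Shifting the two non-critical activities apart lowers the resource peak from 5 to 4; the combinatorial blow-up of this enumeration is exactly why the paper resorts to an approximate algorithm for larger networks.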
Procedia PDF Downloads 302
25444 Performance Analysis and Optimization for Diagonal Sparse Matrix-Vector Multiplication on Machine Learning Unit
Authors: Qiuyu Dai, Haochong Zhang, Xiangrong Liu
Abstract:
Diagonal sparse matrix-vector multiplication is a well-studied topic in the fields of scientific computing and big data processing. However, when diagonal sparse matrices are stored in DIA format, there can be a significant number of padded zero elements and scattered points, which can degrade the performance of the current DIA kernel and lead to excessive consumption of computational and memory resources. To address these issues, the authors propose the DIA-Adaptive scheme and its kernel, which leverages the parallel instruction sets on the MLU. The authors analyze the effect of allocating varying numbers of threads, clusters, and hardware architectures on the performance of SpMV using different formats. The experimental results indicate that the proposed DIA-Adaptive scheme performs well and offers excellent parallelism.Keywords: adaptive method, DIA, diagonal sparse matrices, MLU, sparse matrix-vector multiplication
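The DIA layout and its zero padding can be made concrete with a minimal matrix-vector product; the storage convention below (row-aligned diagonals) is one common choice, not necessarily the kernel layout used on the MLU.

```python
import numpy as np

# Minimal sketch of a DIA-format sparse matrix-vector product y = A @ x.
# Convention assumed here: data[d, i] stores A[i, i + offsets[d]], with
# out-of-range slots padded with zeros (the padding the abstract mentions).
def dia_matvec(data, offsets, x):
    n = len(x)
    y = np.zeros(n)
    for diag, k in zip(data, offsets):
        lo, hi = max(0, -k), min(n, n - k)  # rows where column i + k is valid
        i = np.arange(lo, hi)
        y[i] += diag[i] * x[i + k]
    return y

# 4x4 tridiagonal example.
offsets = [-1, 0, 1]
data = np.array([
    [0, 8, 9, 10],   # sub-diagonal   (k = -1), row 0 slot padded
    [1, 2, 3, 4],    # main diagonal  (k =  0)
    [5, 6, 7, 0],    # super-diagonal (k = +1), row 3 slot padded
], dtype=float)
x = np.array([1.0, 1.0, 1.0, 1.0])
print(dia_matvec(data, offsets, x))  # → [ 6. 16. 19. 14.]
```

Matrices whose diagonals are short or scattered force many such padded slots, which is the overhead the DIA-Adaptive scheme is designed to avoid.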
Procedia PDF Downloads 134
25443 Constructing Orthogonal De Bruijn and Kautz Sequences and Applications
Authors: Yaw-Ling Lin
Abstract:
A de Bruijn graph of order k is a graph whose vertices represent all length-k sequences, with edges joining pairs of vertices whose sequences have the maximum possible overlap (length k−1). Every Hamiltonian cycle of this graph defines a distinct, minimum-length de Bruijn sequence containing all k-mers exactly once. A Kautz sequence is the sequence of minimal length that produces all possible length-k sequences, with the restriction that every two consecutive letters in the sequence must be different. A collection of de Bruijn/Kautz sequences is orthogonal if any two sequences differ maximally in composition; that is, the maximum length of their common substring is k. In this paper, we discuss how such a collection of (maximal) orthogonal de Bruijn/Kautz sequences can be constructed, and we use the algorithm to build a web application service for synthesized DNA and other related biomolecular sequences.Keywords: biomolecular sequence synthesis, de Bruijn sequences, Eulerian cycle, Hamiltonian cycle, Kautz sequences, orthogonal sequences
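A single de Bruijn sequence can be generated with the classic FKM (Fredricksen–Kessler–Maiorana) construction; this only illustrates the object being collected, not the paper's orthogonality algorithm, and the DNA alphabet and k = 3 are arbitrary examples.

```python
# FKM construction of a cyclic de Bruijn sequence containing every length-k
# string over the given alphabet exactly once.
def de_bruijn(alphabet, k):
    n = len(alphabet)
    a = [0] * (n * k)
    seq = []

    def db(t, p):
        if t > k:
            if k % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, n):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(alphabet[i] for i in seq)

s = de_bruijn("ACGT", 3)   # cyclic sequence covering every DNA 3-mer once
print(len(s), s[:16])      # length 4^3 = 64
```

Reading the sequence cyclically with a sliding window of width 3 yields each of the 64 possible 3-mers exactly once, which is the minimality property the abstract states.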
Procedia PDF Downloads 166
25442 Using Collaborative Planning to Develop a Guideline for Integrating Biodiversity into Land Use Schemes
Authors: Sagwata A. Manyike, Hulisani Magada
Abstract:
The South African National Biodiversity Institute is in the process of developing a guideline which sets out how biodiversity can be incorporated into land use (zoning) schemes. South Africa promulgated its Spatial Planning and Land Use Management Act in 2015, and the act seeks, amongst other things, to bridge the gap between spatial planning and land use management within the country. In addition, the act requires local governments to develop wall-to-wall land use schemes for their entire jurisdictions, as they had previously only developed them for their urban areas. At the same time, South Africa has a rich history of systematic conservation planning whereby Critical Biodiversity Areas and Ecological Support Areas have been spatially delineated at a scale appropriate for spatial planning and land use management by local government. South Africa is also in the process of spatially delineating ecological infrastructure, defined as naturally occurring ecosystems which provide valuable services to people such as water and climate regulation, soil formation, and disaster risk reduction. The Biodiversity and Land Use Project, which is funded by the Global Environment Facility through the United Nations Development Programme, is seeking to explore ways in which biodiversity information and ecological infrastructure can be incorporated into the spatial planning and land use management systems of local governments. Towards this end, the Biodiversity and Land Use Project has developed a guideline which sets out how local governments can integrate biodiversity into their land-use schemes, not only as a way of ensuring sustainable development but also as a way of helping them prepare for climate change. In addition, by incorporating biodiversity into land-use schemes, the project is exploring new ways of protecting biodiversity through land use schemes. 
The Guideline for Incorporating Biodiversity into Land Use Schemes was developed as a response to the fact that the National Land Use Scheme Guidelines only indicate that local governments need to incorporate biodiversity, without explaining how this could be achieved. The National Guidelines also fail to specify which biodiversity-related layers are compatible with which land uses, or what the benefits of incorporating biodiversity into the schemes will be for a local government. The guideline, therefore, sets out an argument for why biodiversity is important in land management processes and proceeds to provide a step-by-step guide for how schemes can integrate priority biodiversity layers. This guideline will further be added as an addendum to the National Land Use Guidelines. Although the planning act calls for local governments to have wall-to-wall schemes within 5 years of its enactment, many municipalities will not meet this deadline, and so this guideline will support them in the development of their new schemes.Keywords: biodiversity, climate change, land use schemes, local government
Procedia PDF Downloads 177
25441 A Quadratic Model to Early Predict the Blastocyst Stage with a Time Lapse Incubator
Authors: Cecile Edel, Sandrine Giscard D'Estaing, Elsa Labrune, Jacqueline Lornage, Mehdi Benchaib
Abstract:
Introduction: The use of incubators equipped with time-lapse technology in Artificial Reproductive Technology (ART) allows continuous surveillance. With morphocinetic parameters, algorithms are available to predict the potential outcome of an embryo. However, the different proposed time-lapse algorithms do not take missing data into account, so some embryos cannot be classified. The aim of this work is to construct a predictive model that works even in the case of missing data. Materials and methods: Patients: A retrospective study was performed in the reproductive biology laboratory of the hospital ‘Femme Mère Enfant’ (Lyon, France) between 1 May 2013 and 30 April 2015. Embryos (n = 557) obtained from couples (n = 108) were cultured in a time-lapse incubator (Embryoscope®, Vitrolife, Goteborg, Sweden). Time-lapse incubator: The morphocinetic parameters obtained during the first three days of embryo life were used to build the predictive model. Predictive model: A quadratic regression was performed between the number of cells and time: N = a·T² + b·T + c, where N is the number of cells at time T (in hours). The regression coefficients were calculated with Excel software (Microsoft, Redmond, WA, USA); a program in Visual Basic for Applications (VBA) (Microsoft) was written for this purpose. The quadratic equation was used to find a value that allows prediction of blastocyst formation: the synthetize value. The area under the curve (AUC) obtained from the ROC curve was used to assess the performance of the regression coefficients and the synthetize value. A cut-off value was calculated for each regression coefficient and for the synthetize value so as to obtain two groups for which the difference in blastocyst formation rate across the cut-off was maximal. The data were analyzed with SPSS (IBM, Chicago, IL, USA). Results: Among the 557 embryos, 79.7% had reached the blastocyst stage. 
The synthetize value corresponds to the value calculated with the time value set to 99, for which the highest AUC was obtained. The AUC was 0.648 (p < 0.001) for regression coefficient ‘a’, 0.363 (p < 0.001) for regression coefficient ‘b’, 0.633 (p < 0.001) for regression coefficient ‘c’, and 0.659 (p < 0.001) for the synthetize value. The results are presented as follows: blastocyst formation rate under the cut-off value versus blastocyst formation rate above the cut-off value. For regression coefficient ‘a’ the optimal cut-off value was -1.14×10⁻³ (61.3% versus 84.3%, p < 0.001); for regression coefficient ‘b’, 0.26 (83.9% versus 63.1%, p < 0.001); for regression coefficient ‘c’, -4.4 (62.2% versus 83.1%, p < 0.001); and for the synthetize value, 8.89 (58.6% versus 85.0%, p < 0.001). Conclusion: This quadratic regression makes it possible to predict the outcome of an embryo even in the case of missing data. The three regression coefficients and the synthetize value could represent the identity card of an embryo. Regression coefficient ‘a’ represents the acceleration of cell division, and regression coefficient ‘b’ represents the speed of cell division. We could hypothesize that regression coefficient ‘c’ represents the intrinsic potential of an embryo. This intrinsic potential could depend on the oocyte from which the embryo originated. These hypotheses should be confirmed by studies analyzing the relationship between regression coefficients and ART parameters.Keywords: ART procedure, blastocyst formation, time-lapse incubator, quadratic model
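The quadratic fit and the extrapolated synthetize value can be sketched as follows; the annotation times and cell counts are invented for illustration and are not data from the study.

```python
import numpy as np

# Hypothetical morphokinetic record for one embryo: annotation times (hours)
# and cell counts over the first ~3 days. Missing observations simply shorten
# the arrays; the least-squares fit N = a*T^2 + b*T + c still goes through,
# which is the robustness-to-missing-data property the abstract claims.
t = np.array([0.0, 24.0, 27.0, 36.0, 48.0, 60.0, 68.0])
n_cells = np.array([1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 9.0])

a, b, c = np.polyfit(t, n_cells, deg=2)  # regression coefficients

# "Synthetize value": the fitted cell number extrapolated to T = 99, the time
# value the authors report as yielding the highest AUC.
synthetize = a * 99**2 + b * 99 + c
print(f"a={a:.5f}, b={b:.4f}, c={c:.3f}, synthetize={synthetize:.2f}")
```

The triple (a, b, c) plus the synthetize value then forms the per-embryo "identity card" that the cut-off analysis operates on.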
Procedia PDF Downloads 306
25440 Multi-Source Data Fusion for Urban Comprehensive Management
Authors: Bolin Hua
Abstract:
In city governance, various data are involved, including city component data, demographic data, housing data, and all kinds of business data. These data reflect different aspects of people, events, and activities. Data generated by various systems differ in form, and their sources differ because they may come from different sectors. In order to reflect one or several facets of an event or rule, data from multiple sources need to be fused together. Data from different sources, collected in different ways, raise several issues which need to be resolved. Problems of data fusion include data update and synchronization, data exchange and sharing, file parsing and entry, duplicate data and its comparison, and resource catalogue construction. Governments adopt statistical analysis, time series analysis, extrapolation, monitoring analysis, value mining, and scenario prediction in order to achieve pattern discovery, law verification, root cause analysis, and public opinion monitoring. The result of multi-source data fusion is a uniform central database, which includes people data, location data, object data, institution data, business data, and space data. Metadata must be available to be referred to and read when an application needs to access, manipulate, and display the data. Uniform metadata management ensures the effectiveness and consistency of data in the processes of data exchange, data modeling, data cleansing, data loading, data storing, data analysis, data search, and data delivery.Keywords: multi-source data fusion, urban comprehensive management, information fusion, government data
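The record-level fusion step can be sketched as below; the source names, field names, and the "most recently updated value wins" rule for duplicate fields are illustrative assumptions, not the paper's design.

```python
from datetime import date

# Two toy source systems reporting on the same entity.
population_reg = [{"id": "P1", "name": "Li Wei", "updated": date(2020, 1, 5)}]
housing_reg = [{"id": "P1", "address": "12 Chaoyang Rd", "updated": date(2021, 3, 2)}]

def fuse(*sources):
    """Fuse records from multiple sources into a uniform central store,
    keyed by entity id, with newer records overwriting duplicate fields."""
    by_id = {}
    for source in sources:
        for record in source:
            by_id.setdefault(record["id"], []).append(record)
    central = {}
    for entity_id, records in by_id.items():
        merged = {}
        for record in sorted(records, key=lambda r: r["updated"]):
            merged.update(record)  # later updates win on duplicate fields
        central[entity_id] = merged
    return central

central = fuse(population_reg, housing_reg)
print(central["P1"]["name"], central["P1"]["address"])  # → Li Wei 12 Chaoyang Rd
```

In a real deployment the merge policy, schema mapping, and conflict resolution would be driven by the uniform metadata the abstract describes rather than hard-coded.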
Procedia PDF Downloads 393
25439 Integrating and Evaluating Computational Thinking in an Undergraduate Marine Science Course
Authors: Dana Christensen
Abstract:
Undergraduate students, particularly in the environmental sciences, have difficulty displaying quantitative skills in their laboratory courses. Students spend time sampling in the field, often using new methods, and are expected to make sense of the data they collect. Computational thinking may be used to navigate these new experiences. We developed a curriculum for the marine science department at a small liberal arts college in the Northeastern United States based on previous computational thinking frameworks. This curriculum incorporates marine science data sets with specific objectives and topics selected by the faculty at the college. The curriculum was distributed to all students enrolled in introductory marine science classes as a mandatory module. Two pre-tests and post-tests will be used to quantitatively assess student progress on both content-based and computational principles. Student artifacts are being collected with each lesson and coded for content-specific and computational-specific items in a qualitative assessment. There is an overall gap in marine science education research, especially for curricula that focus on computational thinking and associated quantitative assessment. The curriculum itself, the assessments, and our results may be modified and applied to other environmental science courses due to the nature of the inquiry-based laboratory components that use quantitative skills to understand nature.Keywords: marine science, computational thinking, curriculum assessment, quantitative skills
Procedia PDF Downloads 59
25438 Reviewing Privacy Preserving Distributed Data Mining
Authors: Sajjad Baghernezhad, Saeideh Baghernezhad
Abstract:
Nowadays, given the ever-increasing growth of data, methods such as data mining for extracting knowledge are unavoidable. One of the issues in data mining is the inherent distribution of the data: the organizations creating or receiving such data, whether corporate or non-corporate, usually do not give their information freely to others. Yet there is no guarantee that someone can mine specific data without intruding on the owner's privacy. How data are sent and then gathered across vertically or horizontally partitioned systems depends on the type of privacy preservation employed, which is also executed to improve data privacy. In this study, an attempt was made to comprehensively compare privacy-preserving data methods; general methods such as data randomization and encoding are also examined, along with the strong and weak points of each.Keywords: data mining, distributed data mining, privacy protection, privacy preserving
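The "random data" (perturbation) approach mentioned above can be sketched in a few lines; the values and noise scale are made up, and this is only the simplest additive-noise variant, not a full privacy guarantee.

```python
import random

# Each party adds zero-mean Gaussian noise to its values before sharing, so
# individual records are masked while aggregates remain approximately intact.
random.seed(0)
salaries = [52_000, 61_000, 47_000, 58_000, 55_000, 64_000]
noisy = [s + random.gauss(0, 5_000) for s in salaries]

true_mean = sum(salaries) / len(salaries)
noisy_mean = sum(noisy) / len(noisy)
print(round(true_mean), round(noisy_mean))
```

The miner sees only the noisy values, yet the mean (and other aggregates, given enough records) can still be estimated; the trade-off between noise scale, utility, and disclosure risk is exactly what the surveyed methods differ on.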
Procedia PDF Downloads 525
25437 Improving Comfort and Energy Mastery: Application of a Method Based on Indicators Morpho-Energetic
Authors: Khadidja Rahmani, Nahla Bouaziz
Abstract:
The climate change and economic crisis currently underway are the origin of many issues and problems related, directly or indirectly, to the domains of energy and the environment. Since urban space is the core element and the key to solving the current problem, particular attention is given to it in this study, for the opportunities it provides that can be invested to mitigate, at least in part, this disastrous and worrying situation, especially in the face of the requirements of sustainable development. Indeed, the purpose of this work is to develop a method which will allow us to guide designers towards projects offering a certain degree of thermo-aeraulic comfort while requiring minimal energy consumption. In this context, architects, urban planners, and energy engineers must collaborate jointly to establish a method based on indicators for the improvement of urban environmental quality (thermo-aeraulic comfort), correlated with a reduction in the energy demand of the entities that make up this environment, in areas with a sub-humid climate. In order to test the feasibility of and to validate the method developed in this work, we carried out a series of computer-based simulations. This research allows us to evaluate the impact of the use of the indicators in the design of urban sets on both economic and ecological grounds. Using this method, we show that an urban design which is carefully considered from an energy standpoint can contribute significantly to the preservation of the environment and the reduction of energy consumption.Keywords: comfort, energy consumption, energy mastery, morpho-energetic indicators, simulation, sub-humid climate, urban sets
Procedia PDF Downloads 275
25436 Range Suitability Model for Livestock Grazing in Taleghan Rangelands
Authors: Hossein Arzani, Masoud Jafari Shalamzari, Z. Arzani
Abstract:
This paper follows the FAO model of suitability analysis. Influential factors affecting extensive grazing were determined and converted into a model, with the Taleghan rangelands examined for common types of grazing animals as an example. Advantages and limitations were elicited. All range ecosystem components affect range suitability, but due to time and money restrictions, only the most important and feasible elements were investigated, from which three sub-models were considered: water accessibility, forage production, and erosion sensitivity. Suitable areas at four levels of suitability were calculated using GIS. This suitability modeling approach was adopted due to its simplicity and the minimal time required for transforming and analyzing the data sets. Managers could benefit from the model to devise measures more wisely, cope with the limitations, and enhance rangeland health and condition.Keywords: range suitability, land-use, extensive grazing, modeling, land evaluation
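The overlay of the three sub-models into four suitability levels can be sketched on a tiny raster; the 0-1 scores, the "most limiting factor" combination rule, and the class breaks are illustrative assumptions in the spirit of the FAO framework, not the paper's calibrated values.

```python
import numpy as np

# Three sub-model rasters scored 0-1 on a toy 2x2 grid (values invented).
water = np.array([[0.9, 0.6], [0.3, 0.1]])    # water accessibility score
forage = np.array([[0.8, 0.7], [0.5, 0.2]])   # forage production score
erosion = np.array([[0.9, 0.5], [0.6, 0.3]])  # erosion-sensitivity score

# Combine by the most limiting factor per cell, then class into four
# suitability levels: 3 = highly suitable ... 0 = not suitable.
limiting = np.minimum.reduce([water, forage, erosion])
classes = np.digitize(limiting, bins=[0.25, 0.5, 0.75])
print(classes)
# classes: [[3 2]
#           [1 0]]
```

In a GIS workflow the same per-cell minimum and reclassification would be run over the actual sub-model rasters rather than a hand-written array.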
Procedia PDF Downloads 341
25435 The Right to Data Portability and Its Influence on the Development of Digital Services
Authors: Roman Bieda
Abstract:
The General Data Protection Regulation (GDPR) will come into force on 25 May 2018, creating a new legal framework for the protection of personal data in the European Union. Article 20 of the GDPR introduces a right to data portability. This right allows data subjects to receive the personal data which they have provided to a data controller in a structured, commonly used and machine-readable format, and to transmit this data to another data controller. The right to data portability, by facilitating the transfer of personal data between IT environments (e.g., applications), will also facilitate changing the provider of services (e.g., changing a bank or a cloud computing service provider). Therefore, it will contribute to the development of competition and the digital market. The aim of this paper is to discuss the right to data portability and its influence on the development of new digital services.Keywords: data portability, digital market, GDPR, personal data
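A "structured, commonly used and machine-readable format" in the sense of Article 20 could be as simple as a JSON export; the controller, fields, and values below are entirely hypothetical.

```python
import json

# Hypothetical export of the data a subject provided to one controller,
# ready to be transmitted to another controller and parsed automatically.
subject_data = {
    "subject": "jan.kowalski@example.com",
    "provided_data": {
        "profile": {"name": "Jan Kowalski", "language": "pl"},
        "transactions": [{"date": "2018-05-25", "amount": 120.0}],
    },
}

export = json.dumps(subject_data, indent=2, ensure_ascii=False)
restored = json.loads(export)  # the receiving controller parses it back losslessly
print(restored["provided_data"]["profile"]["name"])  # → Jan Kowalski
```

It is this losslessness and machine-readability that lowers the cost of switching providers, which is the competitive effect the paper discusses.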
Procedia PDF Downloads 473