Search results for: heterogeneous massive data
25860 A Model Architecture Transformation with Approach by Modeling: From UML to Multidimensional Schemas of Data Warehouses
Authors: Ouzayr Rabhi, Ibtissam Arrassen
Abstract:
To provide a complete analysis of the organization and to help decision-making, leaders need relevant data; Data Warehouses (DW) are designed to meet such needs. However, designing a DW is not trivial, and there is no formal method to derive a multidimensional schema from heterogeneous databases. In this article, we present a model-driven approach to the design of data warehouses. We describe a multidimensional metamodel and specify a set of transformations starting from a Unified Modeling Language (UML) metamodel. In this approach, the UML metamodel and the multidimensional one are both considered platform-independent models (PIM). The first metamodel is mapped into the second through transformation rules written in the Query/View/Transformation (QVT) language. The proposal is validated by applying our approach to generate a multidimensional schema for a Balanced Scorecard (BSC) DW. We are interested in the BSC perspectives, which are closely linked to the vision and strategies of an organization.
Keywords: data warehouse, meta-model, model-driven architecture, transformation, UML
Procedia PDF Downloads 160
25859 Jejunostomy and Protective Ileostomy in a Patient with Massive Necrotizing Enterocolitis: A Case Report
Authors: Rafael Ricieri, Rogerio Barros
Abstract:
Objective: This study reports a case of massive necrotizing enterocolitis in a six-month-old patient, requiring an ileostomy and a protective jejunostomy as damage-control measures during the first exploratory laparotomy for massive enterocolitis without a prior diagnosis. Methods: This is a case report of the successful creation and closure of a protective jejunostomy. The small number of publications on this staged and risky surgical measure encouraged the team to study its indication and, especially, the correct time for closing the patient's protective jejunostomy. The main study instrument was the six-month-old patient's medical record. Results: Based on the case described, the time to closure of the protective jejunostomy varies with the degree of compromise of the patient's health status and from individual to individual. Premature closure, or failure to close, can create problems for the patient, since several complications can result, such as new intestinal perforations and hydroelectrolytic disturbances. Despite the risk of new perforations, we suggest closing the protective jejunostomy around the 14th day after the procedure, keeping the patient on broad-spectrum antibiotic therapy and absolute fasting, thus reducing the chance of new intestinal perforations. Together with the closure of the jejunostomy, a gastric tube for decompression is necessary, as are care in an intensive care unit and electrolyte replacement to keep the case stable.
Keywords: jejunostomy, ileostomy, enterocolitis, pediatric surgery, gastric surgery
Procedia PDF Downloads 84
25858 Ultrasonic Degradation of Acephate in Aqueous Solution: Effects of Operating Parameters
Authors: Naina S. Deshmukh, Manik P. Deosarkar
Abstract:
With the worldwide production, consumption, and disposal of pesticides, concerns over their human and environmental health impacts are growing rapidly. Among developing treatment technologies, ultrasonication, an emerging and promising technology for removing pesticides from the aqueous environment, has attracted the attention of many researchers in recent years. The degradation of acephate in aqueous solution was investigated under ultrasound irradiation (20 kHz) in the presence of the heterogeneous catalysts titanium dioxide (TiO2) and zinc oxide (ZnO). The influence of various factors, such as catalyst amount (0.25, 0.5, 0.75, 1.0, 1.25 g/l), initial acephate concentration (100, 200, 300, 400 mg/l), and pH (3, 5, 7, 9, 11), was studied. For acephate at 100 mg/l, the optimum catalyst dose was found to be 1 g/l for TiO2 and 1.25 g/l for ZnO. The maximum percentage degradation of acephate was observed at pH 11 for both TiO2 and ZnO.
Keywords: ultrasonic degradation, acephate, TiO2, ZnO, heterogeneous catalyst
Procedia PDF Downloads 102
25857 New Two-Way Map-Reduce Join Algorithm: Hash Semi Join
Authors: Marwa Hussein Mohamed, Mohamed Helmy Khafagy, Samah Ahmed Senbel
Abstract:
MapReduce is a programming model used to handle massive data sets. Rapid growth in data size makes analysis of such big data one of the most important issues today. MapReduce analyzes data and extracts useful information with two simple programmer-written functions, map and reduce, while providing load balancing, fault tolerance, and high scalability. The most important operation in data analysis is the join, but MapReduce does not support joins directly. This paper explains two two-way MapReduce join algorithms, semi-join and per-split semi-join, and proposes a new algorithm, hash semi-join, which uses a hash table to increase performance by eliminating unused records as early as possible and by matching join keys against the hash table, rather than in a map function, in the second phase. Using hash tables does not noticeably affect memory size, because only the matched records from the second table are saved. Our experimental results show that the hash semi-join algorithm outperforms the other two algorithms as the data size grows from 10 million to 500 million records, with running time increasing according to the number of joined records between the two tables.
Keywords: map reduce, hadoop, semi join, two way join
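To make the two phases concrete, here is a minimal single-process Python sketch of a hash semi-join: the small table's keys are hashed first, and the big table is then streamed against that key set so unused records are dropped early. This is an illustration of the general technique, not the authors' Hadoop implementation; the table layout and names are assumptions.

```python
def build_key_set(small_table, key_idx=0):
    """Phase 1: hash (here, collect into a set) the join keys of the smaller table."""
    return {row[key_idx] for row in small_table}

def hash_semi_join(big_table, small_table, big_key=0, small_key=0):
    """Phase 2: stream the big table and keep only rows whose key appears in
    the key set, so unused records are eliminated as early as possible."""
    keys = build_key_set(small_table, small_key)
    return [row for row in big_table if row[big_key] in keys]

orders = [(1, "pen"), (2, "book"), (3, "ink"), (5, "glue")]
customers = [(1, "Ada"), (3, "Lin")]
print(hash_semi_join(orders, customers))  # [(1, 'pen'), (3, 'ink')]
```

In a real MapReduce job the key set built in phase 1 would be distributed to the mappers of phase 2; the memory point made above holds because only keys and matched records are retained.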
Procedia PDF Downloads 513
25856 From Two-Way to Multi-Way: A Comparative Study for Map-Reduce Join Algorithms
Authors: Marwa Hussien Mohamed, Mohamed Helmy Khafagy
Abstract:
MapReduce is a programming model widely used to extract valuable information from enormous volumes of data, and it is designed to support heterogeneous datasets. Apache Hadoop MapReduce is used extensively to uncover hidden patterns in applications such as data mining and SQL processing. The most important operation for data analysis is the join, but the MapReduce framework does not directly support join algorithms. This paper explains and compares two-way and multi-way MapReduce join algorithms; we also implement the MR join algorithms and report the performance of each phase. Our experimental results show that, among the two-way join algorithms, map-side join and map-merge join take the longest, owing to the sorting required in the preprocessing step, while the reduce-side cascade join takes the longest among the multi-way join algorithms.
Keywords: Hadoop, MapReduce, multi-way join, two-way join, Ubuntu
Procedia PDF Downloads 487
25855 Social Interaction of Gifted Students in a Heterogeneous Educational Environment
Authors: Ekaterina Donii
Abstract:
Understanding the interpersonal competence, social interaction, and peer relationships of gifted children is a concern for specialists in gifted education. To gain more in-depth knowledge of the social functioning of gifted children among peers, we studied the social abilities of gifted children in a heterogeneous academic environment. Eight gifted children (five aged 7, one aged 8.5, one aged 9.5, and one aged 10), their classmates (10 aged 7-8, 12 aged 8.5-9, 16 aged 9.5-10), and their teachers participated in the study. The sociometric questionnaire analysis was based on the method of Rodríguez and Morera to determine the social status of the gifted children among their classmates. The Observational Protocol for Interactions within the Classroom (OPINTEC-v.5) was used to assess the social interactions between the gifted students, their classmates, and the teacher within the educational context. While doing a task together, the gifted children interacted more with popular and with neither-popular-nor-rejected classmates than with rejected classmates. While spending time together, the gifted children interacted more with neither-popular-nor-rejected classmates than with popular or rejected classmates. All the gifted children chose other gifted and non-gifted classmates for interaction, established close relations, and demonstrated good social abilities when interacting with their classmates. The aim of this study was to examine the social interactions, social status, and social networks of gifted students in a regular classroom. The majority of the gifted children were popular among their classmates and had good social skills. We should remain alert, though, for those gifted children who do have social problems, in order to help them function in a regular classroom.
Keywords: gifted, heterogeneous environment, sociometric status, social interactions
Procedia PDF Downloads 356
25854 Digital Joint Equivalent Channel Hybrid Precoding for Millimeterwave Massive Multiple Input Multiple Output Systems
Authors: Linyu Wang, Mingjun Zhu, Jianhong Xiang, Hanyu Jiang
Abstract:
To address the low spectral efficiency of hybrid precoding (HP) in current millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) systems, this paper proposes a digital joint equivalent-channel hybrid precoding algorithm based on iterating the digital precoding matrix. First, the objective function is expanded to obtain the relation equation, and a pseudo-inverse iterative function for the analog precoder is derived using the pseudo-inverse method; this resolves the large increase in computation caused by the rank deficiency of the digital precoding matrix and reduces the overall complexity of hybrid precoding. Second, the analog precoding matrix and the millimeter-wave sparse channel matrix are combined into an equivalent channel, singular value decomposition (SVD) of the equivalent channel yields the digital precoding matrix, and the derived pseudo-inverse iterative function is then used to iteratively regenerate the analog precoding matrix. Simulation results show that the proposed algorithm improves system spectral efficiency by 10-20% compared with other algorithms and also improves stability.
Keywords: mmWave, massive MIMO, hybrid precoding, singular value decomposition, equivalent channel
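The equivalent-channel step can be sketched in a few lines of NumPy. This is a hedged illustration of the general recipe (combine the analog precoder with the channel, then take the SVD of the product to obtain the digital precoder); it is not the paper's full iterative algorithm, and the dimensions, random channel model, and fixed analog precoder are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
Nr, Nt, Nrf, Ns = 4, 64, 4, 2   # rx antennas, tx antennas, RF chains, streams

# Random narrowband channel and a constant-modulus analog precoder
# (in the paper the analog part is refined iteratively; here it is fixed).
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
F_RF = np.exp(1j * rng.uniform(0, 2 * np.pi, (Nt, Nrf))) / np.sqrt(Nt)

# Combine the analog precoder with the channel into an equivalent channel,
# then take its SVD; the leading right singular vectors form the digital precoder.
H_eq = H @ F_RF
_, _, Vh = np.linalg.svd(H_eq)
F_BB = Vh.conj().T[:, :Ns]

# Scale so the hybrid precoder F_RF @ F_BB meets the total power constraint.
F_BB *= np.sqrt(Ns) / np.linalg.norm(F_RF @ F_BB, "fro")
print(np.linalg.norm(F_RF @ F_BB, "fro") ** 2)  # equals Ns
```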
Procedia PDF Downloads 96
25853 Continual Learning Using Data Generation for Hyperspectral Remote Sensing Scene Classification
Authors: Samiah Alammari, Nassim Ammour
Abstract:
When a deep learning model is given a massive number of tasks successively, good performance requires preserving the data of previous tasks so that the model can be retrained for each upcoming classification; otherwise, the model performs poorly due to the catastrophic forgetting phenomenon. To overcome this shortcoming, we developed a successful continual learning deep model for classifying regions of hyperspectral remote sensing images. The proposed neural network architecture encapsulates two trainable subnetworks. The first module adapts its weights by minimizing the discrimination error between the land-cover classes while learning the new task, and the second module learns to replicate the data of previous tasks by discovering the latent data structure of the new task's dataset. We conducted experiments on the Indian Pines hyperspectral (HSI) dataset, and the results confirm the capability of the proposed method.
Keywords: continual learning, data reconstruction, remote sensing, hyperspectral image segmentation
Procedia PDF Downloads 266
25852 Boundary Conditions for 2D Site Response Analysis in OpenSees
Authors: M. Eskandarighadi, C. R. McGann
Abstract:
Experience from past earthquakes shows that local site conditions can significantly affect strong ground motion characteristics such as the frequency content, amplitude, and duration of seismic waves. The most common method for investigating site response is one-dimensional seismic site response analysis, whose crucial assumptions are an infinite horizontal extent of the model and homogeneous soil. One boundary condition that can be applied at the sides is to tie them horizontally, for vertical 1D wave propagation. However, 1D analysis cannot account for the 2D nature of wave propagation when the soil profile is not fully horizontal or has heterogeneity within layers. Two-dimensional seismic site response analysis can overcome these limitations and give a better understanding of local site conditions. Different types of boundary conditions can be applied in 2D site response models, such as tied boundary conditions, massive columns, and free-field boundary conditions. The tied boundary condition has been used in 1D analysis and is useful for 1D wave propagation. Employing two massive columns at the sides is another approach to capturing the 2D nature of wave propagation. The free-field boundary condition simulates the free-field motion that would exist far from the domain of interest; its goal is to minimize unwanted reflections from the sides. This research compares these methods with examples and discusses the details and limitations of each boundary condition.
Keywords: boundary condition, free-field, massive columns, OpenSees, site response analysis, wave propagation
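For readers unfamiliar with how a tied boundary is expressed in OpenSees, the fragment below is a minimal sketch using the OpenSeesPy interface: nodes on the two lateral edges are constrained with equalDOF so they move together. The mesh, coordinates, and node tags are illustrative assumptions, not the authors' model, and a real site response model would also define materials, elements, and the input motion.

```python
import openseespy.opensees as ops

ops.wipe()
ops.model('basic', '-ndm', 2, '-ndf', 2)

# Nodes on the left and right edges of a (toy) soil column mesh.
depths = [0.0, -5.0, -10.0]
for i, z in enumerate(depths):
    ops.node(100 + i, 0.0, z)    # left edge
    ops.node(200 + i, 50.0, z)   # right edge

# Fix the base of both edge columns.
ops.fix(100 + len(depths) - 1, 1, 1)
ops.fix(200 + len(depths) - 1, 1, 1)

# Tied boundary: constrain each right-edge node to the left-edge node at the
# same depth so both edges displace identically, enforcing the purely
# vertical (1D) shear-wave propagation pattern discussed above.
for i in range(len(depths) - 1):   # skip the fixed base pair
    ops.equalDOF(100 + i, 200 + i, 1, 2)
```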
Procedia PDF Downloads 183
25851 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis
Authors: C. B. Le, V. N. Pham
Abstract:
In modern data analysis, multi-source data appears more and more often in real applications, and multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide information about different aspects of the data, so linking multiple sources is essential to improve clustering performance. In practice, however, multi-source data is often heterogeneous, uncertain, and large, which poses a major challenge. Ensembles are versatile machine learning models in which learning techniques can work in parallel on big data, and clustering ensembles have been shown to outperform standard clustering algorithms in terms of accuracy and robustness. However, most traditional clustering ensemble approaches are based on a single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis: a fuzzy-optimized multi-objective clustering ensemble called FOMOCE. First, a clustering ensemble mathematical model based on the structure of the multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. Experiments performed on standard sample data sets demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble methods and multi-source clustering methods.
Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering
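FOMOCE itself is not reproduced here, but the baseline idea of a clustering ensemble over several data sources can be sketched with the standard co-association (evidence accumulation) scheme below: base clusterings from each source vote on whether two objects belong together, and a consensus partition is cut from the accumulated votes. The member algorithm (k-means), the consensus step, and all parameters are assumptions chosen for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def coassociation_ensemble(views, k, n_members=10, seed=0):
    """Baseline clustering ensemble: run k-means several times on each data
    source, accumulate how often each pair of objects lands in the same
    cluster, then cut a hierarchical tree on the co-association matrix."""
    n = views[0].shape[0]
    coassoc = np.zeros((n, n))
    rng = np.random.default_rng(seed)
    for X in views:                              # one view per data source
        for _ in range(n_members):
            km = KMeans(n_clusters=k, n_init=5,
                        random_state=int(rng.integers(1_000_000)))
            labels = km.fit_predict(X)
            coassoc += labels[:, None] == labels[None, :]
    coassoc /= n_members * len(views)
    dist = squareform(1.0 - coassoc, checks=False)
    return fcluster(linkage(dist, method="average"), k, criterion="maxclust")

# Two synthetic "sources" describing the same 100 objects.
rng = np.random.default_rng(1)
base = np.repeat(np.arange(4.0), 25)
views = [rng.normal(base[:, None], 0.3, size=(100, 3)) for _ in range(2)]
print(coassociation_ensemble(views, k=4))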
Procedia PDF Downloads 189
25850 Supercritical Methanol for Biodiesel Production from Jatropha Oil in the Presence of Heterogeneous Catalysts
Authors: Velid Demir, Mesut Akgün
Abstract:
Lanthanum oxide and zinc oxide were synthesized and then loaded at 6 wt% onto γ-Al₂O₃ using the wet impregnation method. The samples were calcined at 900 °C to ensure a coherent structure with high catalytic performance. Characterization of the catalysts was carried out by X-ray diffraction (XRD) and Fourier-transform infrared spectroscopy (FT-IR). The effect of the catalysts on the biodiesel content obtained from jatropha oil was studied under supercritical conditions. The results showed that ZnO/γ-Al₂O₃ was the superior catalyst for jatropha oil, giving 98.05% biodiesel under reaction conditions of 7 min reaction time, 1:40 oil-to-methanol molar ratio, 6 wt% catalyst loading, 90 bar reaction pressure, and 300 °C reaction temperature, compared to 95.50% with La₂O₃/γ-Al₂O₃ under the same conditions. For this study, ZnO/γ-Al₂O₃ was the most suitable catalyst on performance and cost considerations.
Keywords: biodiesel, heterogeneous catalyst, jatropha oil, supercritical methanol, transesterification
Procedia PDF Downloads 88
25849 DeepOmics: Deep Learning for Understanding Genome Functioning and the Underlying Genetic Causes of Disease
Authors: Vishnu Pratap Singh Kirar, Madhuri Saxena
Abstract:
Advances in sequence data generation technologies are churning out voluminous omics data and posing a massive challenge for annotating biological functional features. With so much data available, using machine learning methods and tools to make novel inferences has become an obvious choice. Machine learning methods have been successfully applied in many disciplines, including computational biology and bioinformatics, and researchers in computational biology are keen to develop novel machine learning frameworks to classify the huge amounts of biological data. In this proposal, we plan to employ novel machine learning approaches to help understand how apparently innocuous mutations (in intergenic DNA and at synonymous sites) cause disease. We are also interested in discovering novel functional sites in the genome, mutations in which can affect a phenotype of interest.
Keywords: genome wide association studies (GWAS), next generation sequencing (NGS), deep learning, omics
Procedia PDF Downloads 97
25848 Model Observability – A Monitoring Solution for Machine Learning Models
Authors: Amreth Chandrasehar
Abstract:
Machine Learning (ML) models are developed and run in production to solve various use cases that help organizations be more efficient and drive the business. But failures here come at a massive development cost and in lost business opportunities: according to a Gartner report, 85% of data science projects fail, and one contributing factor is not paying attention to model observability. Model observability helps developers and operators pinpoint model performance issues and data drift, and helps identify the root cause of issues. This paper provides insights into incorporating model observability into model development and operationalizing it in production.
Keywords: model observability, monitoring, drift detection, ML observability platform
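Drift detection, one pillar of model observability, can be illustrated with a simple two-sample test on a feature distribution. The sketch below uses SciPy's Kolmogorov-Smirnov test; the synthetic distributions, threshold, and function names are assumptions for the example, not a description of any particular observability platform.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference, live, alpha=0.01):
    """Two-sample Kolmogorov-Smirnov test: flag drift when the live feature
    distribution differs significantly from the training-time distribution."""
    stat, p_value = ks_2samp(reference, live)
    return p_value < alpha, stat, p_value

rng = np.random.default_rng(1)
ref = rng.normal(0.0, 1.0, 5000)    # feature as seen during training
live = rng.normal(0.4, 1.0, 5000)   # shifted production traffic
drifted, stat, p = detect_drift(ref, live)
print(f"drift={drifted}, KS={stat:.3f}, p={p:.2e}")
```

In practice such a check would run per feature on a schedule, with the alert threshold tuned to tolerate normal seasonal variation.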
Procedia PDF Downloads 112
25847 Real-Time Sensor Fusion for Mobile Robot Localization in an Oil and Gas Refinery
Authors: Adewole A. Ayoade, Marshall R. Sweatt, John P. H. Steele, Qi Han, Khaled Al-Wahedi, Hamad Karki, William A. Yearsley
Abstract:
Understanding the behavioral characteristics of sensors is a crucial step in fusing data from several sensors of different types. This paper introduces a practical, real-time approach to integrating heterogeneous sensor data to achieve higher accuracy in localizing a mobile robot than would be possible with any individual sensor. We use this approach in both indoor and outdoor environments, and it is especially appropriate for environments like oil and gas refineries because of their sparse and featureless nature. We have studied the contribution of each sensor's data to the overall accuracy achieved by the fusion process. A sequential-update Extended Kalman Filter (EKF) with validation gates was used to integrate GPS data, compass data, WiFi data, Inertial Measurement Unit (IMU) data, vehicle velocity, and pose estimates from a fiducial marker system. Results show that the approach can enable a mobile robot to navigate autonomously in any environment using a priori information.
Keywords: inspection mobile robot, navigation, sensor fusion, sequential update extended Kalman filter
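The core of a sequential-update filter with validation gates can be sketched compactly: each sensor's measurement is applied one at a time, and an innovation chi-square test rejects outliers before they corrupt the state. The sketch below uses a linear measurement model for brevity (a full EKF would substitute each sensor's measurement Jacobian); the matrices, gate threshold, and names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gated_sequential_update(x, P, measurements, gate=9.21):
    """Apply each sensor's measurement one at a time (sequential update);
    a chi-square validation gate on the innovation rejects outliers.
    gate=9.21 is the 99% chi-square bound for a 2-D measurement."""
    for z, H, R in measurements:
        y = z - H @ x                      # innovation
        S = H @ P @ H.T + R                # innovation covariance
        d2 = float(y @ np.linalg.solve(S, y))
        if d2 > gate:                      # outside the gate: discard
            continue
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ y
        P = (np.eye(len(x)) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
H, R = np.eye(2), 0.1 * np.eye(2)
meas = [(np.array([0.9, 1.1]), H, R),
        (np.array([25.0, -30.0]), H, R)]   # second measurement is an outlier
x, P = gated_sequential_update(x, P, meas)
print(x)  # pulled toward the first measurement; the outlier was gated out
```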
Procedia PDF Downloads 472
25846 A Comparative Study on Supercritical CO2 and Water as Working Fluids in a Heterogeneous Geothermal Reservoir
Authors: Musa D. Aliyu, Ouahid Harireche, Colin D. Hills
Abstract:
The inability of supercritical CO2 to dissolve and transport mineral species from the geothermal reservoir to the fracture apertures, together with other parameters important in heat mining, makes it an attractive substance for heat extraction from hot dry rock (HDR). In other words, the thermodynamic efficiency of HDR reservoirs also increases if supercritical CO2 is circulated at temperatures in excess of 374 °C, without the drawbacks connected with silica dissolution. Studies have shown that circulation of supercritical CO2 in homogeneous geothermal reservoirs is quite encouraging in comparison to that of water. This paper investigates the aforementioned processes for the heterogeneous geothermal reservoir located at the Soultz site (France). The multiphysics finite element package COMSOL, with an interface for coupling the different processes encountered in geothermal reservoir stimulation, is used. A fully coupled numerical model is developed to study the thermal and hydraulic processes and to predict the long-term operation of the basic reservoir parameters that give optimum energy production. The results reveal that the temperature of the SCCO2 at the production outlet is higher than that of water in long-term stimulation, and temperature is an essential ingredient in rating energy production. The mass flow rate of the SCCO2 is also far more favourable than that of water.
Keywords: FEM, HDR, heterogeneous reservoir, stimulation, supercritical CO2
Procedia PDF Downloads 385
25845 Use of Machine Learning in Data Quality Assessment
Authors: Bruno Pinto Vieira, Marco Antonio Calijorne Soares, Armando Sérgio de Aguiar Filho
Abstract:
Nowadays, a massive amount of information is produced by different data sources, including mobile devices and transactional systems. In this scenario, concerns arise about how to establish and maintain data quality, which is now treated as a product to be defined, measured, analyzed, and improved to meet the needs of the consumers who use these data in decision making and company strategies. Information of low quality can lead to issues that consume time and money, such as missed business opportunities, inadequate decisions, and bad risk management actions. Identifying, evaluating, and selecting data sources of adequate quality for a given need has become a costly task for users, since the sources do not provide information about their quality. Traditional data quality control methods are based on user experience or business rules, which limits performance and slows down the process with less-than-desirable accuracy. Using advanced machine learning algorithms, it is possible to take advantage of computational resources to overcome these challenges and add value for companies and users. In this study, machine learning is applied to data quality analysis on different datasets, comparing the performance of the techniques across the dimensions of quality assessment. As a result, we created a ranking of the approaches used, as well as a system able to carry out data quality assessment automatically.
Keywords: machine learning, data quality, quality dimension, quality assessment
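To make the idea of quality dimensions concrete, the sketch below scores a table on a few common dimensions; per-dimension scores like these are the kind of features a learning model could be trained on, or by which data sources could be ranked. The dimension set, formulas, and names are illustrative assumptions, not the study's actual feature set.

```python
import pandas as pd

def quality_profile(df: pd.DataFrame) -> dict:
    """Score a dataset on a few common quality dimensions."""
    return {
        "completeness": 1 - df.isna().mean().mean(),         # share of non-null cells
        "uniqueness": len(df.drop_duplicates()) / len(df),   # duplicate-free share
        "consistency": float((df.dtypes != object).mean()),  # crude typed-column proxy
    }

df = pd.DataFrame({"id": [1, 2, 2, 4], "value": [0.1, None, 0.3, 0.4]})
print(quality_profile(df))
```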
Procedia PDF Downloads 148
25844 Integrative Omics-Portrayal Disentangles Molecular Heterogeneity and Progression Mechanisms of Cancer
Authors: Binder Hans
Abstract:
Cancer is no longer seen as solely a genetic disease in which genetic defects such as mutations and copy number variations affect gene regulation and eventually lead to aberrant cell functioning that can be monitored by transcriptome analysis. It has become obvious that epigenetic alterations represent a further important layer of (de-)regulation of gene activity. For example, aberrant DNA methylation is a hallmark of many cancer types, and methylation patterns have been used successfully to subtype cancer heterogeneity. Hence, unraveling the interplay between different omics levels such as the genome, transcriptome, and epigenome is indispensable for a mechanistic understanding of the molecular deregulation causing complex diseases such as cancer. This objective requires powerful downstream integrative bioinformatics methods as an essential prerequisite for discovering the whole-genome mutational, transcriptomic, and epigenomic landscapes of cancer specimens and for characterizing cancer genesis, progression, and heterogeneity. Basic challenges and tasks arise 'beyond sequencing' because of the size and complexity of the data and the need to search for hidden structures, to mine for biological function, and to build systems biology conceptual models that deduce developmental interrelations between different cancer states. These tasks are tightly related to cancer biology as an (epi-)genetic disease giving rise to aberrant genomic regulation under micro-environmental control and clonal evolution, which leads to heterogeneous cellular states. Machine learning algorithms such as self-organizing maps (SOMs) represent one interesting option for tackling these bioinformatics tasks. The SOM method recognizes complex patterns in large-scale data generated by high-throughput omics technologies. It portrays molecular phenotypes by generating individualized, easy-to-interpret images of the data landscape, combined with comprehensive analysis options. Our image-based, reductionist machine learning methods provide one interesting perspective on how to deal with massive data in the study of complex diseases, applied here to gliomas, melanomas, and colon cancer at the molecular level. As an important new challenge, we address the combined portrayal of different omics data, such as genome-wide genomic, transcriptomic, and methylomic data. The integrative-omics portrayal approach is based on joint training on the data, and it provides separate personalized data portraits for each patient and data type that can be analyzed by visual inspection as one option. The new method enables an integrative genome-wide view of the omics data types and the underlying regulatory modes. It is applied to high- and low-grade gliomas and to melanomas, where it disentangles transversal and longitudinal molecular heterogeneity in terms of distinct molecular subtypes and progression paths with prognostic impact.
Keywords: integrative bioinformatics, machine learning, molecular mechanisms of cancer, gliomas and melanomas
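As a hedged illustration of the mechanism behind SOM portrayal, the minimal NumPy trainer below shows how a grid of units self-organizes to the data: each sample pulls its best-matching unit and that unit's grid neighbours toward it. The grid size, learning schedules, and names are assumptions for the example; the authors' portrayal pipeline does far more (feature maps per patient, module detection, downstream analysis).

```python
import numpy as np

def train_som(X, rows=10, cols=10, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map: each grid unit holds a weight vector;
    samples pull the best-matching unit and its neighbours toward them."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((rows, cols, X.shape[1]))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    n_steps, t = epochs * len(X), 0
    for _ in range(epochs):
        for x in rng.permutation(X):
            frac = t / n_steps
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
            d = ((W - x) ** 2).sum(-1)                    # distance to each unit
            bmu = np.unravel_index(d.argmin(), d.shape)   # best-matching unit
            h = np.exp(-((grid - bmu) ** 2).sum(-1) / (2 * sigma ** 2))
            W += lr * h[..., None] * (x - W)              # neighbourhood update
            t += 1
    return W

X = np.vstack([np.random.default_rng(s).normal(m, 0.2, (100, 5))
               for s, m in enumerate((0.0, 1.0, 2.0))])
W = train_som(X, rows=8, cols=8)   # W now maps the data landscape onto an 8x8 image
```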
Procedia PDF Downloads 148
25843 Hybrid Weighted Multiple Attribute Decision Making Handover Method for Heterogeneous Networks
Authors: Mohanad Alhabo, Li Zhang, Naveed Nawaz
Abstract:
Small cell deployment in 5G networks is a promising technology for enhancing capacity and coverage. However, unplanned deployment may cause high interference levels and a high number of unnecessary handovers, which in turn increase the signalling overhead. To guarantee service continuity, minimize unnecessary handovers, and reduce signalling overhead in heterogeneous networks, it is essential to model the handover decision problem properly. In this paper, we model the handover decision with a Multiple Attribute Decision Making (MADM) method, specifically the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), and propose a hybrid TOPSIS method to control handover in heterogeneous networks. The proposed method adopts a hybrid weighting that combines entropy and standard-deviation weights, with a hybrid weighting control parameter introduced to balance the impact of the two weightings on the network selection process and on overall performance. Our proposed method shows better performance, in terms of the number of frequent handovers and the mean user throughput, than existing methods.
Keywords: handover, HetNets, interference, MADM, small cells, TOPSIS, weight
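The hybrid weighting idea can be sketched in NumPy: entropy weights and standard-deviation weights are blended by a control parameter alpha before the usual TOPSIS ranking. The attribute set, example numbers, and alpha value are assumptions for illustration, not the paper's evaluation setup.

```python
import numpy as np

def hybrid_topsis(D, benefit, alpha=0.5):
    """Rank candidate cells with TOPSIS; attribute weights blend entropy
    weights and standard-deviation weights via the control parameter alpha."""
    R = D / np.linalg.norm(D, axis=0)              # vector-normalized matrix
    P = D / D.sum(axis=0)
    entropy = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(len(D))
    w_ent = (1 - entropy) / (1 - entropy).sum()    # entropy weights
    w_std = D.std(axis=0) / D.std(axis=0).sum()    # standard-deviation weights
    w = alpha * w_ent + (1 - alpha) * w_std        # hybrid weighting
    V = R * w
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)            # closeness: higher is better

# Candidate cells x attributes: throughput (Mbps), SINR (dB), load (0-1).
D = np.array([[40.0, 12.0, 0.3], [55.0, 9.0, 0.7], [30.0, 15.0, 0.2]])
print(hybrid_topsis(D, benefit=np.array([True, True, False])))
```

Sweeping alpha between 0 and 1 reproduces the trade-off the abstract describes between the two weighting schemes.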
Procedia PDF Downloads 149
25842 Heterogeneous and Homogeneous Photocatalytic Degradation of Acid Orange 10 in Aqueous Solution
Authors: Merouani Djilali Redha, F. Abdelmalek, A. A. Addou
Abstract:
Advanced oxidation processes (AOPs) using homogeneous photocatalysis (Fenton and photo-Fenton reactions) and heterogeneous photocatalysis (TiO2 and ZnO) were investigated for the degradation of the commercial azo dye Orange G in wastewater. The Fenton and photo-Fenton experimental conditions were: hydrogen peroxide concentration 10⁻² M, ferrous ion concentration 5×10⁻⁴ M, pH 2.8-3, and UV lamp power 6 W. Adding more ferrous ions enhanced the oxidation rate of the H2O2/Fe2+ and UV/H2O2/Fe2+ processes. The optimum catalyst loading was found to be 2.0 g L⁻¹ in our case for both TiO2 and ZnO. A comparative study showed that these two catalysts have comparable reactivity, and the photocatalytic degradation follows pseudo-first-order kinetics. The degradation trends followed the order: UV365/Fenton > UV365/TiO2 > solar Fenton > solar TiO2 > Fenton ~ UV365/ZnO. Among AOPs, processes using Fenton-type reagents are relatively cheap and easy to operate and maintain. Moreover, the UV365/Fenton process was shown to be effective in the treatment of the OG dye, which was degraded following second-order kinetics with a rate constant of 0.041×10⁶ L M⁻¹ min⁻¹. The degradation was followed by spectrophotometry, chemical oxygen demand (COD) measurements, and high-performance liquid chromatography (HPLC) analyses; some aromatic and aliphatic degradation products were identified. A mechanism for the degradation of Orange G by UV/Fenton was also proposed.
Keywords: AOPs, homogeneous catalysis, heterogeneous catalysis, acid orange 10, hydroxyl radical
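For reference, a rate constant in units of L M⁻¹ min⁻¹ corresponds to the standard second-order rate law; one common integrated form (assuming the rate is second order in the dye concentration [OG]) is textbook kinetics, added here only for clarity:

```latex
-\frac{d[\mathrm{OG}]}{dt} = k_2\,[\mathrm{OG}]^2
\qquad\Longrightarrow\qquad
\frac{1}{[\mathrm{OG}]_t} = \frac{1}{[\mathrm{OG}]_0} + k_2\,t,
\qquad k_2 = 0.041\times 10^{6}\ \mathrm{L\,mol^{-1}\,min^{-1}}
```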
Procedia PDF Downloads 410
25841 Oxidation of Alcohols Types Using Nano-Graphene Oxide (NGO) as Heterogeneous Catalyst
Authors: Ali Gharib, Leila Vojdanifard, Nader Noroozi Pesyan, Mina Roshani
Abstract:
We describe an efficient method for the oxidation of alcohols to the corresponding aldehydes and ketones with hydrogen peroxide as the oxidizing agent under reflux conditions. Nano-graphene oxide (NGO) was used as a heterogeneous catalyst, and its activity was compared with that of various other catalysts; it was found to be an excellent catalyst for the oxidation of alcohols. The effects of various parameters, including catalyst type, the nature of the substituent on the alcohol, and temperature, on the yield of the carboxylic acids were studied. Nano-graphene oxide was synthesized by the oxidation of graphite powders. This nanocatalyst was found to be highly efficient in this reaction, and products were obtained in good to excellent yields. The recovered nano-catalyst was successfully reused for several runs without significant loss of catalytic activity.
Keywords: nano-graphene oxide, oxidation, aldehyde, ketone, catalyst
Procedia PDF Downloads 424
25840 Application of Simulation of Discrete Events in Resource Management of Massive Concreting
Authors: Mohammad Amin Hamedirad, Seyed Javad Vaziri Kang Olyaei
Abstract:
Project planning and control are among the most critical issues in the management of construction projects. Traditional methods of project planning and control, such as the critical path method or the Gantt chart, are not well suited to planning projects with discrete and repetitive activities, and one of the problems facing project managers is planning the implementation process and allocating its resources optimally. Massive concreting is such a project, with discrete and repetitive activities. This study uses discrete-event simulation to manage resources: finding the optimal number of resources under various constraints, such as limits on machinery, equipment, and human resources, as well as technical, time, and implementation constraints, using analysis of resource consumption rates, project completion time, and critical points of the implementation process. For this purpose, discrete-event simulation is used to model the different stages of implementation. After reviewing the various scenarios, the optimal allocation for each resource is determined so as to reach the maximum utilization rate and to reduce the project completion time or its cost under the existing constraints. The results showed that with optimal resource allocation, the project completion time could be reduced by 90% and the resulting costs by up to 49%. Thus, allocating the optimal number of project resources using this method will reduce both time and cost.
Keywords: simulation, massive concreting, discrete event simulation, resource management
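As a toy illustration of the approach, the SimPy sketch below models repetitive pour segments competing for a limited pump fleet and scans resource levels for the resulting completion time. The durations, segment count, and resource type are invented for the example and are not the study's data.

```python
import simpy

POUR_HOURS = 3        # assumed duration of one concreting segment
SEGMENTS = 12         # assumed number of repetitive pour segments

def pour_segment(env, pump):
    with pump.request() as req:        # queue for a free concrete pump
        yield req
        yield env.timeout(POUR_HOURS)  # pour this segment

def completion_time(n_pumps):
    env = simpy.Environment()
    pump = simpy.Resource(env, capacity=n_pumps)
    for _ in range(SEGMENTS):
        env.process(pour_segment(env, pump))
    env.run()                          # run until all segments are poured
    return env.now

# Scan resource levels: the smallest fleet that meets the deadline wins.
for n in (1, 2, 3, 4):
    print(f"{n} pump(s) -> {completion_time(n)} h")
```

A real study would add crews, mixers, stochastic durations, and cost terms to each scenario before picking the allocation.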
Procedia PDF Downloads 148
25839 Classification of Manufacturing Data for Efficient Processing on an Edge-Cloud Network
Authors: Onyedikachi Ulelu, Andrew P. Longstaff, Simon Fletcher, Simon Parkinson
Abstract:
Widespread interest in 'Industry 4.0' or 'digital manufacturing' has led to significant research requiring the acquisition of data from sensors, instruments, and machine signals. In-depth research then identifies methods for analyzing the massive amounts of data generated before and during manufacture to solve a particular problem. The ultimate goal is for industrial Internet of Things (IIoT) data to be processed automatically to assist with either visualisation or autonomous system decision-making. However, the collection and processing of data in an industrial environment come at a cost, and little research has been undertaken on how to specify optimally what data to capture, transmit, process, and store at the various levels of an edge-cloud network. The first step in this specification is to categorise IIoT data for efficient and effective use. This paper proposes the required attributes and classification for taking manufacturing digital data from various sources and determining the most suitable location for processing on the edge-cloud network. The proposed classification framework minimises overhead in terms of network bandwidth/cost and processing time of machine tool data through efficient decisions on which datasets should be processed at the 'edge' and which sent to a remote server (cloud). A fast-and-frugal heuristic method is implemented for this decision-making. The framework is tested using industrial machine tool case studies on machine productivity and maintenance.
Keywords: data classification, decision making, edge computing, industrial IoT, industry 4.0
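A fast-and-frugal heuristic checks cues one at a time and lets the first discriminating cue decide, which is what makes it cheap to run on every dataset. The sketch below shows one plausible cue ordering for edge/cloud routing; the cues, thresholds, and names are assumptions for illustration, not the paper's decision tree.

```python
def route_dataset(size_mb, latency_critical, needs_fleet_history, bandwidth_mbps):
    """Fast-and-frugal heuristic: cues are checked one at a time and the
    first discriminating cue decides where the dataset is processed."""
    if latency_critical:                   # real-time control or alarms
        return "edge"
    if needs_fleet_history:                # cross-machine, long-horizon analytics
        return "cloud"
    if size_mb / bandwidth_mbps > 60:      # too slow or costly to ship upstream
        return "edge"
    return "cloud"

print(route_dataset(800, False, False, 10))  # 'edge': transfer would dominate
```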
Procedia PDF Downloads 182
25838 A Clustering Algorithm for Massive Texts
Authors: Ming Liu, Chong Wu, Bingquan Liu, Lei Chen
Abstract:
Internet users face massive amounts of textual data every day, and organizing texts into categories can help users dig useful information out of large-scale text collections. Clustering is one of the most promising tools for categorizing texts because it is unsupervised. Unfortunately, most traditional clustering algorithms lose their quality on large-scale text collections, mainly because of the high-dimensional vectors generated from texts. To cluster large-scale text collections effectively and efficiently, this paper proposes a vector reconstruction based clustering algorithm in which only the features that represent the cluster are preserved in the cluster's representative vector. The algorithm alternates between two sub-processes until it converges. One is the partial tuning sub-process, where feature weights are fine-tuned iteratively; to accelerate clustering, an intersection-based similarity measure and its corresponding neuron adjustment function are proposed and implemented in this sub-process. The other is the overall tuning sub-process, where features are reallocated among the clusters and features useless for representing a cluster are removed from its representative vector. Experimental results on three text collections (two small-scale and one large-scale) demonstrate that our algorithm obtains high quality on both small-scale and large-scale collections.
Keywords: vector reconstruction, large-scale text clustering, partial tuning sub-process, overall tuning sub-process
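The abstract does not spell out its intersection-based measure, so the sketch below shows one plausible form, a generalized Jaccard score over only the features two sparse vectors share; treat the formula and names as assumptions that merely illustrate why intersecting sparse vectors is cheap compared with comparing full high-dimensional vectors.

```python
def intersection_similarity(doc, center):
    """Generalized Jaccard over shared features only: features absent from
    either sparse vector never have to be touched, which is what makes an
    intersection-based measure cheap on high-dimensional text vectors."""
    shared = doc.keys() & center.keys()
    num = sum(min(doc[f], center[f]) for f in shared)
    den = sum(doc.values()) + sum(center.values()) - num
    return num / den if den else 0.0

doc = {"cluster": 2, "text": 3, "vector": 1}
center = {"text": 2, "vector": 2, "corpus": 1}
print(intersection_similarity(doc, center))  # 0.375
```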
Procedia PDF Downloads 435
25837 Increase the Ductility of Tall Buildings Using Green Material Bamboo for Earthquake Zone
Authors: Shef Amir Arasy
Abstract:
In 2023, the world's population is around 7.8 billion, having increased significantly over the last 20 years. Every country in the world is experiencing the impacts of climate change, directly and indirectly, yet communities still need to build massive amounts of infrastructure and buildings. A large share of the CO2 emissions that drive climate change comes from cement use in construction. Bamboo is one of the most sustainable materials for reducing carbon emissions, releasing more than 30% more oxygen than an equivalent mass of trees. Bamboo also reaches harvest faster than other sustainable materials, in around 3-4 years. Furthermore, bamboo has a high tensile strength, which can effectively provide the ductility needed to prevent damage to buildings during an earthquake. Using the finite element method, this research analyzes bamboo configurations and connections for tall building structures under different earthquake frequencies and under fire. The aim is to provide designs and connections for bamboo buildings that can be more reliable than concrete structures.
Keywords: bamboo, concrete, ductility, earthquake
Procedia PDF Downloads 72
25836 Research of Database Curriculum Construction under the Environment of Massive Open Online Courses
Authors: Wang Zhanquan, Yang Zeping, Gu Chunhua, Zhu Fazhi, Guo Weibin
Abstract:
Massive Open Online Courses (MOOCs) have recently become the new trend in education. Teaching the Database Principles curriculum in a MOOC environment raises many problems, such as teaching ideas and theories that are out of touch with reality and the question of how to carry out technical teaching and interactive practice; methods for the database course under the MOOC environment are therefore proposed. The research proceeds through three problem-solving stages: posing problems, solving problems, and inductive analysis. The present research covers the design of teaching content, classroom teaching methods, a flipped-classroom teaching mode under the MOOC environment, the learning flow method, and large practice homework. Students' database design ability is systematically improved by these research methods.
Keywords: problem solving-driven, MOOCs, teaching art, learning flow
Procedia PDF Downloads 363
25835 Production of Biodiesel Using Brine Waste as a Heterogeneous Catalyst
Authors: Hilary Rutto, Linda Sibali
Abstract:
In modern times, we constantly search for new and innovative technologies to lift the burden of our extreme energy demand. The overall purpose of biofuel production research is to find an alternative energy source to replace fossil fuels as liquid petroleum products. This experiment looks at the basics of biodiesel production with regard to alternative catalysts. The key factors addressed during the experiments are temperature variation, catalyst addition to the overall reaction, methanol-to-oil ratio, and the impact of agitation on the reaction. Brine samples sourced from nearby plants are evaluated and tested thoroughly, and their key characteristics analysed to verify their use as a possible catalyst in biodiesel production. The one-factor-at-a-time experimental approach was used, and the recycling and reuse characteristics of the heterogeneous catalyst were evaluated.
Keywords: brine sludge, heterogeneous catalyst, biodiesel, one factor
Procedia PDF Downloads 171
25834 Photocatalytic Degradation of Naproxen in Water under Solar Irradiation over NiFe₂O₄ Nanoparticle System
Authors: H. Boucheloukh, S. Rouissa, N. Aoun, M. Beloucifa, T. Sehili, F. Parrino, V. Loddo
Abstract:
To optimize water purification and wastewater treatment by heterogeneous photocatalysis, we used NiFe₂O₄ as a catalyst and solar irradiation as the source of energy. An organic substance present in many industrial effluents was chosen: naproxen ((S)-6-methoxy-α-methyl-2-naphthaleneacetic acid, or 2-(6-methoxynaphthalenyl)propanoic acid), a non-steroidal anti-inflammatory drug. The main objective of this study is to degrade naproxen with an iron-nickel catalyst. The degradation of this organic pollutant by nickel ferrite was studied in a heterogeneous aqueous medium, along with the various factors influencing photocatalysis, such as the substrate concentration and the acidity of the medium. The photocatalytic activity was followed by HPLC-UV and UV-Vis spectroscopy, and a first-order kinetic model appropriately fitted the experimental data. The degradation of naproxen was also studied in the presence of H₂O₂ as well as in plain aqueous solution, and the new hetero-system NiFe₂O₄/oxalic acid is also discussed. The fastest naproxen degradation was obtained with NiFe₂O₄/H₂O₂. First, we detailed the characteristics of the NiFe₂O₄ material, synthesized by the sol-gel method, using various analytical techniques: UV-visible spectrophotometry, X-ray diffraction, FTIR, cyclic voltammetry, and glow discharge optical emission spectroscopy.
Keywords: naproxen, nickelate, photocatalysis, oxalic acid
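The first-order fit mentioned above corresponds to the usual pseudo-first-order model for photocatalytic degradation; this is standard kinetics added for clarity, with C the naproxen concentration and k_app the apparent rate constant:

```latex
-\frac{dC}{dt} = k_{\mathrm{app}}\,C
\qquad\Longrightarrow\qquad
\ln\frac{C_0}{C(t)} = k_{\mathrm{app}}\,t
```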
Procedia PDF Downloads 210
25833 Cytogenetic Characterization of the VERO Cell Line Based on Comparisons with the Subline; Implication for Authorization and Quality Control of Animal Cell Lines
Authors: Fumio Kasai, Noriko Hirayama, Jorge Pereira, Azusa Ohtani, Masashi Iemura, Malcolm A. Ferguson Smith, Arihiro Kohara
Abstract:
The VERO cell line was established in 1962 from normal tissue of an African green monkey, Chlorocebus aethiops (2n=60), and has been used worldwide for toxin screening and as a cell substrate for the production of viral vaccines. The VERO genome was sequenced in 2014; however, the line's cytogenetic features have not been fully characterized, as it contains several chromosome abnormalities and different karyotypes coexist within it. In this study, the VERO cell line (JCRB0111) was compared with one of its sublines. In contrast to the modal chromosome number of 59 in the VERO cell line, the subline had two peaks, at 56 and 58 chromosomes. M-FISH analysis using human probes revealed that the VERO cell line was characterized by a translocation t(2;25) found in all metaphases, which was absent in the subline. Different abnormalities detected only in the subline show that the cell line is heterogeneous, indicating that the subline can change its genomic characteristics during cell culture. The various alterations in the two independent lineages suggest that the genomic changes in both VERO lines can be accounted for by progressive rearrangements during their evolution in culture. The translocations t(5;X) and t(8;14), observed in all metaphases of both cell lines, might have a key role in VERO cells and could be used as genetic markers to identify them. The flow karyotype shows distinct differences from normal, and further analysis of sorted abnormal chromosomes may uncover other characteristics of VERO cells. Given the absence of STR data, cytogenetic data are important in characterizing animal cell lines and can serve as an indicator for their quality control.
Keywords: VERO, cell culture passage, chromosome rearrangement, heterogeneous cells
Procedia PDF Downloads 416
25832 Oxalate Method for Assessing the Electrochemical Surface Area for Ni-Based Nanoelectrodes Used in Formaldehyde Sensing Applications
Authors: S. Trafela, X. Xua, K. Zuzek Rozmana
Abstract:
In this study, we used an accurate and precise method to measure the electrochemically active surface areas (Aecsa) of nickel electrodes. The calculated Aecsa is important for evaluating an electrocatalyst's activity in the electrochemical reactions of different organic compounds. The method involves the electrochemical formation of Ni(OH)₂ and NiOOH in the presence of adsorbed oxalate in alkaline media. The studies were carried out using cyclic voltammetry with polycrystalline nickel as a reference material and with electrodeposited nickel nanowires and homogeneous and heterogeneous nickel films. From the cyclic voltammograms, the charge (Q) values for the formation of the Ni(OH)₂ and NiOOH surface oxides were calculated under various conditions. At sufficiently fast potential scan rates (200 mV s⁻¹), the adsorbed oxalate limits the growth of the surface hydroxides to a monolayer. Although the Ni(OH)₂/NiOOH oxidation peak overlaps with the oxygen evolution reaction, in the reverse scan the NiOOH/Ni(OH)₂ reduction peak is well separated from other electrochemical processes and can easily be integrated. The values of these integrals were used to correlate the experimentally measured charge density with the electrochemically active surface layer. The Aecsa values were calculated to be Aecsa-NiNWs = 4.2066 ± 0.0472 cm² for the nickel nanowires, Aecsa-homNi = 1.7175 ± 0.0503 cm² for the homogeneous film, and Aecsa-hetNi = 2.1862 ± 0.0154 cm² for the heterogeneous film. These results were then used in electrochemical studies of formaldehyde oxidation, in which the nickel nanowires and the heterogeneous and homogeneous nickel films served as simple and efficient sensors for formaldehyde detection. For this purpose, the electrodeposited nickel electrodes were modified in a 0.1 mol L⁻¹ KOH solution to obtain electrochemical activity towards formaldehyde. The electrochemical behavior of formaldehyde oxidation in 0.1 mol L⁻¹ NaOH solution at the surfaces of the modified electrodes was investigated by cyclic voltammetry and chronoamperometry. From the effect of different formaldehyde concentrations (0.001 to 0.1 mol L⁻¹) on the current signal, we derived the catalytic mechanism of formaldehyde oxidation as well as the detection limits and sensitivities of the electrodes. The results indicate that nickel electrodes participate directly in the electrocatalytic oxidation of formaldehyde. In the overall reaction, formaldehyde in alkaline aqueous solution exists predominantly as CH₂(OH)O⁻, which is oxidized to CH₂(O)O⁻. Taking the determined Aecsa values into account, we calculated sensitivities of 7 mA mol L⁻¹ cm⁻² for the nickel nanowires, 3.5 mA mol L⁻¹ cm⁻² for the heterogeneous nickel film, and 2 mA mol L⁻¹ cm⁻² for the homogeneous nickel film. The detection limit was 0.2 mM for the nickel nanowires, 0.5 mM for the porous (heterogeneous) Ni film, and 0.8 mM for the homogeneous Ni film. All of these results make nickel electrodes suitable for further applications.
Keywords: electrochemically active surface areas, nickel electrodes, formaldehyde, electrocatalytic oxidation
Procedia PDF Downloads 161
25831 Turkish Graduate Students' Perceptions of Drop Out Issues in Massive Open Online Courses
Authors: Harun Bozna
Abstract:
The MOOC (massive open online course) is a groundbreaking education platform and a current buzzword in higher education. Although MOOCs offer many valued learning experiences to learners from various universities and institutions, they have considerably higher dropout rates than traditional education: only about 10% of the learners who enroll in a MOOC actually complete the course. Participants' perceptions and a comprehensive analysis of MOOCs have therefore become an essential part of research in this area. This study aims to explore MOOCs in detail for a better understanding of their content and purpose, and primarily of dropout issues. The researcher administered an online questionnaire to capture graduate students' perceptions of their learning experiences in MOOCs and conducted semi-structured oral interviews with some participants. The participants are Turkish graduate-level students pursuing their MA and Ph.D. degrees in various programs. The findings show that participants are most likely to drop out of courses due to lack of time and lack of pressure.
Keywords: distance education, MOOCs, drop out, perception of graduate students
Procedia PDF Downloads 240