Search results for: crowding distance sorting
1510 Self-Supervised Learning for Hate-Speech Identification
Authors: Shrabani Ghosh
Abstract:
Automatic offensive language detection in social media has become a pressing task in today's NLP. Manual offensive language detection is tedious and laborious work, for which automatic methods based on machine learning are the only alternative. Previous works have performed sentiment analysis over social media in supervised, semi-supervised, and unsupervised manners. Domain adaptation in a semi-supervised way has also been explored in NLP, where the source domain and the target domain are different. In domain adaptation, the source domain usually has a large amount of labeled data, while only a limited amount of labeled data is available in the target domain. Pretrained transformers such as BERT and RoBERTa are fine-tuned to perform text classification, with further pre-training on the masked language modeling (MLM) task performed in an unsupervised manner. In previous work, hate speech detection has been explored on Gab.ai, a free speech platform described as a platform of extremists in varying degrees in online social media. In the domain adaptation process, Twitter data is used as the source domain, and Gab data is used as the target domain. The performance of domain adaptation also depends on the cross-domain similarity. Different distance measures such as L2 distance, cosine distance, Maximum Mean Discrepancy (MMD), Fisher Linear Discriminant (FLD), and CORAL have been used to estimate domain similarity. In-domain distances are expected to be small, and between-domain distances are expected to be large. Previous findings show that a pretrained masked language model (MLM) fine-tuned on a mixture of posts from the source and target domains gives higher accuracy. However, the in-domain accuracy of the hate classifier on Twitter data is 71.78%, while its out-of-domain accuracy on Gab data drops to 56.53%. Recently, self-supervised learning has received a lot of attention, as it is particularly applicable when labeled data are scarce. 
A few works have already explored applying self-supervised learning to NLP tasks such as sentiment classification. The self-supervised language representation model ALBERT focuses on modeling inter-sentence coherence and helps downstream tasks with multi-sentence inputs. A self-supervised attention learning approach shows better performance, as it exploits extracted context words in the training process. In this work, a self-supervised attention mechanism is proposed to detect hate speech on Gab.ai. The framework initially classifies the Gab dataset in an attention-based self-supervised manner. In the next step, a semi-supervised classifier is trained on the combination of labeled data from the first step and unlabeled data. The performance of the proposed framework will be compared with the results described earlier and also with optimized outcomes obtained from different optimization techniques.
Keywords: attention learning, language model, offensive language detection, self-supervised learning
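As a rough illustration of the cross-domain similarity measures named in the abstract (cosine distance and MMD), the sketch below compares simulated corpora represented as fixed-length embedding vectors. The data, dimensionality, and linear kernel are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def cosine_distance(a, b):
    # 1 - cosine similarity between two mean document vectors
    return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def linear_mmd(X, Y):
    # Maximum Mean Discrepancy with a linear kernel: squared distance
    # between the empirical feature means of the two samples.
    return float(np.sum((X.mean(axis=0) - Y.mean(axis=0)) ** 2))

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(200, 8))     # e.g. Twitter post embeddings
target = rng.normal(0.5, 1.0, size=(200, 8))     # e.g. Gab post embeddings
in_domain = rng.normal(0.0, 1.0, size=(200, 8))  # held-out source posts

# Between-domain discrepancy should exceed the in-domain one.
print(linear_mmd(source, target) > linear_mmd(source, in_domain))  # True
```

With richer (e.g. RBF) kernels, MMD captures distributional differences beyond the mean, but the mean-difference form above already shows the intended behavior.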
Procedia PDF Downloads 106
1509 Using GIS and AHP Model to Explore the Parking Problem in Khomeinishahr
Authors: Davood Vatankhah, Reza Mokhtari Malekabadi, Mohsen Saghaei
Abstract:
The function of an urban transportation system depends on the existence of the required infrastructure, the appropriate placement of its different components, and the cooperation of these components with each other. Establishing various parking spaces in city neighborhoods, in order to prevent long-term and inappropriate parking of cars in the alleys, is one of the most effective operations for reducing crowding and density in the neighborhoods. Every place with a certain use attracts a number of daily trips from throughout the city. A large percentage of the people visiting these places make these trips in their own cars and therefore need a space to park. The size of this need depends on the use and travel demand of the place. The study aims at investigating the spatial distribution of public parking spaces, determining the effective factors in locating them, and combining these factors in a GIS environment in Khomeinishahr, near the city of Isfahan. Ultimately, the study intends to create an appropriate pattern for locating parking spaces, determine the parking demand of the traffic zones, choose proper places for providing the required public parking spaces, and propose new spots in order to promote the quality and quantity of the city's public parking provision. Regarding method, the study is applied in purpose and analytic-descriptive in nature. The population of the study includes the people of the center of Khomeinishahr, which is located northwest of Isfahan, covers about 5000 hectares, and has a population of 241,318. In order to determine the sample size, the Cochran formula was used, and based on the population of 26,483 people in the studied area, 231 questionnaires were administered. 
Data analysis was carried out using SPSS software. After estimating the required parking space, the effective criteria in locating public parking spaces were first weighted using the Analytic Hierarchy Process in ArcGIS. Then, appropriate places for establishing parking spaces were determined by the fuzzy Ordered Weighted Average (OWA) method. The results indicated that the locating of parking spaces in Khomeinishahr has not been carried out appropriately, and the per capita parking space is not adequate in relation to the population and demand; in addition to the present parking lots, 1,434 parking spaces are needed in the study area each day. There is therefore no logical proportion between parking demand and the number of parking lots in Khomeinishahr.
Keywords: GIS, locating, parking, Khomeinishahr
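The AHP weighting step described above can be sketched as follows: criterion weights are the principal eigenvector of a pairwise-comparison matrix, followed by a consistency check. The three criteria and the comparison values below are hypothetical, not those used in the Khomeinishahr study.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for three siting criteria,
# e.g. distance to trip attractors, land availability, street width
# (Saaty's 1-9 scale; values are illustrative only).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# The principal right eigenvector of A gives the AHP criterion weights.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Consistency ratio; CR < 0.1 is the conventional acceptance threshold
# (random index RI = 0.58 for a 3x3 matrix).
lam_max = eigvals.real[principal]
ci = (lam_max - 3) / (3 - 1)
cr = ci / 0.58
print(weights.round(3), round(cr, 3))
```

In a GIS workflow these weights would then multiply the normalized criterion raster layers before the OWA aggregation.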
Procedia PDF Downloads 309
1508 Robust ANOVA: An Illustrative Study in Horticultural Crop Research
Authors: Dinesh Inamadar, R. Venugopalan, K. Padmini
Abstract:
An attempt has been made in the present communication to elucidate the efficacy of robust ANOVA methods for analyzing horticultural field experimental data in the presence of outliers. The results obtained support the use of robust ANOVA methods, as there was a substantial reduction in the error mean square, and hence in the probability of committing a Type I error, compared to the regular approach.
Keywords: outliers, robust ANOVA, horticulture, Cook's distance, Type I error
Procedia PDF Downloads 390
1507 Epigenomic Analysis of Lgr5+ Stem Cells in Gastrointestinal Tract
Authors: Hyo-Min Kim, Seokjin Ham, Mi-Joung Yoo, Minseon Kim, Tae-Young Roh
Abstract:
The gastrointestinal (GI) tract of most animals, including mice, is a highly compartmentalized epithelium in which distinct regions provide distinct functions. Nevertheless, these epithelia share certain characteristics that enhance immune responses to infections and maintain the barrier function of the intestine. The GI tract epithelium also regenerates, not only under homeostatic conditions but also in response to damage. A full turnover of the murine gastrointestinal epithelium occurs every 4-5 days, a process that is regulated and maintained by a minor population of Lgr5+ adult stem cells conserved at the bottom of crypts throughout the GI tract. According to recent studies, maintenance of these stem cells is in part regulated by epigenetic factors: chromatin vacancy, remodelers, histone variants, and histone modifiers can all affect adult stem cell fate. In this study, the Lgr5-EGFP reporter mouse was used to explore the epigenetic dynamics of Lgr5-positive stem cells in the GI tract. Cells were isolated by fluorescence-activated cell sorting (FACS), and gene expression levels, chromatin accessibility changes, and histone modifications were analyzed. Some notable chromatin-structure-related epigenetic variants were detected. To identify the overall cell-cell interactions inside the stem cell niche, an extensive genome-wide analysis should also follow. From these results, we expect a broader understanding of the cellular niche maintaining stem cells and of the epigenetic barriers acting on conserved stem cells in the GI tract. We expect that our study could provide more evidence of adult stem cell plasticity and more chances to understand each stem cell that takes part in a given organ.
Keywords: adult stem cell, epigenetics, LGR5 stem cell, gastrointestinal tract
Procedia PDF Downloads 229
1506 Probabilistic Graphical Model for the Web
Authors: M. Nekri, A. Khelladi
Abstract:
The World Wide Web is a network with a complex topology, the main properties of which are a power-law degree distribution, a low clustering coefficient, and a small average distance. Modeling the web as a graph allows locating information in little time and consequently offers help in the construction of search engines. Here, we present a model based on already existing probabilistic graphs with all the aforesaid characteristics. This work consists in studying the web in order to understand its structure, which will enable us to model it more easily and propose a possible algorithm for its exploration.
Keywords: clustering coefficient, preferential attachment, small world, web community
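Since preferential attachment appears among the keywords, a minimal sketch of a Barabási-Albert-style growth process, the standard generative mechanism behind power-law degree distributions, may be useful. The parameters are illustrative, not the authors' model.

```python
import random

def preferential_attachment(n, m=2, seed=0):
    """Grow a graph where each new node attaches to m existing nodes
    with probability proportional to their degree (Barabasi-Albert
    style). Returns the edge list."""
    random.seed(seed)
    # Start from a small complete core of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    # Each node id appears in this list once per incident edge, so a
    # uniform draw from it is a degree-proportional draw over nodes.
    targets = [u for e in edges for u in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(targets))
        for u in chosen:
            edges.append((u, new))
            targets.extend([u, new])
    return edges

edges = preferential_attachment(200)
```

Early nodes accumulate far more links than late ones, reproducing the heavy-tailed ("rich get richer") degree distribution observed on the web.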
Procedia PDF Downloads 272
1505 Infrared Detection Device for Accurate Scanning 3D Objects
Authors: Evgeny A. Rybakov, Dmitry P. Starikov
Abstract:
This article contains information about creating a special unit for scanning 3D objects of different natures and materials, for example plastic, plaster, cardboard, wood, metal, etc. The main part of the unit is an infrared transducer, which sends a wave to the object and receives the reflected wave for calculating the distance. After that, a microcontroller sends the data to a PC, and a computer program creates a model for printing in plastic, gypsum, brass, etc.
Keywords: clutch, infrared, microcontroller, plastic, shaft, stage
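Assuming a time-of-flight ranging scheme (the abstract does not spell out the exact calculation), the distance follows from the round-trip delay of the reflected wave: the wave travels to the object and back, so the one-way distance is half the round-trip path.

```python
def distance_from_echo(round_trip_time_s, wave_speed_m_s):
    # Time-of-flight ranging: one-way distance = speed * time / 2,
    # because the measured delay covers the out-and-back path.
    return wave_speed_m_s * round_trip_time_s / 2.0

# Example with a 2 ms round trip at the speed of sound (~343 m/s);
# the numbers are illustrative, not from the described device.
print(distance_from_echo(0.002, 343.0))  # 0.343
```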
Procedia PDF Downloads 443
1504 Gc-ms Data Integrated Chemometrics for the Authentication of Vegetable Oil Brands in Minna, Niger State, Nigeria
Authors: Rasaq Bolakale Salau, Maimuna Muhammad Abubakar, Jonathan Yisa, Muhammad Tauheed Bisiriyu, Jimoh Oladejo Tijani, Alexander Ifeanyi Ajai
Abstract:
Vegetable oils are widely consumed in Nigeria. This has led to the competitive manufacture of various oil brands, which in turn leads to increasing tendencies for fraud, labelling misinformation, and other unwholesome practices. A total of thirty samples, including raw and corresponding branded samples of vegetable oils, were collected. The oils were extracted from raw groundnut, soya bean, and oil palm fruits. The GC-MS data were subjected to the chemometric techniques of PCA and HCA, using version 8.7 of SOLO, the standalone chemometrics software developed by Eigenvector Research Incorporated and powered by PLS Toolbox. The GC-MS fingerprint gave a basis for discrimination, as it reveals four predominant but unevenly distributed fatty acids: hexadecanoic acid methyl ester (10.27-45.21% PA), 9,12-octadecadienoic acid methyl ester (10.9-45.94% PA), 9-octadecenoic acid methyl ester (18.75-45.65% PA), and eicosanoic acid methyl ester (1.19-6.29% PA). In PCA modelling, two PCs were retained, capturing a cumulative variance of 73.15%. The score plots indicated that the palm oil brands are most aligned with raw palm oil. The PCA loading plot reveals signature retention times between 4.0 and 6.0 needed for quality assurance and authentication of the oil samples; these correspond to aromatic hydrocarbon, alcohol, and aldehyde functional groups. The HCA dendrogram, modeled using Euclidean distance with Ward's method, indicated co-equivalent samples. HCA revealed the pair of raw palm oil and the corresponding palm oil brand to be the closest neighbours (±1.62% area difference) based on variance-weighted distance, showing the palm olein brand to be the most authentic. In conclusion, based on the GC-MS data with chemometrics, the authenticity of the branded samples is ranked as: palm oil > soya oil > groundnut oil.
Keywords: vegetable oil, authenticity, chemometrics, PCA, HCA, GC-MS
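The PCA step of such a chemometric workflow can be sketched as follows on a simulated peak-area matrix (the study used SOLO/PLS Toolbox; the data here are random placeholders, not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical peak-area matrix: 30 oil samples x 4 fatty-acid methyl
# esters, with unequal spreads to make autoscaling meaningful.
X = rng.normal(size=(30, 4)) * np.array([10.0, 12.0, 9.0, 2.0])

# Autoscale (mean-center, unit variance), then PCA via SVD.
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
explained = S**2 / np.sum(S**2)   # variance captured per PC

scores = Xs @ Vt[:2].T            # sample coordinates on PC1-PC2
loadings = Vt[:2].T               # variable contributions to PC1-PC2
print(explained.round(3))
```

Samples that cluster together in the score plot (e.g. a branded oil near its raw counterpart) share a fatty-acid profile; the loadings indicate which retention-time peaks drive that separation.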
Procedia PDF Downloads 31
1503 Revealing the Sustainable Development Mechanism of Guilin Tourism Based on Driving Force/Pressure/State/Impact/Response Framework
Authors: Xiujing Chen, Thammananya Sakcharoen, Wilailuk Niyommaneerat
Abstract:
China's tourism industry is in a state of shock and recovery, as COVID-19 has brought great impacts and challenges to the sector. The theory of sustainable development originates from the contradiction between increasing awareness of environmental protection and the pursuit of economic interests. The sustainable development of tourism should consider social, economic, and environmental factors and develop tourism in a planned and targeted way with the overall situation in view. Guilin is one of the most popular tourist cities in China. However, several problems exist in Guilin tourism, such as the low quality of scenic spot construction and the low efficiency of tourism resource development. Due to poor management, Guilin's tourism industry faces problems such as supply-demand crowding pressure from tourists. The data from 2009 to 2019 show a change in the degree of sustainable development of Guilin tourism. This research aimed to evaluate the sustainable development state of Guilin tourism using the DPSIR (driving force/pressure/state/impact/response) framework and to provide suggestions and recommendations for sustainable development in Guilin. An improved TOPSIS (technique for order preference by similarity to an ideal solution) model based on entropy weights is applied for the quantitative analysis and for analyzing the mechanisms of sustainable development of tourism in Guilin. The DPSIR framework organizes indicators into five sub-categories, into which twenty-eight indicators related to sustainable aspects of Guilin tourism are classified. The study analyzed and summarized the economic, social, and ecological effects generated by tourism development in Guilin from 2009 to 2019. The results show that the conversion of tourism development in Guilin into regional economic benefits is more efficient than its conversion into social benefits. Thus, tourism development is an important driving force of Guilin's economic growth. 
In addition, the study analyzed the static weights of the 28 relevant indicators of the sustainable development of tourism in Guilin and ranked them from largest to smallest. It was found that the economic and social factors related to tourism revenue occupy the highest weights, which means that the economic and social development of Guilin influences the sustainable development of Guilin tourism to a greater extent. There is therefore a two-way causal relationship between tourism development and economic growth in Guilin. At the same time, ecological development-related indicators also have relatively large weights, so ecological and environmental resources likewise have a great influence on the sustainable development of Guilin tourism.
Keywords: DPSIR framework, entropy weights analysis, sustainable development of tourism, TOPSIS analysis
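The entropy-weighted TOPSIS evaluation described above can be sketched as follows. The alternatives (rows), indicators (columns), and their benefit/cost labels are illustrative stand-ins, not the study's 28 DPSIR indicators.

```python
import numpy as np

def entropy_topsis(X, benefit):
    """Rank alternatives (rows, e.g. years) on indicators (columns)
    using entropy weights and the TOPSIS closeness coefficient.
    benefit[j] is True when larger values of indicator j are better."""
    m, n = X.shape
    span = X.max(axis=0) - X.min(axis=0)
    # Min-max normalization, flipping cost-type indicators.
    R = np.where(benefit,
                 (X - X.min(axis=0)) / span,
                 (X.max(axis=0) - X) / span)
    # Entropy weights: more dispersed indicators get larger weights.
    P = (R + 1e-12) / (R + 1e-12).sum(axis=0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(m)
    w = (1.0 - e) / (1.0 - e).sum()
    # Distances to the ideal and anti-ideal solutions.
    V = R * w
    d_best = np.sqrt(((V - V.max(axis=0)) ** 2).sum(axis=1))
    d_worst = np.sqrt(((V - V.min(axis=0)) ** 2).sum(axis=1))
    return d_worst / (d_best + d_worst)   # closeness in [0, 1]

# Three hypothetical years x three hypothetical indicators
# (tourism income, congestion index, wastewater discharge).
X = np.array([[3.0, 200.0, 0.8],
              [5.0, 180.0, 0.6],
              [8.0, 150.0, 0.5]])
closeness = entropy_topsis(X, benefit=np.array([True, False, False]))
```

A year's closeness near 1 means it is near the ideal on every weighted indicator, which is how the degree of sustainable development is scored over 2009-2019.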
Procedia PDF Downloads 98
1502 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings
Authors: Gaelle Candel, David Naccache
Abstract:
t-SNE is an embedding method widely used by the data science community. It helps with two main tasks: displaying results by coloring items according to their class or a feature value, and, for forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, in which all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, in which a cluster's area is proportional to its size in number of items, and relationships between clusters are materialized by closeness in the embedding. This algorithm is non-parametric: the transformation from high- to low-dimensional space is described but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped to the exact same position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once with each newly obtained embedding. 
The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing us to observe the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets' dynamics.
Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning
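The complexity claim above follows from simple counting: t-SNE's dominant cost is one term per ordered pair of points, so embedding k subsets of size n/k in sequence divides the total pairwise work by k.

```python
def pairwise_cost(n):
    # t-SNE's dominant cost: one term per ordered pair of points.
    return n * n

n, k = 100_000, 10
joint = pairwise_cost(n)            # embed all n points at once: O(n^2)
split = k * pairwise_cost(n // k)   # k successive embeddings of n/k points
memory = 2 * (n // k) ** 2          # current embedding + its support

print(joint // split)  # 10, i.e. O(n^2) reduced to O(n^2 / k)
```

The factor-of-2 memory term reflects keeping the support embedding alongside the one being optimized, matching the "memory requirements are only doubled" statement.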
Procedia PDF Downloads 144
1501 Using Google Distance Matrix Application Programming Interface to Reveal and Handle Urban Road Congestion Hot Spots: A Case Study from Budapest
Authors: Peter Baji
Abstract:
In recent years, a growing body of literature has emphasized the increasingly negative impacts of urban road congestion on the everyday life of citizens. Although there are different responses from the public sector to decrease traffic congestion in urban regions, the most effective public intervention is congestion charging. Because travel is an economic asset, its consumption can be controlled effectively by extra taxes or prices, but this demand-side intervention is often unpopular. Measuring traffic flows with the help of different methods has a long history in the transport sciences, but until recently there were not sufficient data for evaluating road traffic flow patterns at the scale of the entire road system of a larger urban area. The European cities (e.g., London, Stockholm, Milan) in which congestion charges have already been introduced designated a particular downtown zone for charging, but this protects only the users and inhabitants of the CBD (Central Business District) area. Through the use of Google Maps data as a resource for revealing urban road traffic flow patterns, this paper aims to provide a solution for a fairer and smarter congestion pricing method in cities. The case study area of the research contains three bordering districts of Budapest which are linked by one main road. The first district (5th) is the original downtown, which is affected by the congestion charge plans of the city. The second district (13th) lies in the transition zone and has recently been transformed into a new CBD containing the biggest office zone in Budapest. The third district (4th) is a mainly residential area on the outskirts of the city. The raw data of the research were collected with the help of Google's Distance Matrix API (Application Programming Interface), which provides estimated future traffic data via travel times between freely chosen coordinate pairs. 
From the difference between free-flow and congested travel time data, the daily congestion patterns and hot spots are detectable on all measured roads within the area. The results suggest that the distribution of congestion peak times and hot spots is uneven in the examined area; however, there are frequently congested areas which lie outside the downtown, and their inhabitants also need some protection. The conclusion of this case study is that cities can develop a real-time, place-based congestion charge system that encourages car users to avoid frequently congested roads by changing their routes or travel modes. This would be a fairer solution for decreasing the negative environmental effects of urban road transportation than protecting a very limited downtown area.
Keywords: Budapest, congestion charge, distance matrix API, application programming interface, pilot study
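The free-flow vs. congested comparison can be sketched from a single Distance Matrix API response element: the API returns a `duration` field (typical travel time) and, when a `departure_time` is requested, a `duration_in_traffic` field. The sample values below are illustrative, not measured Budapest data.

```python
def congestion_ratio(element):
    """Ratio of traffic-aware to free-flow travel time for one
    origin-destination pair of a Distance Matrix API response.
    Values well above 1.0 flag a congestion hot spot."""
    free = element["duration"]["value"]                  # seconds
    congested = element["duration_in_traffic"]["value"]  # seconds
    return congested / free

# One response element in the API's JSON shape (values illustrative).
element = {
    "status": "OK",
    "duration": {"value": 600, "text": "10 mins"},
    "duration_in_traffic": {"value": 900, "text": "15 mins"},
}
print(congestion_ratio(element))  # 1.5
```

Polling many fixed coordinate pairs at different times of day and mapping the ratios yields the daily congestion pattern per road segment described above.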
Procedia PDF Downloads 198
1500 Synthesis, Characterization, Optical and Photophysical Properties of Pyrene-Labeled Ruthenium(Ii) Trisbipyridine Complex Cored Dendrimers
Authors: Mireille Vonlanthen, Pasquale Porcu, Ernesto Rivera
Abstract:
Dendritic macromolecules present unique physical and chemical properties. One of them is the ability to transfer energy from donor moieties introduced at the periphery to an acceptor moiety at the core, mimicking the antenna effect of photosynthesis. The mechanism of energy transfer is based on Förster resonance energy transfer (FRET) and requires some overlap between the emission spectrum of the donor and the absorption spectrum of the acceptor. Since it requires a coupling of transition dipoles but no overlap of the physical wavefunctions, energy transfer by the Förster mechanism can occur over quite long distances, from 1 nm to a maximum of about 10 nm. However, the efficiency of the transfer depends strongly on distance: the Förster radius is the distance at which 50% of the donor's emission is deactivated by FRET. In this work, we synthesized and characterized a novel series of dendrimers bearing pyrene moieties at the periphery and a Ru(II) complex at the core. The optical and photophysical properties of these compounds were studied by absorption and fluorescence spectroscopy. Pyrene is a well-studied chromophore that has the particularity of presenting monomer as well as excimer fluorescence emission. The coordination compounds of Ru(II) are red emitters with low quantum yield and long excited-state lifetimes. We observed an efficient singlet-to-singlet energy transfer in such constructs. Moreover, it is known that the energy of the MLCT emitting state of Ru(II) can be tuned to become almost isoenergetic with the triplet state of pyrene, leading to an extended phosphorescence lifetime. Using dendrimers bearing pyrene moieties as ligands for Ru(II), we could combine the antenna effect of dendrimers, as well as their protection against quenching by dioxygen, with a lifetime increase due to triplet-triplet equilibrium.
Keywords: dendritic molecules, energy transfer, pyrene, ru-trisbipyridine complex
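The strong distance dependence described above follows the standard Förster expression E = R₀⁶ / (R₀⁶ + r⁶), which drops as the sixth power of the donor-acceptor distance. The R₀ value below is a typical order of magnitude, not a measured parameter for the pyrene/Ru(II) pair in this work.

```python
def fret_efficiency(r_nm, r0_nm):
    # Forster energy-transfer efficiency: E = R0^6 / (R0^6 + r^6).
    # By construction E = 0.5 exactly at the Forster radius r = R0.
    return r0_nm ** 6 / (r0_nm ** 6 + r_nm ** 6)

# Illustrative Forster radius of 3 nm.
print(fret_efficiency(3.0, 3.0))  # 0.5
```

Within one Förster radius the transfer is nearly quantitative, while beyond roughly 2 R₀ it is negligible, which is why the dendrimer architecture must hold the pyrene periphery close to the Ru(II) core.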
Procedia PDF Downloads 277
1499 The Role of Institutions in Community Wildlife Conservation in Zimbabwe
Authors: Herbert Ntuli, Edwin Muchapondwa
Abstract:
This study used a sample of 336 households and community-level data from 30 communities around Gonarezhou National Park in Zimbabwe to analyse the association between the ability to self-organize (cooperation) and institutions on the one hand, and the relationship between the success of biodiversity outcomes and cooperation on the other. Using both ordinary least squares and instrumental variables estimation with heteroskedasticity-based instruments, our results confirmed that sound institutions are indeed an important ingredient for cooperation in the respective communities, and that cooperation positively and significantly affects biodiversity outcomes. Group size, community-level trust, the number of stakeholders, and punishment were found to be important variables explaining cooperation. From a policy perspective, our results show that external enforcement of rules and regulations does not necessarily translate into sound ecological outcomes; better outcomes are attainable when punishment is instead endogenized by local communities. This seems to suggest that communities should be supported in such a way that robust institutions, tailor-made to suit local conditions, emerge, which will in turn facilitate good environmental husbandry. Cooperation, training, benefits, distance from the nearest urban center, distance from the fence, social capital, average age of household head, fencing, and information sharing were found to be very important variables explaining the success of biodiversity outcomes, ceteris paribus. Government programmes should target capacity building in terms of institutional capacity and skills development in order to have a positive impact on biodiversity. 
Hence, the role of stakeholders (e.g., NGOs) in capacity building and government effort should complement each other to ensure that the necessary resources are mobilized and all communities receive the necessary training and resources.
Keywords: institutions, self-organize, common pool resources, wildlife, conservation, Zimbabwe
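The OLS-versus-instrumental-variables contrast described above can be sketched on simulated data: when the cooperation variable is endogenous (correlated with an unobserved confounder), OLS is biased while a valid instrument recovers the causal effect. The variables, coefficients, and the single synthetic instrument below are illustrative assumptions, not the study's heteroskedasticity-based construction.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
z = rng.normal(size=n)                       # instrument for cooperation
u = rng.normal(size=n)                       # unobserved confounder
coop = 0.8 * z + u + rng.normal(size=n)      # endogenous regressor
outcome = 2.0 * coop + 3.0 * u + rng.normal(size=n)   # true effect = 2.0

def iv_2sls(y, x, z):
    """Just-identified two-stage least squares for one regressor
    (demeaned, so the intercept drops out): beta = cov(z,y)/cov(z,x)."""
    zc, xc, yc = z - z.mean(), x - x.mean(), y - y.mean()
    return float(zc @ yc / (zc @ xc))

ols = float(np.cov(coop, outcome)[0, 1] / np.var(coop, ddof=1))
iv = iv_2sls(outcome, coop, z)
print(round(ols, 2), round(iv, 2))  # OLS exceeds the true 2.0; IV is close to it
```

The instrument shifts cooperation without directly affecting the outcome, which is exactly the exclusion restriction a heteroskedasticity-based instrument must satisfy.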
Procedia PDF Downloads 281
1498 Reducing CO2 Emission Using EDA and Weighted Sum Model in Smart Parking System
Authors: Rahman Ali, Muhammad Sajjad, Farkhund Iqbal, Muhammad Sadiq Hassan Zada, Mohammed Hussain
Abstract:
Emission of carbon dioxide (CO2) has adversely affected the environment, and one of its major sources is transportation. In the last few decades, the increase in the mobility of people using vehicles has enormously increased the emission of CO2 into the environment. To reduce CO2 emission, a sustainable transportation system is required, within which smart parking is one of the important measures that needs to be established. To contribute to reducing the amount of CO2 emission, this research proposes a smart parking system: a cloud-based solution that automatically searches for and recommends the most preferred parking slots to drivers. To determine the preferences of the parking areas, the methodology exploits a number of unique parking features, which ultimately results in the selection of a parking area that leads to the minimum level of CO2 emission from the current position of the vehicle. To realize the methodology, a scenario-based implementation is considered. During the implementation, a mobile application with GPS signals, vehicles with a number of vehicle features, and a list of parking areas with parking features are used by sorting, multi-level filtering, exploratory data analysis (EDA), the Analytic Hierarchy Process (AHP), and the weighted sum model (WSM) to rank the parking areas and recommend to drivers the top-k most preferred parking areas. In the EDA process, '2020testcar-2020-03-03', a freely available dataset, is used to estimate the CO2 emission of a particular vehicle. To evaluate the system, the results of the proposed system are compared with the conventional approach, which reveals that the proposed methodology supersedes the conventional one in reducing the emission of CO2 into the atmosphere.
Keywords: car parking, CO2, CO2 reduction, IoT, merge sort, number plate recognition, smart car parking
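The weighted sum model (WSM) ranking step can be sketched as follows. The feature names, weights, and parking areas below are hypothetical, not the paper's exact criteria.

```python
def weighted_sum_rank(parkings, weights):
    """Score each parking area as the weighted sum of its normalized
    features and return area names sorted best-first (WSM)."""
    scored = []
    for name, feats in parkings.items():
        score = sum(weights[f] * v for f, v in feats.items())
        scored.append((score, name))
    return [name for _, name in sorted(scored, reverse=True)]

# Features normalized to [0, 1], higher is better; e.g. "closeness"
# is 1 - normalized distance, so nearer lots imply less CO2 emitted
# driving to them. All values are illustrative.
parkings = {
    "P1": {"closeness": 0.9, "free_slots": 0.2, "cost": 0.5},
    "P2": {"closeness": 0.4, "free_slots": 0.9, "cost": 0.8},
    "P3": {"closeness": 0.7, "free_slots": 0.6, "cost": 0.6},
}
weights = {"closeness": 0.5, "free_slots": 0.3, "cost": 0.2}
top_k = weighted_sum_rank(parkings, weights)[:2]
print(top_k)  # ['P3', 'P2']
```

In the full pipeline the weights themselves would come from the AHP step, and the top-k list is what the cloud service returns to the driver's mobile application.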
Procedia PDF Downloads 146
1497 Analysis and Identification of Different Factors Affecting Students’ Performance Using a Correlation-Based Network Approach
Authors: Jeff Chak-Fu Wong, Tony Chun Yin Yip
Abstract:
The transition from secondary school to university seems exciting for many first-year students but can be more challenging than expected. Enabling instructors to know students' learning habits and styles enhances their understanding of the students' learning backgrounds, allows teachers to provide better support for their students, and therefore has high potential to improve teaching quality and learning, especially in any mathematics-related course. The aim of this research is to collect students' data using online surveys, to analyze student factors using learning analytics and educational data mining, and to discover the characteristics of the students at risk of falling behind in their studies based on their previous academic backgrounds and the collected data. In this paper, we use correlation-based distance methods and mutual information to measure relationships among student factors. We then develop a factor network using the minimum spanning tree method and consider further study of the topological properties of these networks using social network analysis tools. Within the framework of mutual information, two graph-based feature filtering methods, i.e., unsupervised and supervised infinite feature selection algorithms, are used to rank and select the appropriate subsets of features from the students' data, yielding effective results in identifying the factors affecting students at risk of failing. This discovered knowledge may help students as well as instructors enhance educational quality by finding possible under-performers at the beginning of the first semester and giving them special attention in order to help their learning process and improve their learning outcomes.
Keywords: students' academic performance, correlation-based distance method, social network analysis, feature selection, graph-based feature filtering method
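The correlation-distance-plus-MST construction can be sketched as follows, using the common distance d = sqrt(2(1 - rho)) and a simple Prim's algorithm. The simulated "student factors" are placeholders, not the survey variables.

```python
import numpy as np

def correlation_mst(X):
    """Build a minimum spanning tree over the variables (columns of X)
    using the correlation-based distance d = sqrt(2 * (1 - rho)),
    via Prim's algorithm. Returns the tree's edges as (i, j) pairs."""
    rho = np.corrcoef(X, rowvar=False)
    d = np.sqrt(np.clip(2.0 * (1.0 - rho), 0.0, None))
    n = d.shape[0]
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None   # cheapest edge leaving the current tree
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and (best is None or d[i, j] < best[2]):
                    best = (i, j, d[i, j])
        edges.append(best[:2])
        in_tree.add(best[1])
    return edges

rng = np.random.default_rng(7)
base = rng.normal(size=(100, 1))
# Five hypothetical student factors; the first three co-move strongly.
X = np.hstack([base + 0.1 * rng.normal(size=(100, 1)) for _ in range(3)]
              + [rng.normal(size=(100, 2))])
edges = correlation_mst(X)
```

Highly correlated factors end up adjacent in the tree, so a factor's degree and centrality in the MST become the social-network-analysis quantities the study examines.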
Procedia PDF Downloads 129
1496 Experimental Measurement of Equatorial Ring Current Generated by Magnetoplasma Sail in Three-Dimensional Spatial Coordinate
Authors: Masato Koizumi, Yuya Oshio, Ikkoh Funaki
Abstract:
Magnetoplasma Sail (MPS) is a future spacecraft propulsion system that generates high levels of thrust by inducing an artificial magnetosphere to capture and deflect solar wind charged particles in order to transfer momentum to the spacecraft. When plasma is injected into the spacecraft's magnetic field region, a ring current azimuthally drifts on the equatorial plane of the dipole magnetic field generated by the current flowing through the solenoid attached on board the spacecraft. This ring current results in magnetosphere inflation, which improves the thrust performance of the MPS spacecraft. In the present study, the ring current was experimentally measured using three Rogowski current probes positioned in a circular array about a laboratory model of the MPS spacecraft. This investigation aims to determine the detailed structure of the ring current through physical experiments performed under two different magnetic field strengths, obtained by varying the voltage applied to the solenoid between 300 V and 600 V. The expected outcome was that the three current probes would detect the same current, since all three probes were positioned at an equal radial distance of 63 mm from the center of the solenoid. Although the experimental results were numerically implausible, probably due to procedural error, their trends revealed three pieces of perceptive evidence about the ring current behavior. The first aspect is that the drift direction of the ring current depended on the strength of the applied magnetic field. The second aspect is that, in the presence of solar wind, the diamagnetic current developed at a radial distance not occupied by the three current probes. The third aspect is that the ring current distribution varied along the circumferential path about the spacecraft's magnetic field. 
Although this study yielded experimental evidence that differed from the original hypothesis, the three key findings of this study have informed two critical MPS design solutions that will potentially improve thrust performance. The first design solution is the positioning of the plasma injection point. Based on the implication of the first of the three aspects of ring current behavior, the plasma injection point must be located at a distance from the MPS solenoid, rather than in close proximity to it, for the ring current to drift in the direction that results in magnetosphere inflation. The second design solution, predicated on the third aspect of ring current behavior, is the symmetrical configuration of plasma injection points. In this study, an asymmetrical configuration of plasma injection points using one plasma source resulted in a non-uniform distribution of ring current along the azimuthal path. This distorts the geometry of the inflated magnetosphere, which reduces the deflection area for the solar wind. Therefore, to realize a ring current that best provides the maximum possible inflated magnetosphere, multiple plasma sources must be spaced evenly apart so that the plasma is injected evenly along its azimuthal path.
Keywords: Magnetoplasma Sail, magnetosphere inflation, ring current, spacecraft propulsion
Procedia PDF Downloads 310
1495 The Current Practices of Analysis of Reinforced Concrete Panels Subjected to Blast Loading
Authors: Palak J. Shukla, Atul K. Desai, Chentankumar D. Modhera
Abstract:
For any country in the world, protecting critical infrastructure from the looming risk of terrorism has become a priority. In any infrastructure system, structural elements such as lower floors, exterior columns, and walls are the elements most susceptible to damage from blast loads. The present study revisits the state-of-the-art in the design and analysis of reinforced concrete panels subjected to blast loading. Various aspects associated with blast loading on structures, i.e., estimation of blast load, experimental works carried out previously, numerical simulation tools, various material models, etc., are considered in order to explore the current practices adopted worldwide. Parametric studies investigating the effects of reinforcement ratio, slab thickness, charge weight, and standoff distance are also discussed. It was observed that, for the simulation of blast loads, the CONWEP blast function or equivalent numerical equations have been successfully employed by many researchers. The literature indicates that the research was carried out using experimental work and numerical simulation in well-known general-purpose finite element codes, e.g., LS-DYNA, ABAQUS, and AUTODYN. Many researchers recommended using a concrete damage model to represent concrete and a plastic kinematic material model to represent steel under the action of blast loads in most numerical simulations. Most of the studies reveal that increasing the reinforcement ratio, slab thickness, or standoff distance resulted in better blast resistance of the reinforced concrete panel. The study summarizes the various research results and presents the current state of knowledge for structures exposed to blast loading.
Keywords: blast phenomenon, experimental methods, material models, numerical methods
Procedia PDF Downloads 157
1494 Model for Calculating Traffic Mass and Deceleration Delays Based on Traffic Field Theory
Authors: Liu Canqi, Zeng Junsheng
Abstract:
This study identifies two typical bottlenecks that occur when a vehicle cannot change lanes: car following and car stopping. The ideas of the traffic field and traffic mass are presented in this work. When there are other vehicles in front of the target vehicle within a particular distance, a force is created that affects the target vehicle's driving speed. The characteristics of the driver and the vehicle collectively determine the traffic mass; the driving speed of the vehicle and external variables have no bearing on it. At the physical level, this study examines the bottleneck a vehicle faces when following a car, identifies the external factors that affect how it drives, takes into account that the vehicle transforms kinetic energy into potential energy during deceleration, and builds a calculation model for traffic mass. From an economic standpoint, the energy-time conversion coefficient is derived from the average social wage level and the average cost of motor fuel. The Vissim simulation program is used to measure the vehicle's deceleration distance and delay under the Wiedemann car-following model. Using the conversion model between traffic mass and deceleration delay, the measured deceleration delay obtained by simulation is compared with the theoretical value calculated by the model. The experimental data demonstrate that the model is reliable, since the error rate between the theoretical deceleration delay calculated by the model and the measured simulation value is less than 10%. The article concludes that the traffic field has an impact on moving cars on the road and that physical and socioeconomic factors should be taken into account when studying car-following behavior.
A vehicle's deceleration delay and its traffic mass have a socioeconomic relationship that can be utilized to calculate the energy-time conversion coefficient when dealing with the bottleneck of cars stopping and starting.
Keywords: traffic field, social economics, traffic mass, bottleneck, deceleration delay
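The model validation described above reduces to a relative-error check between the model's predicted deceleration delay and the Vissim measurement, accepted when the error is under 10%. A minimal sketch in Python; the delay values are hypothetical, not taken from the paper:

```python
def relative_error(theoretical: float, measured: float) -> float:
    """Relative error between a model prediction and a simulation measurement."""
    return abs(theoretical - measured) / measured

# Hypothetical deceleration delays in seconds: model prediction vs. Vissim measurement.
theory, vissim = 4.2, 4.5
err = relative_error(theory, vissim)
model_reliable = err < 0.10  # the paper's 10% acceptance threshold
```

The same check would be repeated over many simulated scenarios before declaring the model reliable.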
Procedia PDF Downloads 67
1493 Building an Arithmetic Model to Assess Visual Consistency in Townscape
Authors: Dheyaa Hussein, Peter Armstrong
Abstract:
The phenomenon of visual disorder is prominent in contemporary townscapes. This paper provides a theoretical framework for the assessment of visual consistency in townscape in order to achieve more favourable outcomes for users. In this paper, visual consistency refers to the amount of similarity between adjacent components of a townscape. The paper investigates parameters which relate to visual consistency in townscape, explores the relationships between them and highlights their significance. The paper uses arithmetic methods from outside the domain of urban design to establish an objective approach to assessment which considers subjective indicators, including users’ preferences. These methods involve standard deviation, colour distance and the distance between points. The paper identifies urban space as a key representative of the visual parameters of townscape. It focuses on its two components, geometry and colour, in the evaluation of the visual consistency of townscape. Accordingly, this article proposes four measurements. The first quantifies the number of vertices, which are points in three-dimensional space that are connected by lines to represent the appearance of elements. The second evaluates the visual surroundings of urban space by assessing the location of their vertices. The last two measurements calculate the visual similarity in both vertices and colour in townscape by computing their variation using methods including standard deviation and colour difference. The proposed quantitative assessment is based on users’ preferences towards these measurements. The paper offers a theoretical basis for a practical tool which can alter the current understanding of architectural form and its application in urban space. This tool is currently under development.
The proposed method underpins expert subjective assessment and permits the establishment of a unified framework which supports creativity by achieving a higher level of consistency and satisfaction among the citizens of evolving townscapes.
Keywords: townscape, urban design, visual assessment, visual consistency
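The two ingredients named above, standard deviation of a geometric attribute and colour distance, can be sketched in a few lines. This is a minimal illustration only, assuming simple Euclidean distance in RGB space and hypothetical vertex counts for adjacent facades; the paper's actual measurements and colour-difference formula may differ:

```python
import math
from statistics import pstdev

def colour_distance(c1, c2):
    """Euclidean distance between two RGB colours (a simple colour-difference metric)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def consistency_score(values):
    """Population standard deviation: lower spread means more similar adjacent components."""
    return pstdev(values)

# Hypothetical vertex counts of adjacent facades, and two dominant facade colours.
vertex_counts = [120, 118, 122, 119]
facade_a, facade_b = (200, 180, 150), (205, 178, 148)
geometry_spread = consistency_score(vertex_counts)   # small -> visually consistent geometry
colour_gap = colour_distance(facade_a, facade_b)     # small -> visually consistent colour
```

Users' preference weights would then be applied on top of these raw scores.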
Procedia PDF Downloads 313
1492 Design and Characterization of Ecological Materials Based on Demolition and Concrete Waste, Casablanca (Morocco)
Authors: Mourad Morsli, Mohamed Tahiri, Azzedine Samdi
Abstract:
Cities are the urbanized territories most prone to the consumption of resources (materials, energy). In Morocco, the economic capital Casablanca is one of them, with its 4M inhabitants and its 60% share of the economic and industrial activity of the kingdom. In the absence of a legal framework in force, urban development has favored the generation of millions of tons of demolition and construction waste scattered in open spaces, causing a significant nuisance to the environment and citizens. Hence, the main objective of our work is to valorize concrete waste. The representative wastes are mainly concrete, fired clay bricks, ceramic tiles, marble panels, gypsum, and scrap metal. The work carried out includes: geolocation with a combination of artificial intelligence, GIS, and Google Earth, which allowed the estimation of the quantity of these wastes per site; then the sorting, crushing, grinding, and physicochemical characterization of the collected samples, which allowed exploitation routes to be defined for each extracted fraction for integrated management of the said wastes. In the present work, we exploited the fractions obtained after sieving the representative samples to incorporate them in the manufacture of new ecological construction materials. The prepared formulations have been tested and characterized on physical criteria (specific surface, resistance to flexion and compression) and appearance (cracks, deformation). We will present in detail the main results of our research work and also describe the specific properties of each material developed.
Keywords: demolition and construction waste, GIS combination software, inert waste recovery, ecological materials, Casablanca, Morocco
Procedia PDF Downloads 135
1491 An Integrated Experimental and Numerical Approach to Develop an Electronic Instrument to Study Apple Bruise Damage
Authors: Paula Pascoal-Faria, Rúben Pereira, Elodie Pinto, Miguel Belbut, Ana Rosa, Inês Sousa, Nuno Alves
Abstract:
Apple bruise damage from harvesting, handling, transporting and sorting is considered to be the major source of reduced fruit quality, resulting in loss of profits for the entire fruit industry. The three factors which can physically cause fruit bruising are vibration, compression load and impact, the latter being the most common source of bruise damage. Therefore, prediction of the level of damage, stress distribution and deformation of the fruit under external force has become a very important challenge. In this study, experimental and numerical methods were used to better understand the impact caused when an apple is dropped from different heights onto a plastic surface and a conveyor belt. Results showed that the extent of fruit damage is significantly higher for the plastic surface and depends on the drop height. In order to support the development of a biomimetic electronic device for the determination of fruit damage, the mechanical properties of the apple fruit were determined using mechanical tests. Preliminary results showed different values for the Young’s modulus according to the zone of the apple tested. Along with the mechanical characterization of the apple fruit, the development of the first two prototypes is discussed, and the integration of the results obtained to construct the finite element model of the apple is presented. This work will help to significantly reduce the bruise damage of fruits and vegetables during processing, which will open new export destinations and consequently increase the economic profits in this sector.
Keywords: apple, fruit damage, impact during crop and post-crop, mechanical characterization of the apple, numerical evaluation of fruit damage, electronic device
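The Young's modulus estimation mentioned above follows the standard definition E = stress / strain from a compression test. A minimal sketch, with entirely hypothetical test values (force, sample geometry, and deformation are illustrative, not the study's data):

```python
def youngs_modulus(force_n: float, area_m2: float, delta_l_m: float, l0_m: float) -> float:
    """Young's modulus E = stress / strain = (F / A) / (dL / L0), in pascals."""
    stress = force_n / area_m2          # Pa
    strain = delta_l_m / l0_m           # dimensionless
    return stress / strain

# Hypothetical compression-test reading on one zone of apple flesh.
E = youngs_modulus(force_n=5.0, area_m2=1e-4, delta_l_m=0.002, l0_m=0.02)  # in Pa
```

Repeating this over samples cut from different zones of the apple would reproduce the zone-dependent moduli the abstract reports.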
Procedia PDF Downloads 305
1490 The Problem of the Use of Learning Analytics in Distance Higher Education: An Analytical Study of the Open and Distance University System in Mexico
Authors: Ismene Ithai Bras-Ruiz
Abstract:
Learning Analytics (LA) is employed by universities not only as a tool but as a specialized field to support students and professors. However, not all academic programs apply LA with the same goal or use the same tools. In fact, LA comprises five main fields of study (academic analytics, action research, educational data mining, recommender systems, and personalized systems). These fields can help not just to inform academic authorities about the situation of the program, but also to detect at-risk students, professors with needs, or general problems. The highest level applies Artificial Intelligence techniques to support learning practices. LA has adopted different techniques: statistics, ethnography, data visualization, machine learning, natural language processing, and data mining. Each academic program is expected to decide which field to utilize on the basis of its academic interests, but also its capacities in terms of professors, administrators, systems, logistics, data analysts, and academic goals. The Open and Distance University System (SUAYED in Spanish) of the National Autonomous University of Mexico (UNAM) has been working for forty years as an alternative to traditional programs; one of its main supports has been the use of new information and communications technologies (ICT). Today, UNAM has one of the largest networks of distance higher education programs, with twenty-six academic programs in different faculties. This means that every faculty works with heterogeneous populations and academic problems. In this sense, every program has developed its own Learning Analytics techniques to improve academic issues. In this context, an investigation was carried out to determine the state of the application of LA in the academic programs of the different faculties.
The premise of the study was that not all the faculties have utilized advanced LA techniques, and it is probable that they do not know which field of study is closest to their program goals. Consequently, not all programs know about LA, but this does not mean they do not work with it in a veiled or less explicit sense. It is very important to know the degree of knowledge about LA for two reasons: 1) it allows the administration's work to improve the quality of teaching to be appreciated, and 2) it shows whether other LA techniques could be adopted. For this purpose, three instruments were designed to determine the experience and knowledge of LA. These were applied to ten faculty coordinators and their personnel; thirty members were consulted (academic secretary, systems manager or data analyst, and coordinator of the program). The final report showed that almost all the programs work with basic statistical tools and techniques; this helps the administration only to know what is happening inside the academic program, but they are not ready to move up to the next level, which means applying Artificial Intelligence or recommender systems to reach a personalized learning system. This situation is not related to the knowledge of LA, but to the clarity of the long-term goals.
Keywords: academic improvements, analytical techniques, learning analytics, personnel expertise
Procedia PDF Downloads 128
1489 Multisensory Science, Technology, Engineering and Mathematics Learning: Combined Hands-on and Virtual Science for Distance Learners of Food Chemistry
Authors: Paulomi Polly Burey, Mark Lynch
Abstract:
It has been shown that laboratory activities can help cement understanding of theoretical concepts, but it is difficult to deliver such an activity to an online cohort, and issues such as occupational health and safety in the students’ learning environment need to be considered. Chemistry, in particular, is one of the sciences where practical experience is beneficial for learning; however, typical university experiments may not be suitable for the learning environment of a distance learner. Food provides an ideal medium for demonstrating chemical concepts, and along with a few simple physical and virtual tools provided by educators, analytical chemistry can be experienced by distance learners. Food chemistry experiments were designed to be carried out in a home-based environment that 1) had sufficient scientific rigour and skill-building to reinforce theoretical concepts; 2) were safe for use at home by university students; and 3) had the potential to enhance student learning by linking simple hands-on laboratory activities with high-level virtual science. Two main components of the resources were developed: a home laboratory experiment component and a virtual laboratory component. For the home laboratory component, students were provided with laboratory kits, as well as a list of supplementary inexpensive chemical items that they could purchase from hardware stores and supermarkets. The experiments used were typical proximate analyses of food, as well as experiments focused on techniques such as spectrophotometry and chromatography. Written instructions for each experiment coupled with video laboratory demonstrations were used to train students in appropriate laboratory technique. Data that students collected in their home laboratory environment were collated across the class through shared documents, so that the group could carry out statistical analysis and experience a full laboratory workflow from their own homes.
For the virtual laboratory component, students viewed a laboratory safety induction and were advised on the characteristics of a good home laboratory space prior to carrying out their experiments. Following this activity, students observed laboratory demonstrations of the experimental series they would carry out in their own learning environment. Finally, students were embedded in a virtual laboratory environment to experience complex chemical analyses with equipment that would be too costly and sensitive to be housed in their learning environment. To investigate the impact of the intervention, students were surveyed before and after the laboratory series to evaluate engagement and satisfaction with the course. Students were also assessed on their understanding of theoretical chemical concepts before and after the laboratory series to determine the impact on their learning. At the end of the intervention, focus groups were run to determine which aspects helped and hindered learning. It was found that the physical experiments helped students to understand laboratory technique, as well as methodology interpretation, particularly if they had not been in such a laboratory environment before. The virtual learning environment aided learning as it could be utilized for longer than a typical physical laboratory class, thus allowing more time for understanding techniques.
Keywords: chemistry, food science, future pedagogy, STEM education
Procedia PDF Downloads 168
1488 Quantitative Analysis of Camera Setup for Optical Motion Capture Systems
Authors: J. T. Pitale, S. Ghassab, H. Ay, N. Berme
Abstract:
Biomechanics researchers commonly use marker-based optical motion capture (MoCap) systems to extract human body kinematic data. These systems use cameras to detect passive or active markers placed on the subject. The cameras use triangulation methods to form images of the markers, which typically requires each marker to be visible to at least two cameras simultaneously. Cameras in a conventional optical MoCap system are mounted at a distance from the subject, typically on walls, ceilings, or fixed or adjustable frame structures. To accommodate space constraints, and as portable force measurement systems gain popularity, there is a need for smaller and smaller capture volumes. When the efficacy of a MoCap system is investigated, it is important to consider the tradeoff among camera distance from the subject, pixel density, and field of view (FOV). If cameras are mounted relatively close to a subject, the area corresponding to each pixel decreases, thus increasing the image resolution. However, the cross section of the capture volume also decreases, reducing the visible area; additional cameras may therefore be required in such applications. On the other hand, mounting cameras relatively far from the subject increases the visible area but reduces the image quality. The goal of this study was to develop a quantitative methodology to investigate marker occlusions and optimize camera placement for a given capture volume and subject postures using three-dimensional computer-aided design (CAD) tools. We modeled a 4.9m x 3.7m x 2.4m (LxWxH) MoCap volume and designed a mounting structure for cameras using SOLIDWORKS (Dassault Systèmes, MA, USA). The FOV was used to generate the capture volume for each camera placed on the structure. A human body model with configurable posture was placed at the center of the capture volume in the CAD environment. We studied three postures: initial contact, mid-stance, and early swing.
The human body CAD model was adjusted for each posture based on the range of joint angles. Markers were attached to the model to enable a full-body capture. The cameras were placed around the capture volume at a maximum distance of 2.7m from the subject. We used the Camera View feature in SOLIDWORKS to generate images of the subject as seen by each camera, and the number of markers visible to each camera was tabulated. The approach presented in this study provides a quantitative method to investigate the efficacy and efficiency of a MoCap camera setup. It enables optimization of a camera setup by adjusting the position and orientation of cameras in the CAD environment and quantifying marker visibility. It is also possible to compare different camera setup options on the same quantitative basis. The flexibility of the CAD environment enables accurate representation of the capture volume, including any objects that may cause obstructions between the subject and the cameras. With this approach, it is possible to compare different camera placement options to each other, as well as optimize a given camera setup based on quantitative results.
Keywords: motion capture, cameras, biomechanics, gait analysis
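The distance/pixel-density tradeoff described above can be quantified with simple pinhole-camera geometry: the width of the view at distance d is 2·d·tan(FOV/2), so the ground footprint of one pixel grows linearly with distance. A minimal sketch; the 60-degree FOV and 1280-pixel sensor width are hypothetical, not the study's cameras:

```python
import math

def pixel_footprint(distance_m: float, fov_deg: float, pixels_across: int) -> float:
    """Approximate width (m) covered by one pixel at a given camera distance.

    View width at the subject plane = 2 * d * tan(FOV / 2); divide by the
    horizontal sensor resolution to get the per-pixel footprint.
    """
    view_width = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return view_width / pixels_across

# Hypothetical camera: 60-degree horizontal FOV, 1280 px across.
near = pixel_footprint(2.7, 60.0, 1280)   # camera at the study's 2.7 m maximum distance
far = pixel_footprint(5.4, 60.0, 1280)    # same camera, twice as far away
# Doubling the distance doubles the per-pixel footprint, i.e. halves the
# effective marker resolution -- the tradeoff the abstract describes.
```

The same function lets a designer pick the closest distance whose view still covers the whole capture volume.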
Procedia PDF Downloads 310
1487 Household Knowledge, Attitude, and Determinants in Solid Waste Segregation: The Case of Sfax City
Authors: Leila Kharrat, Younes Boujelbene
Abstract:
In recent decades, solid waste management (SWM) has become a global concern because rapid population growth and overexploitation of non-renewable resources have generated enormous amounts of waste far exceeding carrying capacity, posing serious threats to the environment and health. However, it is still difficult to combat the growing amount of solid waste without first assessing the condition of the people concerned. Therefore, this study was conducted to assess the knowledge, attitudes, perceptions, and practices regarding the separation of solid waste in Sfax City. Nowadays, solid waste management is essential for sustainable development, hence the need for intensive research. Respondents from seven different districts in the city of Sfax were analyzed through a questionnaire survey of 342 households. This paper presents a qualitative exploratory study of the behavior of citizens in the field of waste separation. The objective is to identify the antecedents of waste separation and the representations that individuals have about sorting waste in a specific territory with particular waste management characteristics, here the city of Sfax. Source separation is not widely practiced, and people usually sweep their premises, throwing waste components into the streets or neighboring plots. The results also indicate that participation in solid waste separation activities depends on the level of awareness of separation activities in the area, household income, and educational level. It is, therefore, argued that increasing the quality of municipal services is the best means of promoting positive attitudes towards solid waste separation activities.
One effective strategy identified by households that can be initiated by policymakers to increase the rate of participation in separation activities, and eventually encourage participation in recycling activities, is to provide a financial incentive in all residential areas of Sfax city.
Keywords: solid waste management, waste separation, public policy, econometric modelling
Procedia PDF Downloads 237
1486 CO2 Emission and Cost Optimization of Reinforced Concrete Frame Designed by Performance Based Design Approach
Authors: Jin Woo Hwang, Byung Kwan Oh, Yousok Kim, Hyo Seon Park
Abstract:
As the greenhouse effect has been recognized as a serious environmental problem of the world, interest in carbon dioxide (CO2) emissions, which comprise a major part of greenhouse gas (GHG) emissions, has increased recently. Since the construction industry accounts for a relatively large portion of total global CO2 emissions, extensive studies on reducing CO2 emissions in the construction and operation of buildings have been carried out since the 2000s. Also, the performance based design (PBD) methodology based on nonlinear analysis has been robustly developed since the Northridge Earthquake in 1994 to assure and assess the seismic performance of buildings more exactly, because structural engineers recognized that the prescriptive code-based design approach cannot address inelastic earthquake responses directly or assure building performance exactly. Although CO2 emissions and the PBD approach are both rising issues in the construction industry and structural engineering, few or no studies have considered these two issues simultaneously. Thus, the objective of this study is to minimize the CO2 emissions and cost of a building designed by the PBD approach at the structural design stage, considering structural materials. A 4-story, 4-span reinforced concrete building was optimally designed to minimize the CO2 emissions and cost of the building and to satisfy a specific seismic performance objective (collapse prevention under the maximum considered earthquake) while satisfying prescriptive code regulations, using the non-dominated sorting genetic algorithm-II (NSGA-II). The optimized design showed that minimized CO2 emissions and cost were achieved while satisfying the specified seismic performance. Therefore, the methodology proposed in this paper can be used to reduce both the CO2 emissions and the cost of buildings designed by the PBD approach.
Keywords: CO2 emissions, performance based design, optimization, sustainable design
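NSGA-II, used above to trade off CO2 emissions against cost, ranks designs by non-dominated sorting and then breaks ties within a front by crowding distance, which rewards solutions in sparse regions of objective space. A minimal sketch of the standard crowding-distance computation on a hypothetical front of (CO2, cost) design points (the objective values are illustrative, not the study's data):

```python
def crowding_distance(front):
    """Crowding distance for one non-dominated front (list of objective tuples),
    as used in NSGA-II's crowded-comparison sorting to preserve diversity."""
    n = len(front)
    if n == 0:
        return []
    num_objectives = len(front[0])
    dist = [0.0] * n
    for obj in range(num_objectives):
        # Sort solution indices by this objective.
        order = sorted(range(n), key=lambda i: front[i][obj])
        lo, hi = front[order[0]][obj], front[order[-1]][obj]
        # Boundary solutions are always kept: infinite distance.
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue
        # Interior solutions accumulate the normalized gap between their neighbors.
        for k in range(1, n - 1):
            dist[order[k]] += (front[order[k + 1]][obj] - front[order[k - 1]][obj]) / (hi - lo)
    return dist

# Hypothetical front: (CO2 emissions, cost) of three candidate frame designs.
front = [(10.0, 5.0), (8.0, 7.0), (6.0, 9.0)]
d = crowding_distance(front)  # boundary designs get infinite distance
```

When truncating a population, NSGA-II keeps the solutions with the largest crowding distance within the last admitted front.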
Procedia PDF Downloads 406
1485 C-eXpress: A Web-Based Analysis Platform for Comparative Functional Genomics and Proteomics in Human Cancer Cell Line, NCI-60 as an Example
Authors: Chi-Ching Lee, Po-Jung Huang, Kuo-Yang Huang, Petrus Tang
Abstract:
Background: Recent advances in high-throughput research technologies such as next-generation sequencing and multi-dimensional liquid chromatography make it possible to dissect the complete transcriptome and proteome in a single run for the first time. However, it is almost impossible for many laboratories to handle and analyze these “BIG” data without the support of a bioinformatics team. We aimed to provide a web-based analysis platform for users with only limited knowledge of bio-computing to study functional genomics and proteomics. Method: We use NCI-60 as an example dataset to demonstrate the power of the web-based analysis platform and data delivery system. C-eXpress takes a simple text file that contains standard NCBI gene or protein IDs and expression levels (RPKM or fold) as an input file to generate a distribution map of gene/protein expression levels in a heatmap diagram organized by color gradients. The diagram is hyperlinked to a dynamic HTML table that allows the users to filter the datasets based on various gene features. A dynamic summary chart is generated automatically after each filtering process. Results: We implemented an integrated database that contains pre-defined annotations such as gene/protein properties (ID, name, length, MW, pI); pathways based on KEGG and GO biological process; subcellular localization based on GO cellular component; and functional classification based on GO molecular function, kinase, peptidase, and transporter. Multiple ways of sorting columns and rows are also provided for comparative analysis and visualization of multiple samples.
Keywords: cancer, visualization, database, functional annotation
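The heatmap-by-color-gradient idea above amounts to mapping each expression value onto a color scale, cell by cell. A minimal sketch assuming a common blue-white-red gradient over log2-fold values; the gene IDs, value range, and gradient choice are all hypothetical, not C-eXpress internals:

```python
def fold_to_hex(fold: float, lo: float = -3.0, hi: float = 3.0) -> str:
    """Map an expression fold value onto a blue-white-red gradient hex colour,
    the kind of colour a heatmap cell (or HTML table cell) would use."""
    t = max(0.0, min(1.0, (fold - lo) / (hi - lo)))  # clamp to [0, 1]
    if t < 0.5:          # blue -> white over the lower half
        r = g = int(510 * t)
        b = 255
    else:                # white -> red over the upper half
        r = 255
        g = b = int(510 * (1.0 - t))
    return f"#{r:02x}{g:02x}{b:02x}"

# Hypothetical input rows: NCBI-style gene ID and expression fold.
rows = [("NM_000546", -2.1), ("NM_005228", 0.0), ("NM_004333", 2.7)]
cells = {gene: fold_to_hex(fold) for gene, fold in rows}
```

Rendering each `cells[gene]` as a table-cell background colour reproduces one row of such a heatmap.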
Procedia PDF Downloads 619
1484 Chi Square Confirmation of Autonomic Functions Percentile Norms of Indian Sportspersons Withdrawn from Competitive Games and Sports
Authors: Pawan Kumar, Dhananjoy Shaw, Manoj Kumar Rathi
Abstract:
The purposes of the study were to compare (a) frequencies among the four quartiles of percentile norms of autonomic variables from power events and (b) frequencies among the four quartiles of percentile norms of autonomic variables from aerobic events of Indian sportspersons withdrawn from competitive games and sports, with regard to the number of samples falling in each quartile. The study was conducted on 430 males of 30 to 35 years of age. Based on the nature of their game/sport, the retired sportspersons were classified into power events (throwers, judo players, wrestlers, short distance swimmers, cricket fast bowlers and power lifters) and aerobic events (long distance runners, long distance swimmers, water polo players). Data were collected using ECG polygraphs. Data were processed and extracted using frequency domain analysis and time domain analysis. The frequency and percentage for each quartile were computed, and finally the frequencies were compared using chi-square analysis.
The findings pertaining to the norm-referenced comparison of frequencies among the four quartiles of Indian sportspersons withdrawn from competitive games and sports from (a) power events suggest that the frequency distributions in the four quartiles, namely Q1, Q2, Q3, and Q4, are significantly different at the .05 level for the variables SDNN, total power (absolute power), HF (absolute power), LF (normalized power), HF (normalized power), LF/HF ratio, deep breathing test, expiratory respiratory ratio, valsalva manoeuvre, hand grip test, cold pressor test and lying to standing test, whereas they are insignificantly different at the .05 level for the variables SDSD, RMSSD, SDANN, NN50 count, pNN50 count, LF (absolute power) and 30:15 ratio; (b) aerobic events suggest that the frequency distributions in the four quartiles are significantly different at the .05 level for the variables SDNN, LF (normalized power), HF (normalized power), LF/HF ratio, deep breathing test, expiratory respiratory ratio, hand grip test, cold pressor test, lying to standing test and 30:15 ratio, whereas they are insignificantly different at the .05 level for the variables SDSD, RMSSD, SDANN, NN50 count, pNN50 count, total power (absolute power), LF (absolute power), HF (absolute power), and valsalva manoeuvre. The study concluded that the frequencies among the four quartiles of Indian retired sportspersons from power events and aerobic events differ across quartiles for the selected autonomic functions; hence the developed percentile norms are not homogeneously distributed across the percentile scale, which strengthens the percentage distribution towards a normal distribution.
Keywords: power, aerobic, absolute power, normalized power
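The quartile comparison above is a chi-square goodness-of-fit test of observed quartile frequencies against a uniform expected distribution (equal counts in Q1..Q4, df = 3, critical value 7.815 at the .05 level). A minimal sketch with hypothetical counts, not the study's data:

```python
def chi_square_stat(observed):
    """Chi-square statistic for observed frequencies against a uniform
    expected distribution (equal expected counts in every category)."""
    expected = sum(observed) / len(observed)
    return sum((o - expected) ** 2 / expected for o in observed)

# Hypothetical quartile counts for one autonomic variable (430 subjects split 4 ways).
counts = [140, 110, 95, 85]
stat = chi_square_stat(counts)
# Compare against the chi-square critical value for alpha = .05, df = 3.
significant = stat > 7.815
```

A significant result for a variable means its percentile norm is not homogeneously distributed across the four quartiles, which is the paper's criterion.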
Procedia PDF Downloads 354
1483 Finite Element-Based Stability Analysis of Roadside Settlements Slopes from Barpak to Yamagaun through Laprak Village of Gorkha, an Epicentral Location after the 7.8Mw 2015 Barpak, Gorkha, Nepal Earthquake
Authors: N. P. Bhandary, R. C. Tiwari, R. Yatabe
Abstract:
The research employs the finite element method to evaluate the stability of roadside settlement slopes from Barpak to Yamagaun through Laprak village of Gorkha, Nepal, after the 7.8Mw 2015 Barpak, Gorkha, Nepal earthquake. It covers three major villages of Gorkha, i.e., Barpak, Laprak and Yamagaun, that were devastated by the 2015 Gorkha earthquake. The road-head distances from Barpak to Laprak and from Laprak to Yamagaun are about 14 and 29 km, respectively. The epicentral distances of the main shock of magnitude 7.8 and the aftershock of magnitude 6.6 were respectively 7 and 11 kilometers (south-east) from Barpak village, nearer to Laprak and Yamagaun. It is also believed that the epicenter of the main shock, contrary to what has been said until now, was not in Barpak village but somewhere near Yamagaun village. The chaos experienced during the earthquake in Yamagaun was much greater than in Barpak. In this context, we have carried out a detailed study to investigate the stability of the Yamagaun settlement slope as a case study, where ground fissures, ground settlement, multiple cracks and toe failures are the most severe. In this regard, the stability issues of the existing settlements and the proposed road alignment on the Yamagaun village slope, which is surrounded by many newly activated landslides, are addressed. Looking at the importance of this issue, a field survey was carried out to understand the behavior of ground fissures and the multiple failure characteristics of the slopes. The results suggest that the Yamagaun slope at Profiles 2-2, 3-3 and 4-4 is not safe enough for infrastructure development even under normal soil slope conditions as per material models 2, 3 and 4; however, the slope seems quite safe at Profile 1-1 for all 4 material models. The results also indicate that the first three profiles are marginally safe for material models 2, 3 and 4, respectively. Profile 4-4 is not safe enough for all 4 material models.
Thus, Profile 4-4 needs special care to make the slope stable.
Keywords: earthquake, finite element method, landslide, stability
Procedia PDF Downloads 348
1482 Identification of Clay Mineral for Determining Reservoir Maturity Levels Based on Petrographic Analysis, X-Ray Diffraction and Porosity Test on Penosogan Formation Karangsambung Sub-District Kebumen Regency Central Java
Authors: Ayu Dwi Hardiyanti, Bernardus Anggit Winahyu, I. Gusti Agung Ayu Sugita Sari, Lestari Sutra Simamora, I. Wayan Warmada
Abstract:
The Penosogan Formation sandstone, which is Middle Miocene in age, has been identified as a potential reservoir based on sandstone outcrop samples from Kebakalan and Kedawung villages, Karangsambung sub-district, Kebumen Regency, Central Java. This research employs the following analytical methods: petrography, X-ray diffraction (XRD), and porosity testing. Based on the presence of micritic sandstone, muddy micrite, and muddy sandstone, the Penosogan Formation sandstone has fine to coarse grain sizes and moderate sorting. The sandstone is composed mostly of plagioclase and skeletal grains, with traces of micrite. The percentage of clay minerals based on petrographic analysis is 10%; the clay appears to envelop the grains, and these grain coatings reduce the porosity of the rocks. The porosity types are as follows: interparticle, vuggy, channel, and shelter, with an equant form of cement. Moreover, the diagenetic processes involve compaction, cementation, authigenic mineral growth, and dissolution due to feldspar alteration. The maturity of the reservoir can be inferred from the X-ray diffraction results, in which the clay-mineral fraction was treated with ethylene glycol to identify the transformation from smectite to illite. The porosity test showed that the Penosogan Formation sandstone has a porosity value of 22%, classified after Koesoemadinata (1980). This high maturity strongly influences the quality of the Penosogan Formation sandstone reservoir.
Keywords: sandstone reservoir, Penosogan Formation, smectite, XRD
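The porosity-based quality assessment in the abstract above can be sketched as a simple lookup. The class boundaries below follow the commonly cited Koesoemadinata (1980) porosity scale; they are an assumption for illustration, not values stated in the abstract:

```python
def classify_porosity(phi_percent):
    """Classify reservoir porosity (in percent) on the Koesoemadinata (1980)
    scale. Class boundaries are assumed from common usage of the scale."""
    if not 0 <= phi_percent <= 100:
        raise ValueError("porosity must be between 0 and 100 percent")
    if phi_percent < 5:
        return "negligible"
    if phi_percent < 10:
        return "poor"
    if phi_percent < 15:
        return "fair"
    if phi_percent < 20:
        return "good"
    if phi_percent < 25:
        return "very good"
    return "excellent"

# The measured value reported for the Penosogan Formation sandstone
print(classify_porosity(22))
```

On this assumed scale, the reported 22% porosity falls in the upper classes, which is consistent with the abstract's conclusion that the sandstone is a good-quality reservoir.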
Procedia PDF Downloads 174
1481 Municipal Solid Waste (MSW) Composition and Generation in Nablus City, Palestine
Authors: Issam A. Al-Khatib
Abstract:
In order to achieve a significant reduction in the amount of waste flowing into landfills, it is important to first understand the composition of the municipal solid waste generated. Hence, a detailed analysis of municipal solid waste composition was conducted in Nablus city. The aim is to provide data on the potentially recyclable fractions in the actual waste stream, with a focus on the plastic fraction. Waste-sorting campaigns were therefore conducted on mixed-waste containers from five districts of Nablus city, which vary in terms of infrastructure and average income, in order to obtain representative data on the potential quantity and quality of household plastic waste. The study measured the composition of municipal solid waste collected and transported by Nablus municipality. The analysis was done by categorizing the samples into eight primary fractions: organic and food waste, paper and cardboard, glass, metals, textiles, plastic, a fine fraction (<10 mm), and others. The results reveal that the MSW stream in Nablus city has a significant bio- and organic-waste fraction (about 68% of the total MSW). The second largest fraction is paper and cardboard (13.6%), followed by plastics (10.1%), textiles (3.2%), glass (1.9%), metals (1.8%), the fine fraction (0.5%), and other waste (0.3%). Given this complete and detailed characterization of the MSW collected in Nablus, and taking into account its content of biodegradable organic matter, composting could be a solution for the city: the areas surrounding Nablus have agricultural activities and could be a natural outlet for the compost product. In addition to composting, other waste management options such as energy recovery and recycling could be practiced in the future, further reducing the substantial amounts disposed of at landfills.
Keywords: developing countries, composition, management, recyclable, waste
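As a quick sanity check, the eight reported fractions can be tabulated and ranked. The snippet below uses only the percentages stated in the abstract; the small gap between their sum and 100% reflects rounding in the reported figures:

```python
# Measured MSW composition for Nablus city (percent of total, from the study)
composition = {
    "organic and food waste": 68.0,
    "paper and cardboard": 13.6,
    "plastics": 10.1,
    "textiles": 3.2,
    "glass": 1.9,
    "metals": 1.8,
    "fine fraction (<10 mm)": 0.5,
    "others": 0.3,
}

# Rank the fractions from largest to smallest share
ranked = sorted(composition.items(), key=lambda kv: kv[1], reverse=True)
for name, pct in ranked:
    print(f"{name}: {pct:.1f}%")

total = sum(composition.values())
print(f"total: {total:.1f}%")  # a small rounding gap below 100% is expected
```

Tallying the figures this way makes the abstract's recycling argument concrete: the top three fractions (organic, paper and cardboard, plastics) together account for over 90% of the stream.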
Procedia PDF Downloads 90