Search results for: sequential pattern mining
1085 Framing Mahsa Amini and Iran Protest: A Comparative Analysis of Tehran Times and the Wall Street Journal
Authors: Nimmy Maria Joseph, Muhammed Hafiludheen
Abstract:
On September 16, 2022, a 22-year-old Iranian woman, Mahsa Amini, died in Tehran after she was arrested by the ‘Morality police’ for allegedly not wearing a hijab in accordance with the standards laid down by the Iranian government. Suspicions arose because the incident happened while Mahsa Amini was in the custody of the Iranian police. Many people of Iran alleged that she had been severely beaten by the police, which led to her death. This initiated an array of women-led protests in Iran, igniting massive uproars across the country. The Law Enforcement Command of Iran reported that she collapsed due to a heart attack and not due to police brutality. Nevertheless, Iran faced a series of conflicts between the Government of Iran and the civilians, especially women. The research paper presents a framing analysis of online news stories on Mahsa Amini’s death and the resultant protest in Iran. The researcher analysed the online news stories of two popular newspapers, Tehran Times (Iran) and The Wall Street Journal (USA). The focus of the study is to carry out a comparative analysis of the frames used in the news stories and to find out their agenda-setting pattern. It helps to comprehend how the news stories of popular news organisations try to channelise the perception of their audience on social issues. The researcher analysed the news stories considering their frames, valence, polysemy, rhetorical devices, and technical devices.
Keywords: mahsa amini, iran protest, framing analysis, valence, rhetoric device, tehran times, the wall street journal
Procedia PDF Downloads 99
1084 Bio-Hub Ecosystems: Investment Risk Analysis Using Monte Carlo Techno-Economic Analysis
Authors: Kimberly Samaha
Abstract:
In order to attract new types of investors into the emerging Bio-Economy, new methodologies to analyze investment risk are needed. The Bio-Hub Ecosystem model was developed to address a critical area of concern within the global energy market regarding the use of biomass as a feedstock for power plants. This study looked at repurposing existing biomass-energy plants into Circular Zero-Waste Bio-Hub Ecosystems. The Bio-Hub model first targets a ‘whole-tree’ approach and then examines the circular economics of co-hosting diverse industries (wood processing, aquaculture, agriculture) in the vicinity of the biomass power plant facilities. This study modeled the economics and risk strategies of cradle-to-cradle linkages to incorporate the value-chain effects on capital/operational expenditures and investment risk reductions, using a proprietary techno-economic model that incorporates investment risk scenarios utilizing the Monte Carlo methodology. The study calculated the sequential increases in profitability for each additional co-host on an operating forestry-based biomass energy plant in West Enfield, Maine. Phase I starts with the baseline of forestry biomass to electricity only and was built up in stages to include co-hosts of a greenhouse and a land-based shrimp farm. Phase I incorporates CO2 and heat waste streams from the operating power plant in an analysis of lowering and stabilizing the operating costs of the agriculture and aquaculture co-hosts. The Phase II analysis incorporated a jet-fuel biorefinery and its secondary slip-stream of biochar, which would be developed into two additional bio-products: 1) a soil amendment compost for agriculture and 2) a biochar effluent filter for the aquaculture. The second part of the study applied the Monte Carlo risk methodology to illustrate how co-location derisks investment in an integrated Bio-Hub versus individual investments in stand-alone projects of energy, agriculture or aquaculture. The analyzed scenarios compared reductions in both capital and operating expenditures, which stabilize profits and reduce the investment risk associated with projects in energy, agriculture, and aquaculture. The major findings of this techno-economic modeling using the Monte Carlo technique resulted in the masterplan for the first Bio-Hub to be built in West Enfield, Maine. In 2018, the site was designated as an economic opportunity zone as part of a Federal Program, which allows for capital gains tax benefits for investments on the site. Bioenergy facilities are currently at a critical juncture where they have an opportunity to be repurposed into efficient, profitable and socially responsible investments, or be idled and scrapped. The Bio-Hub Ecosystems techno-economic analysis model is a critical model to expedite new standards for investments in circular zero-waste projects. Profitable projects will expedite adoption and advance the critical transition from the current ‘take-make-dispose’ paradigm inherent in the energy, forestry and food industries to a more sustainable Bio-Economy paradigm that supports local and rural communities.
Keywords: bio-economy, investment risk, circular design, economic modelling
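As a rough illustration of the kind of Monte Carlo investment-risk analysis described above, the sketch below simulates the net present value of a co-hosted facility under uncertain capital cost, operating cost and revenue. All distributions, parameter values and the discount rate are illustrative assumptions, not figures from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000          # number of Monte Carlo draws
YEARS = 20          # project lifetime (assumed)
DISCOUNT = 0.08     # discount rate (assumed)

# Illustrative uncertain inputs (all values are placeholders, in M$).
capex = rng.triangular(40, 50, 65, size=N)          # capital expenditure
opex = rng.normal(6.0, 0.8, size=(N, YEARS))        # annual operating cost
revenue = rng.normal(12.0, 2.0, size=(N, YEARS))    # annual revenue with co-hosts

# Discounted cash flows and net present value for every draw.
discount_factors = 1.0 / (1.0 + DISCOUNT) ** np.arange(1, YEARS + 1)
npv = -capex + ((revenue - opex) * discount_factors).sum(axis=1)

print(f"mean NPV: {npv.mean():.1f} M$")
print(f"5th-95th percentile: {np.percentile(npv, [5, 95])}")
print(f"probability of loss: {(npv < 0).mean():.1%}")
```

Comparing such distributions for the stand-alone plant versus the co-hosted configuration is how co-location de-risking would show up: a narrower NPV spread and a lower probability of loss.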
Procedia PDF Downloads 101
1083 Image Recognition Performance Benchmarking for Edge Computing Using Small Visual Processing Unit
Authors: Kasidis Chomrat, Nopasit Chakpitak, Anukul Tamprasirt, Annop Thananchana
Abstract:
Internet of Things (IoT) devices and edge computing have become among the most prominent and widely discussed innovations, with the potential to improve and disrupt traditional business and industry alike. New challenges such as the COVID-19 pandemic have posed dangers to workforces and business processes, while the drastically changed business landscape left in the pandemic's aftermath is compounded by the looming threats of a global energy crisis, global warming, and heated global politics that risk becoming a new Cold War. Emerging technologies such as edge computing and specially designed visual processing units therefore present great opportunities for business. The literature reviewed examines how the Internet of Things and this disruptive wave will affect business: how these new events affect current business, how businesses will need to adapt to change in the market and the world, and how benchmark testing of newer consumer-market devices, such as IoT devices equipped with edge computing hardware, can increase efficiency and reduce the risks posed by current and looming crises. Throughout the paper, we explain the technologies that lead to the present situation and why these technologies will be innovations that change traditional practice, through brief introductions to cloud computing, edge computing, and the Internet of Things, and how they will lead into the future.
Keywords: internet of things, edge computing, machine learning, pattern recognition, image classification
Procedia PDF Downloads 155
1082 Description of Anthracotheriidae Remains from the Middle and Upper Siwaliks of Punjab, Pakistan
Authors: Abdul M. Khan, Ayesha Iqbal
Abstract:
In this paper, new dental remains of Merycopotamus (Anthracotheriidae) are described. The specimens were collected during fieldwork by the authors from the well-dated fossiliferous locality 'Hasnot', belonging to the Dhok Pathan Formation, and from 'Tatrot' village, belonging to the Tatrot Formation of the Potwar Plateau, Pakistan. The stratigraphic age of the Neogene deposits around Hasnot is 7 - 5 Ma, whereas the age of the Tatrot Formation is 3.4 - 2.6 Ma. The newly discovered material, when compared with previous records of the genus Merycopotamus from the Siwaliks, led us to identify all three reported species of this genus from the Siwaliks of Pakistan. As the sample comprises only dental remains, the identification of the specimens is based solely upon morpho-metric analysis. The occlusal pattern of the upper molar in Merycopotamus dissimilis differs from Merycopotamus medioximus and Merycopotamus nanus in having a mesostyle fully divided, forming two prominent cusps, while the mesostyle in M. medioximus is partly divided and small lateral crests are present on the mesostyle. A continuous loop-like mesostyle is present in Merycopotamus nanus. The entoconid fold is present in Merycopotamus dissimilis on the lower molars, whereas it is absent in Merycopotamus medioximus and Merycopotamus nanus. The hypoconulid in M. dissimilis is relatively simple, but a loop-like hypoconulid is present in M. medioximus and M. nanus. The results of the present study are in line with the previous records of the genus Merycopotamus, with M. nanus, M. medioximus and M. dissimilis in the Late Miocene – Early Pliocene Dhok Pathan Formation, and M. dissimilis in the Late Pliocene Tatrot sediments of Pakistan.
Keywords: Dhok Pathan, late miocene, merycopotamus, pliocene, Tatrot
Procedia PDF Downloads 242
1081 Transformation of Positron Emission Tomography Raw Data into Images for Classification Using Convolutional Neural Network
Authors: Paweł Konieczka, Lech Raczyński, Wojciech Wiślicki, Oleksandr Fedoruk, Konrad Klimaszewski, Przemysław Kopka, Wojciech Krzemień, Roman Shopa, Jakub Baran, Aurélien Coussat, Neha Chug, Catalina Curceanu, Eryk Czerwiński, Meysam Dadgar, Kamil Dulski, Aleksander Gajos, Beatrix C. Hiesmayr, Krzysztof Kacprzak, łukasz Kapłon, Grzegorz Korcyl, Tomasz Kozik, Deepak Kumar, Szymon Niedźwiecki, Dominik Panek, Szymon Parzych, Elena Pérez Del Río, Sushil Sharma, Shivani Shivani, Magdalena Skurzok, Ewa łucja Stępień, Faranak Tayefi, Paweł Moskal
Abstract:
This paper develops the transformation of non-image data into 2-dimensional matrices as a preparation stage for classification based on convolutional neural networks (CNNs). In positron emission tomography (PET) studies, a CNN may be applied directly to the reconstructed distribution of radioactive tracers injected into the patient's body, as a pattern recognition tool. Nonetheless, much PET data still exists in non-image format, and this fact opens the question of whether it can be used for training CNNs. The main focus of this contribution is the problem of processing vectors with a small number of features in comparison to the number of pixels in the output images. The proposed methodology was applied to the classification of PET coincidence events.
Keywords: convolutional neural network, kernel principal component analysis, medical imaging, positron emission tomography
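The abstract does not spell out the exact vector-to-matrix mapping, so the sketch below only illustrates one simple way to turn a short feature vector into a 2-D matrix (here, the outer product of the standardized vector with itself) and feed it to a small CNN; the dimensions, network layout and synthetic data are assumptions, not the authors' pipeline.

```python
import numpy as np
import torch
import torch.nn as nn

def vector_to_matrix(v):
    """Map a short feature vector to a 2-D matrix via its outer product."""
    v = (v - v.mean()) / (v.std() + 1e-8)
    return np.outer(v, v)

# Hypothetical data: 256 coincidence events, 16 features each, binary labels.
rng = np.random.default_rng(0)
vectors = rng.normal(size=(256, 16))
labels = rng.integers(0, 2, size=256)
images = np.stack([vector_to_matrix(v) for v in vectors])[:, None]  # (N, 1, 16, 16)

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                       # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(8 * 8 * 8, 2),
)

x = torch.tensor(images, dtype=torch.float32)
y = torch.tensor(labels, dtype=torch.long)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                         # short illustrative training loop
    optimiser.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimiser.step()
print("final loss:", float(loss))
```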
Procedia PDF Downloads 143
1080 Cultural Heritage, Manga, and Film: Japanese Tourism at Petit Trianon, Versailles
Authors: Denise C. I. Maior-Barron
Abstract:
This conference presentation proposes to discuss the Japanese tourist perception of Marie Antoinette at the heritage site which represents the home par excellence of the last Queen of France: Petit Trianon, Versailles. The underpinning analysis has the two-fold aim of firstly identifying the elements that contributed to the said perception and secondly of placing this in the wider context of tabi (travel) culture. The contribution of the presentation lies in its relevance to the analysis of postmodern trends of Japanese travel culture in relation to the consumption of European cultural heritage, through an insight into the Japanese contemporary perception of heritage sites and their associated historical figures subject to controversy. Based upon the author's doctoral field research at Petit Trianon - a survey conducted in situ between 2010 and 2012, applying the questionnaire method to a total of 307 respondents, of which 53 were Japanese nationals - the media sources that were revealed to have had a direct influence on these nationals' perception of Marie Antoinette were Riyoko Ikeda's shōjo manga La Rose de Versailles (1972) and Sofia Coppola's film Marie-Antoinette (2006). The interpretation of the survey results through an assessment of visitor discourse determined the research methodology to be qualitative as opposed to quantitative; thus, what confirmed the empirical hypothesis of the survey was a pattern of perception instead of percentages. Consequently, the interpretation focused on the answers to the questions relating to the image of Marie Antoinette in relation to historical knowledge, cultural background and, last but not least, media influences.
Keywords: cultural heritage, manga, film, tabi
Procedia PDF Downloads 437
1079 A Study of High Viscosity Oil-Gas Slug Flow Using Gamma Densitometer
Authors: Y. Baba, A. Archibong-Eso, H. Yeung
Abstract:
Experimental studies of high viscosity oil-gas flows in horizontal pipelines published in the literature have indicated that hydrodynamic slug flow is the dominant flow pattern observed. Investigations have shown that hydrodynamic slugging brings about high instabilities in pressure that can damage production facilities, thereby making it important to study the high viscous slug flow regime so as to improve the understanding of its flow dynamics. Most slug flow models used in the petroleum industry for the design of pipelines, together with their closure relationships, were formulated based on observations of low viscosity liquid-gas flows. New experimental investigations and data are therefore required to validate these models. In cases where these models underperform, improving upon or building new predictive models and correlations will also depend on the new experimental dataset and further understanding of the flow dynamics in high viscous oil-gas flows. In this study, conducted at the Flow Laboratory, Oil and Gas Engineering Centre of Cranfield University, slug flow variables such as pressure gradient, mean liquid holdup, frequency and slug length for oil viscosities ranging from 1.0 – 5.5 Pa.s are experimentally investigated and analysed. The study was carried out in a 0.076 m ID pipe; two fast-sampling gamma densitometers and pressure transducers (differential and point) were used to obtain experimental measurements. Comparison of the measured slug flow parameters to the existing slug flow prediction models available in the literature showed disagreement with the high viscosity experimental data, thus highlighting the importance of building new predictive models and correlations.
Keywords: gamma densitometer, mean liquid holdup, pressure gradient, slug frequency and slug length
Procedia PDF Downloads 329
1078 Numerical Analysis of Core-Annular Blood Flow in Microvessels at Low Reynolds Numbers
Authors: L. Achab, F. Iachachene
Abstract:
In microvessels, red blood cells (RBCs) exhibit a tendency to migrate towards the vessel center, establishing a core-annular flow pattern. The core region, marked by a high concentration of RBCs, is governed by a significantly non-Newtonian viscosity. Conversely, the annular layer, composed of cell-free plasma, is characterized by a Newtonian low viscosity. This property enables the plasma layer to act as a lubricant for the vessel walls, efficiently reducing resistance to the movement of blood cells. In this study, we investigate the factors influencing blood flow in microvessels and the thickness of the annular plasma layer using a non-miscible fluids approach in a 2D axisymmetric geometry. The governing equations of an incompressible unsteady flow are solved numerically through the Volume of Fluid (VOF) method to track the interface between the two immiscible fluids. To model blood viscosity in the core region, we adopt the Quemada constitutive law, which accurately captures the shear-thinning blood rheology over a wide range of shear rates. Our results are then compared to an established theoretical approach under identical flow conditions, particularly concerning the radial velocity profile and the thickness of the annular plasma layer. The simulation findings for low Reynolds numbers demonstrate a notable agreement with the theoretical solution, emphasizing the pivotal role of blood's rheological properties in the core region in determining the thickness of the annular plasma layer.
Keywords: core-annular flows, microvessels, Quemada model, plasma layer thickness, volume of fluid method
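For reference, one common statement of the Quemada constitutive law mentioned above gives the apparent viscosity of the RBC-rich core as a function of hematocrit and shear rate; the notation below is the conventional one and is not taken from the paper itself.

\[
\mu = \mu_F \left(1 - \tfrac{1}{2}\,k\,\phi\right)^{-2},
\qquad
k = \frac{k_0 + k_\infty \sqrt{\dot{\gamma}/\dot{\gamma}_c}}{1 + \sqrt{\dot{\gamma}/\dot{\gamma}_c}},
\]

where \(\mu_F\) is the plasma (suspending fluid) viscosity, \(\phi\) the hematocrit, \(\dot{\gamma}\) the shear rate, \(\dot{\gamma}_c\) a critical shear rate, and \(k_0\), \(k_\infty\) the intrinsic viscosities in the zero- and infinite-shear limits.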
Procedia PDF Downloads 56
1077 Working Memory Growth from Kindergarten to First Grade: Considering Impulsivity, Parental Discipline Methods and Socioeconomic Status
Authors: Ayse Cobanoglu
Abstract:
Working memory can be defined as a workspace that holds and regulates active information in the mind. This study investigates individual changes in children's working memory from kindergarten to first grade. The main purpose of the study is to determine whether parental discipline methods and child impulsive/overactive behaviors affect children's working memory initial status and growth rate, controlling for gender, minority status, and socioeconomic status (SES). A linear growth curve model with the first four waves of the Early Childhood Longitudinal Study-Kindergarten Cohort of 2011 (ECLS-K:2011) is performed to analyze the individual growth of children's working memory longitudinally (N=3915). Results revealed that there is significant variation among students' initial status in the kindergarten fall semester as well as in the growth rate during the first two years of schooling. While minority status, SES, and children's overactive/impulsive behaviors influenced children's initial status, only SES and minority status were significantly associated with the growth rate of working memory. Parental discipline methods, such as giving a warning and ignoring the child's negative behavior, are also negatively associated with initial working memory scores. Following that, students' working memory growth rate was examined, and students with lower SES as well as minorities showed a faster growth pattern during the first two years of schooling. However, the findings on parental disciplinary methods and working memory growth rates were mixed. It can be concluded that schooling helps low-SES minority students to develop their working memory.
Keywords: growth curve modeling, impulsive/overactive behaviors, parenting, working memory
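A minimal sketch of the kind of linear growth curve (mixed-effects) model described above, written with statsmodels; the file name, column names and predictor set are assumptions for illustration, not the study's actual ECLS-K:2011 variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per child per assessment wave.
# Assumed columns: child_id, wave (0-3), working_memory, ses, minority,
# impulsive, warn, ignore_behavior, gender.
data = pd.read_csv("eclsk_working_memory_long.csv")

# Fixed effects predict the intercept (initial status); interactions with
# `wave` predict the growth rate. A random intercept and random slope for
# `wave` give each child their own starting point and rate of change.
model = smf.mixedlm(
    "working_memory ~ wave + ses + minority + impulsive + warn + ignore_behavior"
    " + gender + wave:ses + wave:minority",
    data,
    groups=data["child_id"],
    re_formula="~wave",
)
result = model.fit()
print(result.summary())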
Procedia PDF Downloads 135
1076 Effectiveness of Micro-Credit Scheme of Community Women and Development (COWAD) in Enhancing Living Standards of Women in Oyo State, Nigeria
Authors: Olufunmilayo Folaranmi
Abstract:
The study aimed at assessing the effectiveness of the micro-credit scheme of Community Women and Development (COWAD) in enhancing the living standard of women in selected local government areas of Oyo State. A survey research design was adopted for the study. A sample of 250 respondents was purposively selected for the study, while a structured questionnaire tagged the Effectiveness of Micro-Credit Scheme of Community Women and Development and Living Standards of Women Questionnaire (EMCSCWDQ) was designed to collect data for the study. Data collected were analyzed using frequency distribution tables, percentages and chi-square statistics. Three hypotheses were tested for the study at the 0.05 level of significance. Findings from the study indicated that the loans provided by COWAD for women in selected local government areas towards improving their economic conditions have improved the living conditions of the women, promoted their general welfare, and reduced their poverty level. Findings also showed that some beneficiaries were not able to pay back, thereby reducing the effectiveness of the scheme for future beneficiaries. Based on the findings, it was recommended that the providers of various micro-credit schemes in the state should design a convenient pattern of repayment which provides enough time for the beneficiaries of the loan to sell their goods or work towards proper and timely repayment. Also, the problem of collateral should be reviewed, as the majority of the women involved are poor. Other recommendations include the replication of COWAD facilities in other NGOs as well as the sustainability of the facility.
Keywords: micro-credit scheme, welfare, women, development, poverty
Procedia PDF Downloads 163
1075 Impact of Teacher’s Behavior in Class Room on Socialization and Mental Health of School Children: A Student’s Perspective
Authors: Umaiza Bashir, Ushna Farukh
Abstract:
The present study examined the perspective of school students regarding teachers' behavioral patterns during classroom teaching and their influence on the students' socialization, particularly the formation of peer relationships, and on the development of emotional and behavioral problems in school children. To study these dimensions of the teacher-student classroom relationship, 210 school children (105 girls and 105 boys) within the age range of 14 to 18 years were taken from government and private schools. A cross-sectional research design was used in which stratified random sampling was done. The Teacher-Student Interaction Scale was used to assess the teacher-student relationship in the classroom, which had two factors, positive and negative interaction. The Peer Relationship Scale was administered to investigate the socialization of students, and the School Children Problem Scale was also given to the participants to explore their emotional and behavioral issues. The analysis of Pearson correlation showed that there is a significant positive relationship between negative teacher-student interaction and students' emotional-behavioral as well as social problems. A t-test analysis revealed that boys perceived more positive interaction with teachers than girls (p < 0.01). Girls showed more emotional-behavioral problems than boys (p < 0.001). Linear regression explained that age, gender, negative teacher interaction with students and victimization in social gatherings predict mental health problems in school children. This study suggests and highlights the need for school counselors for the better mental health of students and teachers.
Keywords: teacher-student interaction, school psychology, student's emotional behavioral problems
Procedia PDF Downloads 168
1074 Comparison of Visio-spatial Intelligence Between Amateur Rugby and Netball Players Using a Hand-Eye Coordination Specific Visual Test Battery
Authors: Lourens Millard, Gerrit Jan Breukelman, Nonkululeko Mathe
Abstract:
Aim: Research investigating the differences in visio-spatial skills (VSS) between athletes and non-athletes, as well as variations across sports, has presented conflicting findings. Therefore, the objective of this study was to determine whether there exist significant differences in visio-spatial intelligence skills between rugby players and netball players, and whether such disparities are present when comparing both groups to non-athletes. Methods: Participants underwent an optometric assessment, followed by an evaluation of VSS using six established tests: the Hart Near Far Rock, saccadic eye movement, evasion, accumulator, flash memory, and ball wall toss tests. Results: The results revealed that rugby players significantly outperformed netball players in speed of recognition, peripheral awareness, and hand-eye coordination (p=.000). Moreover, both rugby players and netball players performed significantly better than non-athletes in five of the six tests (p=.000), with the exception being the visual memory test (p=.809). Conclusion: This discrepancy in performance suggests that certain VSS are superior in athletes compared to non-athletes, highlighting potential implications for theories of vision, test selection, and the development of sport-specific VSS testing batteries. Furthermore, the use of a hand-eye coordination-specific VSS test battery effectively differentiated between different sports. However, this pattern was not consistent across all VSS tests, indicating that further research should explore the training methods employed by both sports, as these factors may contribute to the observed differences.
Keywords: visio-spatial intelligence (VSI), rugby vision, netball vision, visual skills, sport vision.
Procedia PDF Downloads 50
1073 Integrated Approach of Knowledge Economy and Society in the Perspective of Higher Education Institutions
Authors: S. K. Ashiquer Rahman
Abstract:
Innovation, sustainability, and higher education are vital issues of the knowledge economy and society. In fact, by concentrating on these issues, educators and researchers have sought to prepare learners to become productive citizens of the knowledge economy and society, and many initiatives have been launched worldwide. The concept of a knowledge economy requires simultaneous and balanced progress in three dimensions (innovation, education and sustainability) which are totally interdependent and correlated. The paper discusses the importance of an integrated approach to the knowledge economy and society from the perspective of higher education institutions. It remarks on the advent of a knowledge-based economy and society and the need for the combination of innovation, sustainability, and education. This paper introduces nine (9) important issues or challenges of higher education institutions that are emphasized, cross-linked with each other, and combined in a new education system that can form a new generation for the competitive world as well as manage the knowledge-based economy and societal system. Moreover, the education system must be the foundation for building the necessary knowledge-based economy and society, which must manage the innovation process through a more sustainable world. In this viewpoint, innovation, sustainability and higher education are becoming more and more central in our economy and society, and they are directly associated with the possibility of global wealth distribution to the economy and society. The objective of this research is to demonstrate the knowledge-based economy and social paradigm in order to create the opportunity for higher education institutions' development. The paper uses collective action methodologies to examine the mechanisms and strategies used by higher education institutions' authorities to accommodate an integrated pattern connecting the behaviors of the knowledge economy and society. The paper concludes that the combination of innovation, sustainability and education is a very helpful approach to building a knowledge-based economy and society and to addressing higher education institutions' challenges.
Keywords: education, innovation, knowledge economy, sustainability
Procedia PDF Downloads 105
1072 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach
Authors: James Ladzekpo
Abstract:
Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We have implemented recursive feature elimination with cross-validation using the support vector machine (SVM) approach for refined feature selection. Building on this, we developed six machine learning models, including logistic regression, k-Nearest Neighbors (k-NN), Naive Bayes, Random Forest, Gradient Boosting, and Multilayer Perceptron Neural Network, to evaluate their performance. Findings: The Gradient Boosting Classifier excelled, achieving a median recall of 92.17% and outstanding metrics such as area under the receiver operating characteristics curve (AUC) with a median of 68%, alongside median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the Gradient Boosting Classifier and Multilayer Perceptron Neural Network, which demonstrated potential in diabetes outcome prediction. We recommend future investigations to incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.
Keywords: diabetes, machine learning, prediction, biomarkers
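A minimal sketch of the feature-selection and evaluation pipeline described above, using scikit-learn; the synthetic data, fold counts and scoring choice are illustrative assumptions, not the study's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFECV
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Hypothetical inputs: X holds epigenetic features (e.g., methylation levels),
# y is a binary diabetes outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))
y = rng.integers(0, 2, size=200)

# Recursive feature elimination with cross-validation; a linear SVM is used
# so that per-feature coefficients are available for ranking.
selector = RFECV(
    estimator=SVC(kernel="linear"),
    step=1,
    cv=StratifiedKFold(5),
    scoring="recall",
)
X_selected = selector.fit_transform(X, y)
print("features kept:", selector.n_features_)

# Evaluate a gradient boosting classifier on the selected features.
gbc = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(gbc, X_selected, y, cv=StratifiedKFold(5), scoring="recall")
print("median recall:", np.median(scores))
```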
Procedia PDF Downloads 55
1071 The Impact of Social Media to Indonesian Muslim Fashion Trend
Authors: Siti Dewi Aisyah
Abstract:
Islamic Muslim fashion has become a trend in Indonesia, and it is said that social media has had a huge impact on its development. Indonesia ranks among the countries with the most social media users. That is why people who wear the hijab also use social media for different purposes, one of which is to introduce hijab fashion. Consequently, they are becoming famous on social media. Social media has become a tool for communicating their beliefs as Muslims as well as for personal branding as a good yet fashionable hijabi. This research examines social media such as blogs and Instagram: how they trigger consumer culture among hijabis, the actual meaning behind the posts in their feeds, how they produce good photographs for their social media, and for what reasons they use social media. The research was conducted through in-depth interviews with several bloggers who created the Hijabers Community, which has made a new trend in Muslim fashion, and with Instagrammers whose feeds serve as style inspiration. The methodology used for this research is the analysis of blogs and Instagram through visual analysis, which also examines the semiotic meaning behind the photographs posted by people on social media, especially those concerning the Islamic modest fashion trend. The theoretical framework for this research is the study of social media examined through visual analysis. The Muslim fashion trend was led by several bloggers and continued on Instagram, which then created a consumption pattern. From colourful colors, pastel colors and monochrome colors to neutral coffee-tone colors, the trend was influenced by Muslim fashion designers who have also become digital influencers on social media. It was concluded that social media has been a powerful and effective promotional tool in changing the Indonesian Muslim fashion trend.
Keywords: blog, instagram, consumer culture, muslim fashion, social media, visual analysis
Procedia PDF Downloads 366
1070 Application of Aquatic Plants for the Remediation of Organochlorine Pesticides from Keenjhar Lake
Authors: Soomal Hamza, Uzma Imran
Abstract:
Organochlorine pesticides bio-accumulate in the fat of fish, birds, and animals, through which they enter the human food cycle. Due to their persistence and stability in the environment, many health impacts are associated with them, most of which are carcinogenic in nature. In this study, the levels of organochlorine pesticides were detected in Keenjhar Lake and remediated using a rhizoremediation technique. Fourteen OC pesticides, namely Aldrin, Dieldrin, Heptachlor, Heptachlor epoxide, Endrin, Endosulfan I and II, DDT, DDE, DDD, and Alpha-, Beta- and Gamma-BHC, and two plants, namely Water Hyacinth and Salvinia molesta, were used in the system in a pot experiment which ran for 11 days. A consortium was inoculated in both plants to increase their efficiency. Water samples were processed using liquid-liquid extraction. Sediment and root samples were processed using the Soxhlet method followed by clean-up and gas chromatography. Delta-BHC was predominantly found in all samples, with mean concentrations (ppb) and standard deviations of 0.02 ± 0.14, 0.52 ± 0.68 and 0.61 ± 0.06 in the water, sediment and root samples, respectively. The highest levels were of Endosulfan II in the samples of water, sediments and roots. Water Hyacinth proved to be a better bioaccumulator as compared to Salvinia molesta. The pattern of compound reduction rates by the end of the experiment was Delta-BHC > DDD > Alpha-BHC > DDT > Heptachlor > H. Epoxide > Dieldrin > Aldrin > Endrin > DDE > Endosulfan I > Endosulfan II. No significant difference was observed between the pots with the consortium and the pots without the consortium addition. Phytoremediation is a promising technique, but more studies are required to assess the bioremediation potential of different aquatic plants and the plant-endophyte relationship.
Keywords: aquatic plant, bio remediation, gas chromatography, liquid liquid extraction
Procedia PDF Downloads 149
1069 Two-Dimensional Observation of Oil Displacement by Water in a Petroleum Reservoir through Numerical Simulation and Application to a Petroleum Reservoir
Authors: Ahmad Fahim Nasiry, Shigeo Honma
Abstract:
We examine two-dimensional oil displacement by water in a petroleum reservoir. The pore fluid is immiscible, and the porous medium is homogeneous and isotropic in the horizontal direction. Buckley-Leverett theory and a combination of the Laplacian and Darcy's law are used to study the fluid flow through porous media, and the Laplacian that defines the dispersion and diffusion of fluid in the sand using heavy oil is discussed. The reservoir is homogeneous in the horizontal direction, as expressed by the partial differential equation. The two main factors observed are the water saturation and pressure distribution in the reservoir, and they are evaluated for predicting oil recovery in two dimensions by a physical and mathematical simulation model. We review the numerical simulation that solves the difficult partial differential reservoir equations. Based on the numerical simulations, the saturation and pressure equations are calculated by the iterative alternating direction implicit method and the iterative alternating direction explicit method, respectively, according to the finite difference approximation. However, to better understand the displacement of oil by water and the amount of water dispersion in the reservoir, an interpolated contour line of the water distribution of the five-spot pattern, which provides an approximate solution that agrees well with the experimental results, is also presented. Finally, a computer program is developed to calculate the equations for pressure and water saturation and to draw the pressure contour line and water distribution contour line for the reservoir.
Keywords: numerical simulation, immiscible, finite difference, IADI, IDE, waterflooding
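For context, the one-dimensional form of the Buckley-Leverett saturation equation referenced above (neglecting gravity and capillary pressure) can be written as follows; the notation is the conventional one rather than that of the paper.

\[
\phi \frac{\partial S_w}{\partial t} + u_t \frac{\partial f_w}{\partial x} = 0,
\qquad
f_w = \frac{k_{rw}/\mu_w}{k_{rw}/\mu_w + k_{ro}/\mu_o},
\]

where \(\phi\) is porosity, \(S_w\) the water saturation, \(u_t\) the total Darcy velocity, \(f_w\) the water fractional flow, \(k_{rw}\), \(k_{ro}\) the relative permeabilities, and \(\mu_w\), \(\mu_o\) the water and oil viscosities.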
Procedia PDF Downloads 331
1068 Assessment of Urban Heat Island through Remote Sensing in Nagpur Urban Area Using Landsat 7 ETM+ Satellite Images
Authors: Meenal Surawar, Rajashree Kotharkar
Abstract:
The Urban Heat Island (UHI) effect is found to be more pronounced as a prominent urban environmental concern in developing cities. To study the UHI effect in the Indian context, the Nagpur urban area has been explored in this paper using Landsat 7 ETM+ satellite images through Remote Sensing and GIS techniques. This paper intends to study the effect of the LU/LC pattern on daytime Land Surface Temperature (LST) variation, contributing to UHI formation within the Nagpur urban area. Supervised LU/LC area classification was carried out to study urban change detection using ENVI 5. Change detection was studied by computing the Normalized Difference Vegetation Index (NDVI) to understand the proportion of vegetative cover with respect to the built-up ratio. The spectral radiance from the thermal band of the satellite images was processed to calibrate LST. Specific representative areas, selected on the basis of urban built-up and vegetation classification, were used for observation of point LST. Across the entire Nagpur urban area, as building density increases with a decrease in vegetation cover, LST increases, thereby causing the UHI effect. UHI intensity gradually increased by 0.7°C from 2000 to 2006; however, a drastic increase, with a difference of 1.8°C, was observed during the period 2006 to 2013. Within the Nagpur urban area, the UHI effect was formed due to the increase in building density and the decrease in vegetative cover.
Keywords: land use/land cover, land surface temperature, remote sensing, urban heat island
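A minimal sketch of the NDVI and thermal-band calibration steps mentioned above, assuming the Landsat 7 ETM+ bands have already been read into arrays; the gain/bias values shown are placeholders to be replaced by the scene metadata, and an emissivity correction would still be needed to go from brightness temperature to LST.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from ETM+ band 3 (red) and band 4 (NIR)."""
    return (nir - red) / (nir + red + 1e-10)

def brightness_temperature(dn_band6, gain, bias, k1=666.09, k2=1282.71):
    """At-sensor brightness temperature (K) from the ETM+ thermal band 6.

    gain and bias are the radiance rescaling factors from the scene metadata;
    k1 (W m^-2 sr^-1 um^-1) and k2 (K) are the standard ETM+ thermal constants.
    """
    radiance = gain * dn_band6 + bias
    return k2 / np.log(k1 / radiance + 1.0)

# Illustrative use with small synthetic arrays (placeholder values).
red = np.array([[0.12, 0.30]])
nir = np.array([[0.45, 0.32]])
dn6 = np.array([[140.0, 160.0]])
print("NDVI:", ndvi(red, nir))
print("T_b (deg C):", brightness_temperature(dn6, gain=0.067, bias=-0.07) - 273.15)
```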
Procedia PDF Downloads 282
1067 Testing the Possibility of Healthy Individuals to Mimic Fatigability in Multiple Sclerotic Patients
Authors: Emmanuel Abban Sagoe
Abstract:
Proper functioning of the central nervous system ensures that we are able to accomplish just about everything we do as human beings, such as walking, breathing, running, etc. Myelinated neurons throughout the body, which transmit signals at high speeds, facilitate these actions. In the case of MS, the body's immune system attacks the myelin sheaths surrounding the neurons and over time destroys them. Depending upon where the destruction occurs in the brain, symptoms can vary from person to person. Fatigue is, however, the biggest problem encountered by an MS sufferer. It is very often described as the bedrock upon which other symptoms of MS, such as challenges in balance and coordination, dizziness, slurred speech, etc., may occur. Classifying and distinguishing between perception-based fatigue and performance-based fatigability is key to identifying appropriate treatment options for patients. Objective methods for assessing motor fatigability are also key to providing clinicians and physiotherapists with critical information on the progression of the symptom. This study tested whether the Fatigue Index Kliniken Schmieder assessment tool can detect fatigability, as seen in MS patients, when healthy subjects with no known history of neurological pathology mimic abnormal gaits. Thirty-three healthy adults between the ages of 18 and 58 years volunteered as subjects for the study. The subjects, strapped with RehaWatch sensors on both feet, completed 6 gait protocols of normal and mimicked fatigable gaits for 60 seconds per gait at a treadmill speed of 1.38889 m/s, following the clear instructions given.
Keywords: attractor attributes, fatigue index Kliniken Schmieder, gait variability, movement pattern
Procedia PDF Downloads 123
1066 Effect of Genuine Missing Data Imputation on Prediction of Urinary Incontinence
Authors: Suzan Arslanturk, Mohammad-Reza Siadat, Theophilus Ogunyemi, Ananias Diokno
Abstract:
Missing data is a common challenge in statistical analyses of most clinical survey datasets. A variety of methods have been developed to enable the analysis of survey data in the presence of missing values, of which imputation is the most commonly used. However, in order to minimize the bias introduced by imputation, one must choose the right imputation technique and apply it to the correct type of missing data. In this paper, we have identified different types of missing values: missing data due to skip pattern (SPMD), undetermined missing data (UMD), and genuine missing data (GMD), and applied rough set imputation to only the GMD portion of the missing data. We have used rough set imputation to evaluate the effect of such imputation on prediction by generating several simulation datasets based on an existing epidemiological dataset (MESA). To measure how well each dataset lends itself to the prediction model (logistic regression), we have used p-values from the Wald test. To evaluate the accuracy of the prediction, we have considered the width of the 95% confidence interval for the probability of incontinence. Both imputed and non-imputed simulation datasets were fit to the prediction model, and both turned out to be significant (p-value < 0.05). However, the Wald score shows a better fit for the imputed compared to the non-imputed datasets (28.7 vs. 23.4). The average confidence interval width was decreased by 10.4% when the imputed dataset was used, meaning higher precision. The results show that using the rough set method for missing data imputation on GMD improves the predictive capability of the logistic regression. Further studies are required to generalize this conclusion to other clinical survey datasets.
Keywords: rough set, imputation, clinical survey data simulation, genuine missing data, predictive index
Procedia PDF Downloads 168
1065 Molecular Characterization and Identification of C-Type Lectin in Red Palm Weevil, Rhynchophorus ferrugineus Oliver
Authors: Hafiza Javaria Ashraf, Xinghong Wang, Zhanghong Shi, Youming Hou
Abstract:
Insects' innate immunity depends on a variety of defense responses for the recognition of invading pathogens. Pathogen recognition involves particular proteins known as pattern recognition receptors (PRRs). These PRRs interact with pathogen-associated molecular patterns (PAMPs) present on the surface of pathogens to distinguish between self and non-self. C-type lectins (CTLs) belong to a superfamily of PRRs involved in insect immunity and defense mechanisms. Rhynchophorus ferrugineus Olivier is a devastating pest of palm cultivations in China. Although studies on the R. ferrugineus immune mechanism and host defense have been conducted, the role of CTLs in the immune responses of R. ferrugineus remains elusive. Here, we report RfCTL, a secreted protein containing a single CRD domain. The open reading frame (ORF) of the CTL is 226 bp, which encodes a putative protein of 168 amino acids. Transcript expression analysis revealed that RfCTL is highly expressed in immune-related tissues, i.e., the hemolymph and fat body. The abundance of RfCTL in the gut and fat body dramatically increased upon Staphylococcus aureus and Escherichia coli bacterial challenges, suggesting a role in defense against gram-positive and gram-negative bacterial infection. Taken together, we infer that RfCTL might be involved in the immune defense of R. ferrugineus, and we establish a solid foundation for future studies on R. ferrugineus CTL domain proteins for a better understanding of insect immunity.
Keywords: biological invasion, c-type lectin, insect immunity, Rhynchophorus ferrugineus Oliver
Procedia PDF Downloads 157
1064 A Novel Method for Isolation of Kaempferol and Quercetin from Podophyllum Hexandrum Rhizome
Authors: S. B. Bhandare, K. S. Laddha
Abstract:
Podophyllum hexandrum, belonging to the family Berberidaceae, has gained attention in phytochemical and pharmacological research as it shows excellent anticancer activity and has been used in the treatment of skin diseases, sunburns and radioprotection. Chemically, it contains lignans and flavonoids such as kaempferol, quercetin and their glycosides. Objective: To isolate and identify kaempferol and quercetin from Podophyllum rhizome. Method: The powdered rhizome of Podophyllum hexandrum was subjected to Soxhlet extraction with methanol. This methanolic extract was used to obtain podophyllin. Podophyllin was extracted with ethyl acetate, and this extract was then concentrated and subjected to column chromatography to obtain purified kaempferol and quercetin. Result: The isolated kaempferol and quercetin were light yellow and dark yellow in colour, respectively. TLC of the isolated compounds was performed using chloroform:methanol (9:1), which showed single bands on the silica plate at Rf 0.6 and 0.4 for kaempferol and quercetin. UV spectrometric studies showed UV maxima (methanol) at 259, 360 nm and 260, 370 nm, which are identical with standard kaempferol and quercetin, respectively. Both IR spectra exhibited prominent absorption bands for free phenolic OH at 3277 and 3296.2 cm-1 and for conjugated C=O at 1597 and 1659.7 cm-1, respectively. The mass spectra of kaempferol and quercetin showed (M+1) peaks at m/z 287 and 303.09, respectively. 1H NMR analysis of both isolated compounds exhibited the typical four-peak pattern of two doublets at δ 6.86 and δ 8.01, which were assigned to H-3',5' and H-2',6', respectively. The absence of signals below δ 6.81 in the 1H NMR spectrum supported the aromatic nature of the compounds. Kaempferol and quercetin showed 98.1% and 97% purity by HPLC at UV 370 nm. Conclusion: An easy and simple method for the isolation of kaempferol and quercetin was developed, and their structures were confirmed by UV, IR, NMR and mass studies. The method showed good reproducibility, yield and purity.
Keywords: flavonoids, kaempferol, podophyllum rhizome, quercetin
Procedia PDF Downloads 304
1063 Legal Contestation of Non-Legal Norms: The Case of Humanitarian Intervention Norm between 1999 and 2018
Authors: Nazli Ustunes Demirhan
Abstract:
Norms of any nature are subject to pressures of change throughout their lifespans, as they are interpreted and re-interpreted every time they are used rhetorically or practically by international actors. The inevitable contestation of different interpretations may lead to an erosion of the norm, as well as to its strengthening. This paper aims to question the role of formal legality in the change of norm strength, using a norm contestation framework and a multidimensional conceptualization of norm strength. It argues that the role of legality is not necessarily linked to the formal legal characteristics of a norm, but is about the legality of the contestation processes. In order to demonstrate this argument, the paper examines the evolutionary path of the humanitarian intervention norm as a case study. Humanitarian intervention, as a norm with very low formal legal characteristics, has been subject to numerous cycles of contestation, demonstrating a fluctuating pattern of norm strength. With the purpose of examining the existence and role of legality in the selected contestation periods from 1999 to 2017, this paper uses the process tracing method with a detailed document analysis of Security Council documents, including decisions, resolutions, meeting minutes and press releases, as well as individual country statements. Through the empirical analysis, it is demonstrated that the legality of the contestation processes has a positive effect on at least the authoritativeness dimension of norm strength. This study tries to contribute to the developing dialogue between the international relations (IR) and international law (IL) disciplines with its better-tuned understanding of legality. It connects to further questions in the IR/IL nexus relating to the value added of norm legality and the politics of legalization, as well as better international policies for norm reinforcement.
Keywords: humanitarian intervention, legality, norm contestation, norm dynamics, responsibility to protect
Procedia PDF Downloads 151
1062 Assessment of Landfill Pollution Load on Hydroecosystem by Use of Heavy Metal Bioaccumulation Data in Fish
Authors: Gintarė Sauliutė, Gintaras Svecevičius
Abstract:
Landfill leachates contain a number of persistent pollutants, including heavy metals. They have the ability to spread in ecosystems and accumulate in fish, most of which are classified as top consumers of trophic chains. Fish are freely swimming organisms, but, perhaps due to their species-specific ecological and behavioral properties, they often prefer the most suitable biotopes and therefore do not avoid harmful substances or environments. That is why it is necessary to evaluate the dispersion of persistent pollutants in the hydroecosystem using fish tissue metal concentrations. In hydroecosystems of hybrid type (e.g. river-pond-river), the distance from the pollution source could be a perfect indicator of such metal distribution. The studies were carried out in the hybrid-type ecosystem neighboring the Kairiai landfill, which is located 5 km east of Šiauliai City. Fish tissue (gills, liver, and muscle) metal concentration measurements were performed on two types of ecologically different fishes according to their feeding characteristics: benthophagous (Gibel carp, roach) and predatory (Northern pike, perch). A number of mathematical models (linear, non-linear, using log and other transformations) were applied in order to identify the most satisfactory description of the interdependence between fish tissue metal concentration and the distance from the pollution source. However, only one log-multiple regression model revealed the pattern that the distance from the pollution source is closely and positively correlated with metal concentration in all predatory fish tissues studied (gills, liver, and muscle).
Keywords: bioaccumulation in fish, heavy metals, hydroecosystem, landfill leachate, mathematical model
Procedia PDF Downloads 286
1061 Technical and Economic Analysis of Smart Micro-Grid Renewable Energy Systems: An Applicable Case Study
Authors: M. A. Fouad, M. A. Badr, Z. S. Abd El-Rehim, Taher Halawa, Mahmoud Bayoumi, M. M. Ibrahim
Abstract:
Renewable energy-based micro-grids are presently attracting significant consideration. The smart grid system is presently considered a reliable solution for the expected deficiency in the power required from future power systems. The purpose of this study is to determine the optimal component sizes of a micro-grid, investigating technical and economic performance along with the environmental impacts. The micro-grid load is divided between two small factories, and both on-grid and off-grid modes are considered. The micro-grid includes photovoltaic cells, a back-up diesel generator, wind turbines, and a battery bank. The estimated load pattern is 76 kW peak. The system is modeled and simulated with the MATLAB/Simulink tool to identify the technical issues based on the renewable power generation units. To evaluate the system economy, two criteria are used: the net present cost and the cost of generated electricity. The most feasible system components for the selected application are obtained, based on the required parameters, using the HOMER simulation package. The results showed that a Wind/Photovoltaic (W/PV) on-grid system is more economical than a Wind/Photovoltaic/Diesel/Battery (W/PV/D/B) off-grid system, as the cost of generated electricity (COE) is 0.266 $/kWh and 0.316 $/kWh, respectively. Considering the cost of carbon dioxide emissions, the off-grid system becomes competitive with the on-grid system, as the COE is found to be 0.256 $/kWh and 0.266 $/kWh for the on-grid and off-grid systems, respectively.
Keywords: renewable energy sources, micro-grid system, modeling and simulation, on/off grid system, environmental impacts
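For reference, the two economic criteria named above are conventionally defined as follows; this is the standard formulation used by tools such as HOMER, with the usual symbols rather than those of the paper.

\[
\mathrm{NPC} = \frac{C_{\mathrm{ann,tot}}}{\mathrm{CRF}(i,N)},
\qquad
\mathrm{CRF}(i,N) = \frac{i\,(1+i)^{N}}{(1+i)^{N}-1},
\qquad
\mathrm{COE} = \frac{C_{\mathrm{ann,tot}}}{E_{\mathrm{served}}},
\]

where \(C_{\mathrm{ann,tot}}\) is the total annualized cost of the system, \(i\) the annual real interest rate, \(N\) the project lifetime in years, \(\mathrm{CRF}\) the capital recovery factor, and \(E_{\mathrm{served}}\) the total electrical load served per year.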
Procedia PDF Downloads 270
1060 Symmetric Key Encryption Algorithm Using Indian Traditional Musical Scale for Information Security
Authors: Aishwarya Talapuru, Sri Silpa Padmanabhuni, B. Jyoshna
Abstract:
Cryptography helps in preventing threats to information security by providing various algorithms. This study introduces a new symmetric key encryption algorithm for information security which is linked with "raagas", the traditional Indian scales and patterns of music notes. This algorithm takes the plain text as input and starts its encryption process. The algorithm then randomly selects a raaga from a list of raagas that is assumed to be held by both the sender and the receiver. The plain text is associated with the selected raaga, and an intermediate cipher text is formed as the algorithm converts the plain text characters into other characters, depending upon the rules of the algorithm. This intermediate code or cipher text is arranged in various patterns in three different rounds of encryption. The total number of rounds in the algorithm is a multiple of 3. To be more specific, the output of the first sequence of three rounds is again passed as the input to this sequence of rounds recursively, until the total number of rounds of encryption has been performed. The raaga selected by the algorithm and the number of rounds performed are specified at an arbitrary location in the key, in addition to other important information regarding the rounds of encryption embedded in the key, which is known by the sender and interpreted only by the receiver, thereby making the algorithm hack-proof. The key can be constructed of any number of bits without any restriction on its size. A software application has also been developed to demonstrate this process of encryption, which dynamically takes the plain text as input and readily generates the cipher text as output. Therefore, this algorithm stands as one of the strongest tools for information security.
Keywords: cipher text, cryptography, plaintext, raaga
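The abstract does not give the substitution rules themselves, so the toy sketch below only illustrates the general idea of a round-based cipher keyed by a raaga: the note sets, the shift scheme, the rearrangement step and the round structure are all assumptions for illustration and are not the authors' algorithm.

```python
# Illustrative toy cipher only; not the published algorithm.
RAAGAS = {
    "bilawal": [0, 2, 4, 5, 7, 9, 11],   # note offsets (semitones) - illustrative
    "bhairav": [0, 1, 4, 5, 7, 8, 11],
}

def encrypt(plaintext: str, raaga: str, rounds: int = 3) -> str:
    offsets = RAAGAS[raaga]
    text = plaintext
    for r in range(rounds):
        # Substitution round: shift each character by a raaga-derived offset.
        shifted = [
            chr((ord(ch) + offsets[i % len(offsets)] + r) % 0x110000)
            for i, ch in enumerate(text)
        ]
        # Rearrangement round: interleave to change the pattern.
        text = "".join(shifted[::2] + shifted[1::2])
    return text

def decrypt(ciphertext: str, raaga: str, rounds: int = 3) -> str:
    offsets = RAAGAS[raaga]
    text = ciphertext
    for r in reversed(range(rounds)):
        # Undo the rearrangement.
        half = (len(text) + 1) // 2
        evens, odds = text[:half], text[half:]
        interleaved = [
            evens[i // 2] if i % 2 == 0 else odds[i // 2] for i in range(len(text))
        ]
        # Undo the substitution.
        text = "".join(
            chr((ord(ch) - offsets[i % len(offsets)] - r) % 0x110000)
            for i, ch in enumerate(interleaved)
        )
    return text

assert decrypt(encrypt("sequential pattern", "bhairav"), "bhairav") == "sequential pattern"
```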
Procedia PDF Downloads 289
1059 Optimal Design of Storm Water Networks Using Simulation-Optimization Technique
Authors: Dibakar Chakrabarty, Mebada Suiting
Abstract:
Rapid urbanization coupled with changes in land use pattern results in increasing peak discharge and shortening of the catchment time of concentration. The consequence is floods, which often inundate roads and inhabited areas of cities and towns. Management of storm water resulting from rainfall has, therefore, become an important issue for municipal bodies. Proper management of storm water obviously includes adequate design of storm water drainage networks. The design of a storm water network is a costly exercise, so least cost design of storm water networks assumes significance, particularly when the funds available are limited. Optimal design of a storm water system is a difficult task, as it involves the design of various components, such as open or closed conduits, storage units, pumps, etc. In this paper, a methodology for least cost design of storm water drainage systems is proposed. The methodology consists of coupling a storm water simulator with an optimization method. The simulator used in this study is EPA's Storm Water Management Model (SWMM), which is linked with a Genetic Algorithm (GA) optimization method. The model proposed here is a mixed integer nonlinear optimization formulation, which minimizes the sectional areas of the open conduits of storm water networks while satisfactorily conveying the runoff resulting from rainfall to the network outlet. Performance evaluations of the developed model show that the proposed method can be used for cost-effective design of open conduit based storm water networks.
Keywords: genetic algorithm (GA), optimal design, simulation-optimization, storm water network, SWMM
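A hedged sketch of the simulation-optimization loop described above. The run_swmm() function below stands in for a wrapper that writes the candidate conduit sections into an SWMM input file, runs the simulation and returns the total flood volume; its name, interface and placeholder behaviour are assumptions, not part of SWMM, and the GA operators and cost figures are purely illustrative.

```python
import random

def run_swmm(sections):            # hypothetical wrapper around the simulator
    """Placeholder flooding metric; a real wrapper would call SWMM here."""
    return sum(max(0.0, 0.8 - a) for a in sections)

UNIT_COST = 120.0                  # illustrative cost per unit sectional area
PENALTY = 1e4                      # penalty weight for any simulated flooding

def fitness(sections):
    return UNIT_COST * sum(sections) + PENALTY * run_swmm(sections)

def genetic_algorithm(n_conduits=6, pop_size=30, generations=100):
    pop = [[random.uniform(0.2, 2.0) for _ in range(n_conduits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_conduits)      # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(n_conduits)           # mutation
            child[i] = max(0.2, child[i] + random.gauss(0.0, 0.1))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = genetic_algorithm()
print("least-cost sectional areas:", [round(a, 3) for a in best])
```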
Procedia PDF Downloads 248
1058 Parametric Models of Facade Designs of High-Rise Residential Buildings
Authors: Yuchen Sharon Sung, Yingjui Tseng
Abstract:
High-rise residential buildings have become the most mainstream housing pattern in the world's metropolises under the current trend of urbanization. The facades of high-rise buildings are essential elements of the urban landscape. The skins of these facades are important media between the interior and exterior of high-rise buildings. They not only connect users and environments but also play an important functional and aesthetic role. This research involves a study of the skins of high-rise residential buildings, using the methodology of shape grammar to find out the rules which determine the combinations of the facade patterns, and analyzing the patterns' parameters using the software Grasshopper. We chose a number of facades of high-rise residential buildings as sources to discover the underlying rules and concepts of the generation of facade skins. This research also provides the rules that influence the composition of facade skins. The items of the facade skins, such as windows, balconies, walls, sun visors and metal grilles, are treated as elements in the system of facade skins. The compositions of these elements will be categorized and described by logical rules, and the types of high-rise building facade skins will be modelled in Grasshopper. Then a variety of analyzed patterns can also be applied to other facade skins through this parametric mechanism. Using the patterns established in the models, researchers can analyze each single item to do more detailed tests, and architects can apply each of these items to construct their facades for other buildings through various combinations and permutations. The goal of these models is to develop a mechanism to generate prototypes in order to facilitate the generation of various facade skins.
Keywords: facade skin, grasshopper, high-rise residential building, shape grammar
Procedia PDF Downloads 509
1057 Hydrological Analysis for Urban Water Management
Authors: Ranjit Kumar Sahu, Ramakar Jha
Abstract:
Urban water management is the practice of managing freshwater, waste water, and storm water as components of a basin-wide management plan. It builds on existing water supply and sanitation considerations within an urban settlement by incorporating urban water management within the scope of the entire river basin. The pervasive problems generated by urban development have prompted the present work to study the spatial extent of urbanization in the Golden Triangle of Odisha, connecting the cities of Bhubaneswar (20.2700° N, 85.8400° E), Puri (19.8106° N, 85.8314° E) and Konark (19.9000° N, 86.1200° E), and the patterns of periodic changes in urban development (systematic/random), in order to develop future plans for (i) urbanization promotion areas, and (ii) urbanization control areas. Using remote sensing with USGS (U.S. Geological Survey) Landsat 8 maps, supervised classification of the urban sprawl has been carried out for the period 1980 - 2014, specifically after 2000. This work presents the following: (i) time series analysis of hydrological data (ground water and rainfall), (ii) application of SWMM (Storm Water Management Model) and other soft computing techniques for urban water management, and (iii) uncertainty analysis of model parameters (urban sprawl and correlation analysis). The outcome of the study shows drastic growth in urbanization and depletion of ground water levels in the area, which has been discussed briefly. Other related outcomes, such as the declining trend of rainfall and the rise of sand mining in the local vicinity, have also been discussed. Research on this kind of work will (i) improve water supply and consumption efficiency, (ii) upgrade drinking water quality and waste water treatment, (iii) increase the economic efficiency of services to sustain operations and investments for water, waste water, and storm water management, and (iv) engage communities to reflect their needs and knowledge for water management.
Keywords: Storm Water Management Model (SWMM), uncertainty analysis, urban sprawl, land use change
Procedia PDF Downloads 425
1056 Screening Ecological Risk Assessment at an Old Abandoned Mine in Northern Taiwan
Authors: Hui-Chen Tsai, Chien-Jen Ho, Bo-Wei Power Liang, Ying Shen, Yi-Hsin Lai
Abstract:
The former Taiwan Metal Mining Corporation site and its three associated waste flue gas tunnels, hereinafter referred to as 'TMMC', were contaminated with heavy metals, polychlorinated biphenyls (PCBs) and total petroleum hydrocarbons (TPHs) in soil. Since the contamination had been exposed and unmanaged in the environment for more than 40 years, the extent of the contamination area is estimated to be more than 25 acres. Additionally, TMMC is located in a remote, mountainous area where almost no residents reside within a 1-km radius. Thus, it was deemed necessary to conduct an ecological risk assessment in order to evaluate the details of a future contaminated site management plan. According to the winter and summer ecological investigation results, one endangered plant species and multiple vulnerable and near-threatened plants were discovered, and numerous other protected species, such as the Crested Serpent Eagle, Crested Goshawk, Black Kite, Brown Shrike and Taiwan Blue Magpie, were observed. Ecological soil screening levels (Eco-SSLs) developed by the USEPA were adopted as a reference to conduct the screening assessment. Since all the protected species observed surrounding the TMMC site were birds, the screening ecological risk assessment was conducted on birds only. The assessment was based mainly on the chemical evaluation, in which the contamination in different environmental media was compared directly with the ecological impact levels (EIL) of each evaluation endpoint, and the respective hazard quotient (HQ) and hazard index (HI) were obtained. The preliminary ecological risk assessment results indicated that HI is greater than 1. In other words, the biological receptors (birds) were exposed to contamination which already exceeded the dosage that could cause unacceptable impacts to the ecological system. This result was mainly due to the high concentrations of arsenic, metal and lead; thus, it was suggested that the above-mentioned contaminants should be remediated as soon as possible, or proper risk management measures should be taken.
Keywords: screening, ecological risk assessment, ecological impact levels, risk management
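As a point of reference, the hazard quotient and hazard index used in such a screening-level assessment are usually defined as below; this is the conventional formulation, comparing the exposure concentration against the ecological impact level for each contaminant.

\[
\mathrm{HQ}_i = \frac{C_i}{\mathrm{EIL}_i},
\qquad
\mathrm{HI} = \sum_i \mathrm{HQ}_i,
\]

where \(C_i\) is the exposure (or measured environmental) concentration of contaminant \(i\) and \(\mathrm{EIL}_i\) its ecological impact level; \(\mathrm{HI} > 1\) flags a potentially unacceptable risk that warrants remediation or risk management measures.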
Procedia PDF Downloads 134