Search results for: combinatorial optimization problems
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8900

1670 Using Short Learning Programmes to Develop Students’ Digital Literacies in Art and Design Education

Authors: B.J. Khoza, B. Kembo

Abstract:

Global socioeconomic developments and the ever-growing technological advancement of the art and design industry indicate the pivotal importance of lifelong learning. There exists a discrepancy between competencies, personal ambition, and workplace requirements. There are few, if any, institutions of higher learning in South Africa which offer Short Learning Programmes (SLPs) in Art and Design Education. Traditionally, Art and Design education is delivered face to face via a hands-on approach. As a result, the enduring perception among educators is that art and design education does not lend itself to online delivery. Short Learning Programmes are a concentrated approach to generating revenue and attracting prospective students to further education, and they are often of considerable value to both students and employers. SLPs are used by Higher Education institutions to generate income in support of the core academic programmes. However, there is a gap in terms of the translation of art and design studio pedagogy into SLPs which provide quality education, are adaptable and are delivered via a blended mode. In our paper, we propose a conceptual framework drawing on secondary research to analyse existing research on SLPs for art and design education. We aim to indicate a new dimension to the process of using a design-based research approach for short learning programmes in art and design education. The study draws on a conceptual framework and a qualitative analysis through the lens of Herrington, McKenney, Reeves and Oliver’s (2005) principles of the design-based research approach. The results of this study indicate that design-based research is not only an effective methodological approach for developing and deploying an art and design education curriculum for first-year students in a Higher Education context but also has the potential to guide future research. The findings propose that the design-based research approach could bring theory and praxis together around a common purpose of designing context-based solutions to educational problems.

Keywords: design education, design-based research, digital literacies, multi-literacies, short learning programme

Procedia PDF Downloads 136
1669 Students' Online Evaluation: Impact on the Polytechnic University of the Philippines Faculty's Performance

Authors: Silvia C. Ambag, Racidon P. Bernarte, Jacquelyn B. Buccahi, Jessica R. Lacaron, Charlyn L. Mangulabnan

Abstract:

This study aimed to answer the query, “What is the impact of students’ online evaluation on PUP faculty’s performance?” The problem of the study was resolved through the objective of determining the perceived impact of students’ online evaluation on PUP faculty’s performance. The objectives were carried out through a quantitative research design using the survey research method. The researchers utilized primary and secondary data. Primary data were gathered from the self-administered survey, and secondary data were collected from books, articles in both print and online materials, and other related theses. Findings revealed that the PUP faculty in general stated that students’ online evaluation made a highly positive impact on their performance based on their ‘Knowledge of Subject’ and ‘Teaching for Independent Learning’, with the highest means of 3.62 and 3.60 respectively, followed by the faculty’s performance based on ‘Commitment’ and ‘Management of Learning’, with overall means of 3.55 and 3.53. From the findings, the researchers concluded that students’ online evaluation made a ‘Highly Positive’ impact on PUP faculty’s performance across all four (4) areas. Furthermore, the study’s findings reveal that the PUP faculty encountered many problems regarding the students’ online evaluation; that the impact of the students’ online evaluation is significant when it comes to the employment status of the faculty; and that most of the PUP faculty recommend reviewing the PUP Online Survey for Faculty Evaluation for improvement. Hence, the researchers recommend that the PUP Administration revisit and revise the PUP Online Survey for Faculty Evaluation, specifically by reviewing the questions and preparing a set of questions appropriate to the discipline or field of the faculty. The administration should also fully orient the students about the importance, purpose and impact of online faculty evaluation. Lastly, the researchers suggest that the PUP faculty continue their positive performance and remain cooperative with the administration’s purpose of addressing the students’ concerns, and they urge the students to take the online faculty evaluation honestly and objectively.

Keywords: on-line Evaluation, faculty, performance, Polytechnic University of the Philippines (PUP)

Procedia PDF Downloads 382
1668 Cross-Sectional Association between Socio-Demographic Factors and Paid Blood Donation in Half Million Chinese Population

Authors: Jiashu Shen, Guoting Zhang, Zhicheng Wang, Yu Wang, Yun Liang, Siyu Zou, Fan Yang, Kun Tang

Abstract:

Objectives: This study aims to enhance the understanding of paid blood donors’ characteristics in the Chinese population and to devise strategies to protect these paid donors. Background: Paid blood donation was the predominant mode of blood donation in China from the 1970s to 1998 and caused several health and social problems, including a greatly increased risk of infectious diseases due to nonstandard procedures carried out in unhygienic conditions. Methods: This study utilized cross-sectional data from the China Kadoorie Biobank, covering about 0.5 million people from 10 regions of China from 2004 to 2008. Multivariable logistic regression was performed to examine the associations between socio-demographic factors and paid blood donation. Furthermore, a stratified analysis of education level and annual household income was applied by rural and urban areas. Results: The prevalence of paid blood donation was 0.50% in China, and males were more likely to have donated blood than females (Adjusted odds ratio (AOR) = 0.81, 95% Confidence Interval (CI): 0.75-0.88). Urban people had much lower odds than rural people (AOR = 0.24, 95% CI: 0.21-0.27). People with a high annual household income had lower odds of paid blood donation compared with people with a low income (AOR = 0.37, 95% CI: 0.31-0.44). Compared with people who did not receive school education, people with a higher level of education had increased odds of paid blood donation (AOR = 2.31, 95% CI: 1.94-2.74). Conclusion: Paid blood donation in China was associated with being male, living in a rural area, and having a low annual household income and educational background.
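
As an illustration of the analysis described above, the sketch below shows how adjusted odds ratios and 95% confidence intervals can be obtained from a multivariable logistic regression with statsmodels; the data frame, variable names and coefficients are synthetic placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic example data (hypothetical, for illustration only)
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "sex": rng.choice(["male", "female"], n),
    "urban": rng.choice([0, 1], n),
    "high_income": rng.choice([0, 1], n),
    "educated": rng.choice([0, 1], n),
})
logit_p = (-5 + 0.2 * (df["sex"] == "male") - 1.4 * df["urban"]
           - 1.0 * df["high_income"] + 0.8 * df["educated"])
df["paid_donation"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Multivariable logistic regression; exponentiated coefficients are adjusted odds ratios
model = smf.logit("paid_donation ~ C(sex) + urban + high_income + educated", data=df).fit(disp=False)
summary = pd.concat([np.exp(model.params).rename("AOR"),
                     np.exp(model.conf_int()).rename(columns={0: "CI 2.5%", 1: "CI 97.5%"})],
                    axis=1)
print(summary)
```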

Keywords: China Kadoorie Biobank, Chinese population, paid blood donation, socio-demographic factors

Procedia PDF Downloads 132
1667 Development of Automated Quality Management System for the Management of Heat Networks

Authors: Nigina Toktasynova, Sholpan Sagyndykova, Zhanat Kenzhebayeva, Maksat Kalimoldayev, Mariya Ishimova, Irbulat Utepbergenov

Abstract:

Any business needs stable operation and continuous improvement; it is therefore necessary to constantly interact with the environment, to analyze the work of the enterprise from the perspective of employees, executives and consumers, and to correct any inconsistencies in particular types of processes and in their aggregate. In the case of heat supply organizations, in addition to suppliers, local legislation must be considered, as it is often the main regulator of service pricing. In this case, the process approach used to build a functional organizational structure in these types of businesses in Kazakhstan is a challenge not only in implementation, but also in the analysis of employees’ salaries. To solve these problems, we investigated the management system of a heat supply enterprise, including strategic planning based on the balanced scorecard (BSC), quality management in accordance with the Quality Management System (QMS) standard ISO 9001, and analysis of the system based on expert judgment using fuzzy inference. To carry out our work we used the theory of fuzzy sets, the QMS in accordance with ISO 9001, the BSC according to the method of Kaplan and Norton, the construction of business processes according to the IDEF0 notation, and modeling using Matlab simulation tools and LabVIEW graphical programming. The results of the work are as follows: we determined possibilities for improving the management of a heat-supply plant based on the QMS; after justification and adaptation, a software tool was used to automate a series of functions for the management and reduction of resources and for keeping the system up to date; and an application for the analysis of the QMS based on fuzzy inference was created, with a novel organization of communication between the software and the application, enabling the analysis of relevant data from the enterprise management system.
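
To make the fuzzy-inference step more concrete, here is a minimal Mamdani-style sketch in plain NumPy; the linguistic variables, membership functions and rules are invented for illustration and are not the rule base developed in the study.

```python
import numpy as np

# Triangular membership function on a crisp value or an array
def trimf(x, a, b, c):
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

# Crisp inputs (assumed 0-10 scales obtained from expert judgment)
process_conformity = 7.5   # hypothetical degree of conformity to ISO 9001 requirements
bsc_score = 6.0            # hypothetical balanced scorecard performance indicator

# Fuzzification into "low" / "high" linguistic terms
low_c, high_c = trimf(process_conformity, 0, 0, 6), trimf(process_conformity, 4, 10, 10)
low_b, high_b = trimf(bsc_score, 0, 0, 6), trimf(bsc_score, 4, 10, 10)

# Output variable "QMS maturity" on a 0-10 universe of discourse
u = np.linspace(0, 10, 101)
maturity_low, maturity_high = trimf(u, 0, 0, 5), trimf(u, 5, 10, 10)

r1 = min(high_c, high_b)   # IF conformity is high AND BSC is high THEN maturity is high
r2 = max(low_c, low_b)     # IF conformity is low OR BSC is low THEN maturity is low

# Aggregate rule outputs (max) and defuzzify with the centroid method
aggregated = np.maximum(np.minimum(r1, maturity_high), np.minimum(r2, maturity_low))
qms_maturity = np.sum(u * aggregated) / np.sum(aggregated)
print(f"Estimated QMS maturity: {qms_maturity:.2f} / 10")
```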

Keywords: balanced scorecard, heat supply, quality management system, the theory of fuzzy sets

Procedia PDF Downloads 346
1666 Role of Microplastics on Reducing Heavy Metal Pollution from Wastewater

Authors: Derin Ureten

Abstract:

Plastic pollution does not disappear; it gets smaller and smaller through photolysis, caused mainly by the sun’s radiation, thermal oxidation, thermal degradation, and biodegradation, the action of organisms digesting larger plastics. All plastic pollutants have exceedingly harmful effects on the environment. With the COVID-19 pandemic, the number of plastic products such as masks and gloves flowing into the environment has increased more than ever. However, microplastics are not the only pollutants in water; heavy metals are among the most tenacious and toxic. Heavy metal solutions are also capable of causing a variety of health problems in organisms, such as cancer, organ damage, nervous system damage, and even death. The aim of this research is to show that microplastics can be used in wastewater treatment systems by demonstrating that they can adsorb heavy metals in solution. The experiment for this research will include two solutions: one containing microplastics in heavy-metal-contaminated water, and one containing only the heavy metal solution. After sieving, the absorbance of both media will be measured with a spectrometer. Iron (III) chloride (FeCl3) will be used as the heavy metal solution since the solution becomes darker as the concentration of this substance increases. The experiment will be supported by Pure Nile Red powder in order to observe whether there are any visible differences under the microscope. Pure Nile Red powder is a chemical that binds to hydrophobic materials such as plastics and lipids. If adsorption can be demonstrated by the solutions’ final absorbance values and the visual evidence provided by the Pure Nile Red powder, the experiment will be conducted at different temperature levels in order to determine the most suitable temperature for the removal of heavy metals from water. New wastewater treatment systems for water contaminated with heavy metals could then be developed with the help of microplastics.
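
As a rough illustration of how the spectrometer readings could be turned into a removal estimate, the sketch below applies the Beer-Lambert proportionality between absorbance and concentration; the absorbance values are hypothetical.

```python
# Because absorbance is proportional to concentration (Beer-Lambert law, A = epsilon * l * c),
# the fraction of FeCl3 removed by the microplastics can be estimated from two readings
# taken at the same wavelength and path length.
def removal_efficiency(a_control: float, a_treated: float) -> float:
    """Percent of heavy metal adsorbed, assuming identical path length and wavelength."""
    return 100.0 * (a_control - a_treated) / a_control

# Hypothetical spectrometer readings
a_metal_only = 0.82          # FeCl3 solution without microplastics
a_with_microplastics = 0.57  # FeCl3 solution after contact with microplastics and sieving

print(f"Estimated removal: {removal_efficiency(a_metal_only, a_with_microplastics):.1f} %")
```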

Keywords: microplastics, heavy metal, pollution, adsorption, wastewater treatment

Procedia PDF Downloads 60
1665 Biogas Production from Lake Bottom Biomass from Forest Management Areas

Authors: Dessie Tegegne Tibebu, Kirsi Mononen, Ari Pappinen

Abstract:

In areas with forest management, agricultural, and industrial activity, sediments and biomass accumulate in lakes through the drainage system, which can cause biodiversity loss and health problems. One possible solution is the utilization of lake bottom biomass and sediments for biogas production. The main objective of this study was to investigate the potential of lake bottom materials for the production of biogas by anaerobic digestion and to study the effect of pretreatment of the feed materials on biogas yield. To study the biogas production potential, lake bottom materials were collected from two sites, Likokanta and Kutunjärvi lakes. The lake bottom materials were mixed with straw-horse manure to produce biogas in a laboratory-scale reactor. The results indicated that the highest biogas yields were observed when feeds were composed of 50% lake bottom materials and 50% straw-horse manure mixture, while biogas production decreased when the feed contained more than 50% lake bottom materials. The CH4 content from Likokanta lake materials with straw-horse manure and from Kutunjärvi lake materials with straw-horse manure was similar when the feed consisted of 50% lake bottom materials and 50% straw-horse manure. However, with feeds containing more than 50% lake bottom materials, the CH4 concentration started to decrease, impairing the gas process. Pretreatment applied to the Kutunjärvi lake materials showed a slight negative effect on biogas production and the lowest CH4 concentration throughout the experiment. The average CH4 production from pretreated Kutunjärvi lake materials with straw-horse manure (208.9 ml g-1 VS) and untreated Kutunjärvi lake materials with straw-horse manure (182.2 ml g-1 VS) was markedly higher than from Likokanta lake materials with straw-horse manure (157.8 ml g-1 VS). According to the experimental results, utilization of 100% lake bottom materials for biogas production is likely to perform poorly. In the future, further analyses to improve the biogas yields and an assessment of costs and benefits are needed before lake bottom materials are utilized for the production of biogas.

Keywords: anaerobic digestion, biogas, lake bottom materials, sediments, pretreatment

Procedia PDF Downloads 298
1664 Discrete-Event Modeling and Simulation Methodologies: Past, Present and Future

Authors: Gabriel Wainer

Abstract:

Modeling and Simulation (M&S) methods have been used to better analyze the behavior of complex physical systems, and it is now common to use simulation as a part of the scientific and technological discovery process. M&S advanced thanks to improvements in computer technology, which, in many cases, resulted in the development of simulation software using ad-hoc techniques. Formal M&S appeared in order to improve the development of very complex simulation systems. Some of these techniques proved to be successful in providing a sound base for the development of discrete-event simulation models, improving the ease of model definition and enhancing application development tasks, reducing costs and favoring reuse. The DEVS formalism is one of these techniques, which proved successful in providing means for modeling while reducing development complexity and costs. DEVS model development is based on a sound theoretical framework. The independence of M&S tasks made it possible to run DEVS models on different environments (personal computers, parallel computers, real-time equipment, and distributed simulators) and middleware. We will present a historical perspective of discrete-event M&S methodologies, showing different modeling techniques. We will introduce the origins and general ideas of DEVS and compare it with some of these techniques. We will then show the current status of DEVS M&S, and we will discuss a technological perspective for solving current M&S problems (including real-time simulation, interoperability, and model-centered development techniques). We will show some examples of the current use of DEVS, including applications in different fields. We will finally present current open topics in the area, which include advanced methods for centralized, parallel or distributed simulation, the need for real-time modeling techniques, and our view of these fields.
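
To illustrate the DEVS formalism mentioned above, here is a minimal didactic sketch of an atomic model (state, time advance, internal and external transition functions, and output function) with a tiny single-model simulation loop; it is not the DEVS tooling discussed in the abstract.

```python
from dataclasses import dataclass

INFINITY = float("inf")

@dataclass
class Processor:
    """Atomic DEVS model: processes one job at a time with a fixed service time."""
    service_time: float = 2.0
    phase: str = "idle"
    sigma: float = INFINITY      # time remaining until the next internal transition
    job: object = None

    def ta(self):                # time advance function
        return self.sigma

    def delta_int(self):         # internal transition: finish the job, go idle
        self.phase, self.sigma, self.job = "idle", INFINITY, None

    def delta_ext(self, e, x):   # external transition after elapsed time e with input x
        if self.phase == "idle":
            self.phase, self.sigma, self.job = "busy", self.service_time, x
        else:
            self.sigma -= e      # keep processing the current job; the new input is dropped

    def out(self):               # output function (lambda), evaluated just before delta_int
        return self.job

# Single-model abstract simulator driven by a list of timed external inputs
model, tl = Processor(), 0.0
for ev_time, ev in [(0.0, "job-A"), (1.0, "job-B"), (5.0, "job-C")]:
    while tl + model.ta() <= ev_time:          # internal events due before this input
        t_int = tl + model.ta()
        print(f"t={t_int:.1f}: output {model.out()}")
        model.delta_int()
        tl = t_int
    model.delta_ext(ev_time - tl, ev)
    tl = ev_time
if model.ta() < INFINITY:                      # flush the last pending internal event
    print(f"t={tl + model.ta():.1f}: output {model.out()}")
```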

Keywords: modeling and simulation, discrete-event simulation, hybrid systems modeling, parallel and distributed simulation

Procedia PDF Downloads 301
1663 Design and Development of a Mechanical Force Gauge for the Square Watermelon Mold

Authors: Morteza Malek Yarand, Hadi Saebi Monfared

Abstract:

This study aimed at designing and developing a mechanical force gauge for the square watermelon mold for the first time. It also introduces the characteristics of the square watermelon and its production limitations. The performance of the mechanical force gauge and the product itself are also described. There are three main designable gauge models: a. hydraulic gauge, b. strain gauge, and c. mechanical gauge. The advantage of the hydraulic model is that it instantly displays the pressure and thus the force exerted by the melon. However, considering its inability to measure forces in all directions, its complicated development, its high cost, the possibility of hydraulic fluid leaking into the fruit chamber and the possible influence of increased ambient temperature on the fluid pressure, the development of this gauge was ruled out. The second choice was to calculate pressure from the force measured directly by a strain gauge. The main advantage of strain gauges over spring types is their high precision in measurements; but because the working range of strain gauges does not conform to watermelon growth, the calculations were problematic. Finally, the mechanical pressure gauge has advantages, including the ability to measure forces and pressures on the mold surface during melon growth; the ability to display the peak forces; the ability to produce a melon growth graph thanks to its continuous force measurements; the conformity of its manufacturing materials with the physical conditions required for melon growth; high air-conditioning capability; the ability to permit sunlight to reach the melon rind (no yellowish skin and quality loss); fast and straightforward calibration; no damage to the product during assembly and disassembly; visual check capability of the product within the mold; applicability to all growth environments (field, greenhouses, etc.); a simple process; low costs and so forth.

Keywords: mechanical force gauge, mold, reshaped fruit, square watermelon

Procedia PDF Downloads 254
1662 Study of Mobile Game Addiction Using Electroencephalography Data Analysis

Authors: Arsalan Ansari, Muhammad Dawood Idrees, Maria Hafeez

Abstract:

The use of mobile phones has been increasing considerably over the past decade. Currently, it is one of the main sources of communication and information. Initially, mobile phones were limited to calls and messages, but with the advent of new technology, smartphones began to be used for many other purposes, including video games. Despite positive outcomes, addiction to video games on mobile phones has become a leading cause of psychological and physiological problems among many people. Several researchers have examined different aspects of behavioral addiction with the use of different scales. The objective of this study is to examine any distinction between mobile game addicted and non-addicted players with the use of electroencephalography (EEG), based upon psycho-physiological indicators. The mobile players were asked to play a mobile game, and EEG signals were recorded by BIOPAC equipment with AcqKnowledge as the data acquisition software. Electrodes were placed following the 10-20 system. EEG was recorded at a sampling rate of 200 samples/sec (12,000 samples/min). EEG recordings were obtained from the frontal (Fp1, Fp2), parietal (P3, P4), and occipital (O1, O2) lobes of the brain. The frontal lobe is associated with behavioral control, personality, and emotions. The parietal lobe is involved in perception, understanding logic, and arithmetic. The occipital lobe plays a role in visual tasks. For this study, a 60-second time window was chosen for analysis. Preliminary analysis of the signals was carried out with the AcqKnowledge software of BIOPAC Systems. From the survey based on the CGS manual study 2010, it was concluded that five participants out of fifteen were in the addictive category. This was used as prior information to group the addicted and non-addicted players by physiological analysis. Statistical analysis showed that by applying a clustering analysis technique the authors were able to categorize the addicted and non-addicted players, specifically in the theta frequency range of the occipital area.
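
A minimal sketch of the kind of pipeline described above, theta-band (4-8 Hz) power extraction with Welch's method followed by two-cluster k-means, is shown below; the signals are synthetic placeholders, not the recorded EEG data.

```python
import numpy as np
from scipy.signal import welch
from sklearn.cluster import KMeans

fs = 200                       # sampling rate used in the study (samples/s)
rng = np.random.default_rng(1)
# 15 hypothetical participants x 60 s of one occipital channel (12,000 samples each)
signals = rng.standard_normal((15, 60 * fs))

def theta_power(sig, fs):
    f, pxx = welch(sig, fs=fs, nperseg=fs * 2)
    band = (f >= 4) & (f <= 8)
    return np.trapz(pxx[band], f[band])   # integrated theta-band power

features = np.array([[theta_power(s, fs)] for s in signals])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)    # cluster assignment per participant (e.g., addicted vs. non-addicted)
```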

Keywords: mobile game, addiction, psycho-physiology, EEG analysis

Procedia PDF Downloads 138
1661 Controllable Modification of Glass-Crystal Composites with Ion-Exchange Technique

Authors: Andrey A. Lipovskii, Alexey V. Redkov, Vyacheslav V. Rusan, Dmitry K. Tagantsev, Valentina V. Zhurikhina

Abstract:

The presented research is related to the development of a recently proposed technique for the formation of composite materials, like optical glass-ceramics, with a predetermined structure and properties of the crystalline component. The technique is based on control of the size and concentration of the crystalline grains using the phenomenon of glass-ceramics decrystallization (vitrification) induced by ion exchange. This phenomenon was discovered and explained in the beginning of the 2000s, while the related theoretical description was given only in 2016. In general, the developed theory enables one to model the process and optimize the conditions of ion-exchange processing of glass-ceramics which provide given properties of the crystalline component, in particular, the profile of the average size of the crystalline grains. The optimization is possible if one knows two dimensionless parameters of the theoretical model. One of them (β) is directly related to the solubility of the crystalline component of the glass-ceramics in the glass matrix, and the other (γ) is equal to the ratio of the characteristic times of ion-exchange diffusion and crystalline grain dissolution. The presented study is dedicated to the development of an experimental technique and simulations which allow these parameters to be determined. It is shown that the parameters can be deduced from data on the spatial distributions of diffusant concentration and average crystalline grain size in glass-ceramics samples subjected to ion-exchange treatment. Measurements at least at two temperatures, and at two processing times for each temperature, are necessary. The composite material used was a silica-based glass-ceramics with crystalline grains of Li2O·SiO2. Cubic samples of the glass-ceramics (6×6×6 mm³) underwent the ion exchange process in a NaNO3 salt melt at 520 °C (for 16 and 48 h), 540 °C (for 8 and 24 h), 560 °C (for 4 and 12 h), and 580 °C (for 2 and 8 h). The ion exchange processing resulted in vitrification of the glass-ceramics in the subsurface layers where ion-exchange diffusion took place. Slabs about 1 mm thick were cut from the central part of the samples and their large facets were polished. These slabs were used to find the profiles of diffusant concentration and average crystalline grain size. The concentration profiles were determined from refractive index profiles measured with a Mach-Zehnder interferometer, and the profiles of the average size of the crystalline grains were determined with micro-Raman spectroscopy. Numerical simulations were based on the developed theoretical model of glass-ceramics decrystallization induced by ion exchange. The simulation of the processes was carried out for different values of the β and γ parameters under all above-mentioned ion exchange conditions. As a result, the temperature dependences of the parameters which provided a reliable coincidence of the simulation and experimental data were found. This ensured adequate modeling of the glass-ceramics decrystallization process in the 520-580 °C temperature interval. The developed approach provides a powerful tool for fine tuning of the glass-ceramics structure, namely, the concentration and average size of the crystalline grains.
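
Purely as a schematic illustration of how the two dimensionless parameters could enter a simulation, the toy one-dimensional sketch below couples ion-exchange diffusion with grain dissolution; the governing equations are assumed for illustration only and are not the published theoretical model.

```python
# Toy 1D explicit finite-difference model: a diffusant enters from the surface and
# grains dissolve where its concentration exceeds a solubility-related threshold (beta),
# with gamma setting the ratio of diffusion to dissolution time scales (assumed form).
import numpy as np

nx, nt = 100, 20000
dx, dt = 1.0 / nx, 0.2 * (1.0 / nx) ** 2   # stable explicit step for unit diffusivity
beta, gamma = 0.5, 10.0                    # assumed illustrative parameter values

c = np.zeros(nx)          # dimensionless diffusant (Na+) concentration
c[0] = 1.0                # surface held at the salt-melt concentration
r = np.ones(nx)           # normalized average grain-size profile

for _ in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2
    c[1:-1] += dt * lap[1:-1]
    c[0], c[-1] = 1.0, c[-2]                          # boundary conditions
    # grains shrink where the diffusant exceeds the threshold beta
    r -= dt * gamma * np.maximum(c - beta, 0.0) * (r > 0)
    r = np.clip(r, 0.0, 1.0)

print("grain-size profile (surface -> bulk):", np.round(r[::10], 2))
```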

Keywords: diffusion, glass-ceramics, ion exchange, vitrification

Procedia PDF Downloads 251
1660 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains

Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe

Abstract:

The increasing digitalization of value chains can help companies to handle rising complexity in their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Due to technological advances, companies face the challenge of smart value chains for the purpose of improving productivity, handling the increasing time and cost pressure and meeting the need for individualized production. Therefore, companies need to ensure quick and flexible decisions to create self-optimizing processes and, consequently, to make their production more efficient. Lean production, as the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant-flexible production and constantly changing market and environmental conditions. To lift the performance limits which are inbuilt in current value chains, new methods and tools must be applied. Digitalization provides the potential to derive these new methods and tools. However, companies lack the experience to harmonize different digital technologies, and there is no practicable framework which guides the transformation of current value chains into digitally pervasive value chains. Current research shows that a connection between lean production and digitalization exists. This link is based on factors such as people, technology and organization. In this paper, the introduced method for the determination of digitally pervasive value chains takes the factors of people, technology and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps: The first step of ‘target definition’ describes the target situation and defines the depth of the analysis with regard to the inspection area and the level of detail. The second step of ‘analysis of the value chain’ verifies the lean-ability of processes and places a special focus on the integration capacity of digital technologies in order to raise the limits of lean production. Furthermore, the ‘digital evaluation process’ ensures the usefulness of digital adaptations regarding their practicability and their integrability into the existing production system. Finally, the method defines actions to be performed based on the evaluation process and in accordance with the target situation. As a result, the validation and optimization of the proposed method in a German company from the electronics industry shows that the digital transformation of current value chains based on lean production raises their inbuilt performance limits.

Keywords: digitalization, digital transformation, Industrie 4.0, lean production, value chain

Procedia PDF Downloads 282
1659 Design and Assessment of Traffic Management Strategies for Improved Mobility on Major Arterial Roads in Lahore City

Authors: N. Ali, S. Nakayama, H. Yamaguchi, M. Nadeem

Abstract:

Traffic congestion is a matter of prime concern in developing countries. This can be primarily attributed to poor design practices and the biased allocation of resources based on political will, neglecting technical feasibility in infrastructure design. During the last decade, Lahore has expanded at an unprecedented rate compared to surrounding cities due to more funding and resource allocation by previous governments. As a result of this, people from surrounding cities and areas moved to Lahore for better opportunities and quality of life. This migration inflow left the city with an increased population and an existing infrastructure unable to accommodate the enhanced traffic demand. This leads to traffic congestion on the major arterial roads of the city. In this simulation study, a major arterial road was selected to evaluate the performance of five intersections by changing the geometry of the intersections or the signal control type. Simulations were done in two software packages: Highway Capacity Software (HCS), and Synchro Studio with SimTraffic. Some of the traffic management strategies that were employed include actuated signal control, semi-actuated signal control, fixed-time signal control, and roundabouts. For each intersection, the most feasible of the above-mentioned traffic management techniques was selected on the basis of the least delay time (seconds) and improved Level of Service (LOS). The results showed that the Jinnah Hospital and Akbar Chowk intersections achieved delay time reductions of 92.97% and 92.67%, respectively. These results can be used by traffic planners and policy makers in decision making for the expansion of these intersections, keeping in mind the traffic demand in future years.
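
As a small illustration of how delay results can be translated into a Level of Service and a percentage improvement, the helper below uses commonly cited HCM-style control-delay thresholds for signalized intersections; the delay values are hypothetical, not the study's measurements.

```python
def los_from_delay(delay_s: float) -> str:
    """Map average control delay (s/veh) to LOS using HCM-style signalized thresholds."""
    thresholds = [(10, "A"), (20, "B"), (35, "C"), (55, "D"), (80, "E")]
    for limit, grade in thresholds:
        if delay_s <= limit:
            return grade
    return "F"

def delay_reduction(before_s: float, after_s: float) -> float:
    return 100.0 * (before_s - after_s) / before_s

# Hypothetical before/after values for one intersection (not the study's measured data)
before, after = 115.0, 8.1
print(f"LOS {los_from_delay(before)} -> LOS {los_from_delay(after)}, "
      f"delay reduced by {delay_reduction(before, after):.2f} %")
```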

Keywords: traffic congestion, traffic simulation, traffic management, congestion problems

Procedia PDF Downloads 449
1658 Decision Support System for Hospital Selection in Emergency Medical Services: A Discrete Event Simulation Approach

Authors: D. Tedesco, G. Feletti, P. Trucco

Abstract:

The present study aims to develop a Decision Support System (DSS) to support the operational decision of the Emergency Medical Service (EMS) regarding the assignment of medical emergency requests to Emergency Departments (ED). In the literature, this problem is also known as “hospital selection” and concerns the definition of policies for the selection of the ED to which patients who require further treatment are transported by ambulance. The research methodology employed begins with a review of the technical-scientific literature concerning DSSs to support EMS management and, in particular, the hospital selection decision. From the literature analysis, it emerged that current studies are mainly focused on the EMS phases related to the ambulance service and consider a process that ends when the ambulance becomes available after completing a request. Therefore, all the ED-related issues are excluded and considered as part of a separate process. Indeed, the most studied hospital selection policy turned out to be proximity, which minimizes the transport time and releases the ambulance in the shortest possible time. The purpose of the present study is to develop an optimization model for assigning medical emergency requests to the EDs, considering information relating to the subsequent phases of the process, such as the case-mix, the expected service throughput times, and the operational capacity of the different EDs in hospitals. To this end, a Discrete Event Simulation (DES) model was created to evaluate different hospital selection policies. The next steps of the research therefore consisted of the development of a general simulation architecture, its implementation in the AnyLogic software and its validation on a realistic dataset. The hospital selection policy that produced the best results was the minimization of the Time To Provider (TTP), defined as the time from the beginning of the ambulance journey to the beginning of the clinical evaluation by the doctor at the ED. Finally, two approaches were further compared: a static approach, which is based on a retrospective estimate of the TTP, and a dynamic approach, which is based on a predictive estimate of the TTP determined with a constantly updated Winters model. Findings reveal that adopting the minimization of TTP as a hospital selection policy brings several benefits. It significantly reduces service throughput times in the ED with a minimal increase in travel time. Furthermore, an immediate view of the saturation state of the ED is produced, and the case-mix present in the ED structures (i.e., the different triage codes) is considered, as different severity codes correspond to different service throughput times. Besides, the use of a predictive approach is certainly more reliable than a retrospective approach in terms of TTP estimation, but it is more difficult to apply. These considerations can support decision-makers in introducing different hospital selection policies to enhance EMS performance.
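
The sketch below illustrates the two approaches compared in the study, a static (retrospective) and a dynamic (Winters-type forecast) estimate of the ED waiting time feeding a TTP-minimizing selection rule; ED names, travel times and waiting-time histories are synthetic placeholders.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(2)
eds = ["ED_A", "ED_B", "ED_C"]
travel_min = {"ED_A": 7.0, "ED_B": 12.0, "ED_C": 18.0}

# Hourly waiting-time history per ED (synthetic, with a daily seasonal pattern)
history = {ed: 30 + 10 * np.sin(np.arange(240) * 2 * np.pi / 24) + rng.normal(0, 3, 240)
           for ed in eds}

def ttp_static(ed):          # retrospective estimate: historical mean waiting time
    return travel_min[ed] + history[ed].mean()

def ttp_dynamic(ed):         # predictive estimate: one-step Holt-Winters forecast
    fit = ExponentialSmoothing(history[ed], trend="add", seasonal="add",
                               seasonal_periods=24).fit()
    return travel_min[ed] + float(fit.forecast(1)[0])

for policy in (ttp_static, ttp_dynamic):
    choice = min(eds, key=policy)            # pick the ED minimizing Time To Provider
    print(f"{policy.__name__}: assign the ambulance to {choice}")
```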

Keywords: discrete event simulation, emergency medical services, forecast model, hospital selection

Procedia PDF Downloads 71
1657 Evaluation of Classification Algorithms for Diagnosis of Asthma in Iranian Patients

Authors: Taha SamadSoltani, Peyman Rezaei Hachesu, Marjan GhaziSaeedi, Maryam Zolnoori

Abstract:

Introduction: Data mining is defined as a process of finding patterns and relationships in the data of a database in order to build predictive models. Applications of data mining have extended into vast sectors such as healthcare services. Medical data mining aims to solve real-world problems in the diagnosis and treatment of diseases. This method applies various techniques and algorithms which have different accuracy and precision. The purpose of this study was to apply knowledge discovery and data mining techniques to the diagnosis of asthma based on patient symptoms and history. Method: Data mining includes several steps and decisions to be made by the user; it starts with creating an understanding of the scope and application of previous knowledge in this area and identifying the knowledge discovery process from the point of view of the stakeholders, and it finishes with acting on the discovered knowledge: using the knowledge, integrating it with other systems, and documenting and reporting it. In this study, a stepwise methodology was followed to achieve a logical outcome. Results: The sensitivity, specificity and accuracy of the KNN, SVM, Naïve Bayes, NN, classification tree and CN2 algorithms, together with related similar studies, were evaluated, and ROC curves were plotted to show the performance of the system. Conclusion: The results show that asthma can be diagnosed accurately, at approximately ninety percent, based on demographic and clinical data. The study also showed that methods based on pattern discovery and data mining have a higher sensitivity compared to expert and knowledge-based systems. On the other hand, medical guidelines and evidence-based medicine should remain the basis of diagnostic methods; it is therefore recommended that machine learning algorithms be used in combination with knowledge-based algorithms.
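
A minimal evaluation sketch of the kind of metrics reported above (sensitivity, specificity, accuracy and ROC AUC) for one of the listed classifiers (KNN) is given below, using synthetic data rather than the study's patient records.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, accuracy_score, roc_auc_score

# Synthetic stand-in for demographic and clinical features with a binary diagnosis label
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
y_pred = clf.predict(X_te)
y_prob = clf.predict_proba(X_te)[:, 1]

tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
print(f"Sensitivity: {tp / (tp + fn):.2f}")
print(f"Specificity: {tn / (tn + fp):.2f}")
print(f"Accuracy:    {accuracy_score(y_te, y_pred):.2f}")
print(f"ROC AUC:     {roc_auc_score(y_te, y_prob):.2f}")
```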

Keywords: asthma, data mining, classification, machine learning

Procedia PDF Downloads 426
1656 Studies on Space-Based Laser Targeting System for the Removal of Orbital Space Debris

Authors: Krima M. Rohela, Raja Sabarinath Sundaralingam

Abstract:

Humans have been launching rockets since the beginning of the space age in the late 1950s. We have come a long way since then, and the success rate of rocket launches has increased considerably. With every successful launch, a large amount of junk or debris is released into the upper layers of the atmosphere. Space debris has been a huge concern for a very long time now. It includes the rocket shells released during launch and the parts of defunct satellites. Some of this junk falls towards the Earth and burns up in the atmosphere. But most of the junk goes into orbit around the Earth, where it remains for at least 100 years. This can cause a lot of problems for other functioning satellites and may affect future crewed missions to space. The main concern about space debris is the increase in space activities, which leads to risks of collisions if the debris is not taken care of soon. These collisions may result in what is known as the Kessler Syndrome. This debris can be removed by a space-based laser targeting system; hence, the matter is investigated and discussed. The first step involves launching a satellite with a high-power laser device into space, above the debris belt. The target material is then ablated with a focused laser beam. This step of the process is highly dependent on the attitude and orientation of the debris with respect to the Earth and the device. The laser beam causes a jet of vapour and plasma to be expelled from the material. Hence, a force is applied in the opposite direction and, in accordance with Newton’s third law of motion, this causes the material to move towards the Earth and be pulled down by gravity, where it disintegrates in the upper layers of the atmosphere. The larger pieces of the debris can be directed towards the oceans. This method of removing orbital debris will enable safer passage for future crewed missions into space.
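
As a back-of-the-envelope illustration of the recoil produced by laser ablation, the sketch below uses a momentum coupling coefficient (impulse per joule of delivered laser energy); the parameter values are assumptions, not figures from the study.

```python
# Rough estimate: the ablation jet imparts an impulse of roughly Cm * E on the debris,
# so the velocity change of a fragment of mass m is delta_v = Cm * E / m (assumed values).
def delta_v(cm_ns_per_j: float, energy_j: float, mass_kg: float) -> float:
    return cm_ns_per_j * energy_j / mass_kg   # m/s

# Hypothetical values: Cm ~ 5e-5 N*s/J, 10 kJ of delivered energy, 1 kg fragment
dv = delta_v(5e-5, 10e3, 1.0)
print(f"Velocity change per engagement: {dv:.3f} m/s")
# Repeated engagements lower the perigee until atmospheric drag completes the re-entry.
```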

Keywords: altitude, Kessler syndrome, laser ablation, Newton’s third law of motion, satellites, space debris

Procedia PDF Downloads 123
1655 On-Line Super Critical Fluid Extraction, Supercritical Fluid Chromatography, Mass Spectrometry, a Technique in Pharmaceutical Analysis

Authors: Narayana Murthy Akurathi, Vijaya Lakshmi Marella

Abstract:

The literature is reviewed with regard to on-line supercritical fluid extraction (SFE) coupled directly with supercritical fluid chromatography (SFC)-mass spectrometry, which is typically more sensitive than conventional LC-MS/MS and GC-MS/MS. It is becoming increasingly interesting to use on-line techniques that combine sample preparation, separation and detection in one analytical set-up. This involves less human intervention, uses small amounts of sample and organic solvent, and yields enhanced analyte enrichment in a shorter time. The sample extraction is performed under light shielding and anaerobic conditions, preventing the degradation of thermolabile analytes. The technique may be able to analyze compounds over a wide polarity range, as SFC generally uses carbon dioxide, which is collected as a by-product of other chemical reactions or from the atmosphere and thus contributes no new chemicals to the environment. The diffusion of solutes in supercritical fluids is about ten times greater than in liquids and about three times less than in gases, which decreases the resistance to mass transfer in the column and allows fast, high-resolution separations. The drawback of SFC when using carbon dioxide as the mobile phase is that the direct introduction of water samples poses a series of problems; water must therefore be eliminated before it reaches the analytical column. Hundreds of compounds can be analysed simultaneously simply by enclosing the sample in an extraction vessel. This is mainly applicable to the pharmaceutical industry, where the technique can analyse fatty acids and phospholipids that have many analogues with very similar UV spectra, determine trace additives in polymers, support cleaning validation by placing a swab sample in an extraction vessel, and analyse hundreds of pesticides with good resolution.

Keywords: supercritical fluid extraction (SFE), supercritical fluid chromatography (SFC), LC-MS/MS, GC-MS/MS

Procedia PDF Downloads 369
1654 Mental Accounting Theory Development Review and Application

Authors: Kang-Hsien Li

Abstract:

As global industries use technology to enhance applications, bring research closer to people’s actual behavior and produce data analysis, this paper extends the mental accounting concept of prospect theory and explores and discusses its marketing and financial applications and their future. For the foreseeable future, payment behavior will depend on the form of currency, which affects a variety of product types; marketing strategies therefore provide diverse payment methods to enhance overall sales performance. This affects not only people’s consumption but also their investments. Credit cards, PayPal, Apple Pay, Bitcoin and other emerging payment instruments enabled by advances in technology have begun to affect people’s sense of the value and concept of money, with implications for the planning of national social welfare policies and for monetary and financial regulators. The expanded discussion is expected to address marketing- and finance-related mental accounting issues at the same time. Recent studies reflect two different ideas: the first is that individuals are affected by situational frames rather than by broad effects at the event level, reflecting people’s basic mentality; the second is that when an individual event affects a broader range of people, the majority will make the same choice, which at that time is the rational choice. These ideas are applied to practical marketing and, at the same time, provide an explanation for anomalies in the financial markets; because the financial markets have varied investment products and different market participants, they also highlight these two points and provide an in-depth description of human mentality. Certainly, regarding the discussion of mental accounting aspects, as artificial intelligence applications develop, people may be able to reduce biased decisions, which will also lead to more discussion of economic and marketing strategy.

Keywords: mental accounting, behavior economics, consumer behaviors, decision-making

Procedia PDF Downloads 431
1653 Environmental Accounting Practice: Analyzing the Extent and Qualification of Environmental Disclosures of Turkish Companies Located in BIST-XKURY Index

Authors: Raif Parlakkaya, Mustafa Nihat Demirci, Mehmet Nuri Salur

Abstract:

Environmental pollution has detrimental effects on the quality of our life, and its scope has reached such an extent that measures are being taken both at the national and international levels to reduce, prevent and mitigate its impact on social, economic and political spheres. Therefore, awareness of environmental problems has been increasing among stakeholders and accordingly among companies. It can be seen that corporate reporting is expanding to cover environmental performance. The primary purpose of publishing an environmental report is to provide specific audiences with useful, meaningful information. This paper is intended to analyze the extent and qualification of the environmental disclosures of Turkish publicly quoted firms and to see how they vary from one sector to another. The data for the study were collected from the annual activity reports of companies listed on the corporate governance index (BIST-XKURY) of the Istanbul Stock Exchange. Content analysis was the research methodology used to measure the extent of environmental disclosure. Accordingly, the 2015 annual activity reports of companies that carry out business in particular fields were acquired from the Capital Market Board, the Public Disclosure Platform website and the companies’ own websites. These reports were categorized into five main aspects: environmental policies, environmental management systems, environmental protection and conservation activities, environmental awareness, and information on environmental lawsuits. Subsequently, each component was divided into several variables related to what each firm is supposed to disclose about environmental information. In this context, the nature and scope of the information disclosed on each item were assessed in five ways (N.I.: No Information; G.E.: General Explanations; Q.E.: Qualitative Detailed Explanations; N.E.: Quantitative (Numerical) Detailed Explanations; Q.&N.E.: Both Qualitative and Quantitative Explanations).

Keywords: environmental accounting, disclosure, corporate governance, content analysis

Procedia PDF Downloads 237
1652 Dynamics of Soil Fertility Management in India: An Empirical Analysis

Authors: B. Suresh Reddy

Abstract:

The overdependence on chemical fertilizers for nutrient management in crop production over the last few decades has led to several problems affecting soil health, the environment and farmers themselves. Based on fieldwork done in 2012-13 with 1080 farmers of different size-classes in the semi-arid regions of the Uttar Pradesh, Jharkhand and Madhya Pradesh states of India, this paper reveals that farmers in the semi-arid regions of India are actively managing soil fertility and other soil properties through a wide range of practices that are based on local resources and knowledge. It also highlights the socio-economic web woven around these soil fertility management practices. This study highlights the contribution of organic matter from traditional soil fertility management practices to maintaining soil health. Livestock has a profound influence on soil fertility enhancement through the supply of organic manure. The empirical data of this study clearly reveal how farmers’ soil fertility management options are being undermined by government policies that give more priority to chemical fertiliser-based strategies. Based on the findings, it is argued that there should be a 'level playing field' for both organic and inorganic soil fertility management methods, promoting and supporting farmers in using organic methods. There is a need to provide credit to farmers for adopting the soil fertility management methods of their choice which suit their socio-economic conditions and best serve the long-term productivity of soils. The study suggests that government policies related to soil fertility management must be enabling, creating the conditions for development based more on locally available resources and local skills and knowledge. This will not only keep Indian soils in a healthy condition but also support the livelihoods of millions of people, especially small and marginal farmers.

Keywords: livestock, organic matter, small farmers, soil fertility

Procedia PDF Downloads 144
1651 Salmonella Emerging Serotypes in Northwestern Italy: Genetic Characterization by Pulsed-Field Gel Electrophoresis

Authors: Clara Tramuta, Floris Irene, Daniela Manila Bianchi, Monica Pitti, Giulia Federica Cazzaniga, Lucia Decastelli

Abstract:

This work presents the results obtained by the Regional Reference Centre for Salmonella Typing (CeRTiS) in a retrospective study aimed at investigating, through Pulsed-Field Gel Electrophoresis (PFGE) analysis, the genetic relatedness of emerging Salmonella serotypes of human origin circulating in the North-West of Italy. Furthermore, the goal of this work was to create a regional database to facilitate foodborne outbreak investigation and to monitor outbreaks at an earlier stage. A total of 112 strains, isolated from 2016 to 2018 in hospital laboratories, were included in this study. The isolates were previously identified as Salmonella according to standard microbiological techniques, and serotyping was performed according to ISO 6579-3 and the Kaufmann-White scheme using O and H antisera (Statens Serum Institut®). All strains were characterized by PFGE; the analysis was conducted according to a standardized PulseNet protocol. The restriction enzyme XbaI was used to generate several distinguishable genomic fragments on the agarose gel. PFGE was performed on a CHEF Mapper system, separating large fragments and generating comparable genetic patterns. The agarose gel was then stained with GelRed® and photographed under ultraviolet transillumination. The PFGE patterns obtained from the 112 strains were compared using Bionumerics version 7.6 software with the Dice coefficient, 2% band tolerance and 2% optimization. For each serotype, the data obtained with PFGE were compared according to the geographical origin and the year in which the strains were isolated. The Salmonella strains were identified as follows: S. Derby n. 34; S. Infantis n. 38; S. Napoli n. 40. All the isolates had appreciable restriction digestion patterns ranging from approximately 40 to 1100 kb. In general, a fairly heterogeneous distribution of pulsotypes emerged in the different provinces. Cluster analysis indicated high genetic similarity (≥ 83%) among strains of S. Derby (n. 30; 88%), S. Infantis (n. 36; 95%) and S. Napoli (n. 38; 95%) circulating in north-western Italy. The study underlines the genomic similarities shared by the emerging Salmonella strains in Northwest Italy and allowed a database to be created to detect outbreaks at an early stage. The results therefore confirmed that PFGE is a powerful and discriminatory tool for investigating the genetic relationships among strains in order to monitor and control the spread of salmonellosis outbreaks. Pulsed-field gel electrophoresis still represents one of the most suitable approaches to characterize strains, in particular for laboratories for which NGS techniques are not available.
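
To illustrate the band-based Dice comparison used in the cluster analysis, the sketch below computes a Dice similarity between two hypothetical XbaI band profiles with a relative position tolerance analogous to the 2% band tolerance; it is not the BioNumerics implementation.

```python
# Band-based Dice similarity: D = 2 * (matching bands) / (bands in A + bands in B)
def dice_similarity(bands_a, bands_b, tolerance=0.02):
    """bands_a, bands_b: fragment sizes in kb; tolerance: relative match window."""
    matched, used = 0, set()
    for a in bands_a:
        for i, b in enumerate(bands_b):
            if i not in used and abs(a - b) <= tolerance * max(a, b):
                matched += 1
                used.add(i)
                break
    return 2.0 * matched / (len(bands_a) + len(bands_b))

# Hypothetical XbaI restriction profiles (fragment sizes in kb)
profile_1 = [1100, 680, 450, 310, 220, 140, 90, 40]
profile_2 = [1100, 690, 450, 300, 220, 140, 95, 40]
print(f"Dice similarity: {dice_similarity(profile_1, profile_2):.2f}")
```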

Keywords: emerging Salmonella serotypes, genetic characterization, human strains, PFGE

Procedia PDF Downloads 80
1650 The Prevalence and Associated Factors of Frailty and Its Relationship with Falls in Patients with Schizophrenia

Authors: Bo-Jian Wu, Si-Heng Wu

Abstract:

Objectives: Frailty is a condition of a person who has chronic health problems complicated by a loss of physiological reserve and deteriorating functional abilities. The frailty syndrome was defined by Fried and colleagues as weight loss, fatigue, decreased grip strength, slow gait speed, and low physical activity. However, to the best of our knowledge, few studies have explored the prevalence of frailty and its association with falls in patients with schizophrenia. Methods: A total of 559 hospitalized patients were recruited from a public psychiatric hospital in 2013. The majority of the subjects were males (361, 64.6%). The average age was 53.5 years. All patients received an assessment of frailty status as defined by Fried and colleagues. Fall status within one year after the frailty assessment, together with clinical and demographic data, was collected from medical records. Logistic regression was used to calculate the odds ratios of associated factors. Results: A total of 9.2% of the participants met the criteria for frailty. The percentage of patients having a fall was 7.2%. Age was significantly associated with frailty (odds ratio = 1.057, 95% confidence interval = 1.025-1.091); however, sex was not associated with frailty (p = 0.17). After adjustment for age and sex, frailty status was associated with a fall (odds ratio = 3.62, 95% confidence interval = 1.58-8.28). Concerning the components of frailty, decreased grip strength (odds ratio = 2.44, 95% confidence interval = 1.16-5.14), slow gait speed (odds ratio = 2.82, 95% confidence interval = 1.21-6.53), and low physical activity (odds ratio = 2.64, 95% confidence interval = 1.21-5.78) were found to be associated with a fall. Conclusions: Our findings suggest that the prevalence of frailty was about 10% in hospitalized patients with chronic schizophrenia, and that frailty status was significantly associated with a fall in this group. By using frailty status, it may be possible to identify patients at risk of falling as early as possible, so that effective interventions for the prevention of further falls can be given in advance. Our results bridge this gap and open a potential avenue for the prevention of falls in patients with schizophrenia. Frailty is certainly an important factor for maintaining wellbeing among these patients.

Keywords: fall, frailty, schizophrenia, Taiwan

Procedia PDF Downloads 135
1649 Development and Validation of Work Movement Task Analysis: Part 1

Authors: Mohd Zubairy Bin Shamsudin

Abstract:

Work-related Musculoskeletal Disorders (WMSDs) are one of the occupational health problems encountered by workers all over the world. In Malaysia, there has been an increasing trend over the years, particularly in the manufacturing sectors. Current methods to observe workplace WMSDs are self-report questionnaires, observation and direct measurement. The observational method is most frequently used by researchers and practitioners because it is simple, quick and versatile when applied at the worksite. However, some limitations have been identified, e.g., some approaches do not cover a wide spectrum of biomechanical activity and are not sufficiently sensitive to assess the actual risks. This paper elucidates the development of Work Movement Task Analysis (WMTA), an observational tool for industrial practitioners, especially untrained personnel, to assess WMSD risk factors and provide a basis for suitable intervention. The first stage of the development protocol involved literature reviews, a practitioner survey, tool validation and reliability testing. A total of six themes/comments were received in the face validity stage. The new revision of the WMTA consisted of four sections on posture (neck, back, shoulder, arms, and legs) and the associated risk factors: movement, load, coupling and basic environmental factors (lighting, noise, odour, heat and slippery floors). The inter-rater reliability study showed substantial agreement among raters, with K = 0.70. Meanwhile, the WMTA validation showed a significant association between WMTA score and self-reported pain or discomfort for the back, shoulders & arms and knees & legs (p < 0.05). This tool is expected to provide a new workplace ergonomic observational tool to assess WMSDs for the next stage of the case study.
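
As an illustration of the inter-rater reliability analysis, the sketch below computes Cohen's kappa for two hypothetical raters scoring the same tasks; values in the 0.61-0.80 range are conventionally interpreted as substantial agreement.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings of the same ten observed tasks by two raters
rater_1 = ["low", "high", "medium", "high", "low", "medium", "high", "low", "medium", "high"]
rater_2 = ["low", "high", "medium", "medium", "low", "medium", "high", "low", "high", "high"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa: {kappa:.2f}")
```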

Keywords: assessment, biomechanics, musculoskeletal disorders, observational tools

Procedia PDF Downloads 449
1648 Evolutionary Advantages of Loneliness with an Agent-Based Model

Authors: David Gottlieb, Jason Yoder

Abstract:

The feeling of loneliness is not uncommon in modern society, and yet there is a fundamental lack of understanding of its origins and purpose in nature. One interpretation of loneliness is that it is a subjective experience that punishes a lack of social behavior, and thus its emergence in human evolution is seemingly tied to the survival of early human tribes. Still, a common counterintuitive response to loneliness is a state of hypervigilance, resulting in social withdrawal, which may appear maladaptive in modern society. So far, no computational model of the effect of loneliness during evolution exists; however, agent-based models (ABM) can be used to investigate social behavior, and applying evolution to agents’ behaviors can demonstrate selective advantages for particular behaviors. We propose an ABM where each agent contains four social behaviors and one goal-seeking behavior, letting evolution select the best behavioral patterns for resource allocation. In our paper, we use an algorithm similar to the boid model to guide the behavior of agents, but expand the set of rules that govern their behavior. While we use cohesion, separation, and alignment for simple social movement, our expanded model adds goal-oriented behavior, which is inspired by particle swarm optimization, such that agents move relative to their personal best position. Since agents are given the ability to form connections by interacting with each other, our final behavior guides agent movement toward their social connections. Finally, we introduce a mechanism to represent a state of loneliness, which engages when an agent's perceived social involvement does not meet its expected social involvement. This enables us to investigate a minimal model of loneliness, and using evolution we attempt to elucidate its value in human survival. Agents are placed in an environment in which they must acquire resources, as their fitness is based on the total resources collected. With these rules in place, we are able to run evolution under various conditions, including resource-rich environments and environments where disease is present. Our simulations indicate that there is strong selection pressure for social behavior under circumstances where there is a clear discrepancy between initial resource locations, and against social behavior when disease is present, mirroring hypervigilance. This not only provides an explanation for the emergence of loneliness, but also reflects the diversity of responses to loneliness in the real world. In addition, there is evidence of a richness of social behavior when loneliness is present. By introducing just two resource locations, we observed a divergence in social motivation after agents became lonely, where one agent learned to move to the other, who was in a better resource position. The results and ongoing work from this project show that it is possible to glean insight into the evolutionary advantages of even simple mechanisms of loneliness. The model we developed has produced unexpected results and has led to more questions, such as the impact loneliness would have at a larger scale, or the effect of creating a set of rules governing interaction beyond adjacency.
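
A highly simplified toy version of such an agent-based model is sketched below: boid-like cohesion, goal-seeking toward a personal-best resource position, and a loneliness flag that engages when perceived social involvement falls below an expected level. All parameters and the specific weighting rule are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_agents, steps, radius, expected_neighbors = 20, 200, 2.0, 2
pos = rng.uniform(0, 20, (n_agents, 2))
personal_best = pos.copy()                      # best resource position seen so far (PSO-style)
resources = rng.uniform(0, 20, (5, 2))          # fixed resource locations

def resource_value(p):
    return -np.min(np.linalg.norm(resources - p, axis=1))   # closer to a resource = better

for _ in range(steps):
    for i in range(n_agents):
        dists = np.linalg.norm(pos - pos[i], axis=1)
        neighbors = (dists < radius) & (dists > 0)
        lonely = neighbors.sum() < expected_neighbors        # perceived < expected involvement
        cohesion = (pos[neighbors].mean(axis=0) - pos[i]) if neighbors.any() else np.zeros(2)
        goal = personal_best[i] - pos[i]
        # In this toy rule, lonely agents weight social movement more strongly
        w_social = 0.8 if lonely else 0.3
        step = w_social * cohesion + (1 - w_social) * goal + rng.normal(0, 0.1, 2)
        pos[i] += 0.1 * step
        if resource_value(pos[i]) > resource_value(personal_best[i]):
            personal_best[i] = pos[i].copy()

print("mean distance to nearest resource:",
      round(float(np.mean([-resource_value(p) for p in pos])), 2))
```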

Keywords: agent-based, behavior, evolution, loneliness, social

Procedia PDF Downloads 74
1647 Predicting Intention and Readiness to Alcohol Consumption Reduction and Cessation among Thai Teenagers Using Scales Based on the Theory of Planned Behavior

Authors: Rewadee Watakakosol, Arunya Tuicomepee, Panrapee Suttiwan, Sakkaphat T. Ngamake

Abstract:

Health problems caused by alcohol consumption not only have short-term effects at the time of drinking but also leave long-lasting health conditions. Teenagers who start drinking in their middle or high school years, or before entering college, have a higher likelihood of increasing their alcohol use and abuse, and they were found to be less healthy than their non-drinking peers when entering adulthood. This study aimed to examine factors that predict intention and readiness to reduce and quit alcohol consumption among Thai teenagers. Participants were 826 high-school and vocational school students, most of whom were female (64.4%), with an average age of 16.4 (SD = 0.9) and an average age of first drinking of 13.7 (SD = 2.2). Instruments included scales developed within the Theory of Planned Behaviour framework: the Attitude toward Alcohol Reduction and Cessation Scale, the Normative Group and Influence Scale, the Perceived Behavioral Control toward Alcohol Reduction and Cessation Scale, the Behavioral Intent toward Alcohol Reduction and Cessation Scale, and the Readiness to Reduce and Quit Alcohol Consumption Scale. Findings revealed that readiness to reduce/quit alcohol was the most powerful predictive factor (β = .53, p < .01), followed by attitude of easiness in alcohol reduction and cessation (β = .46, p < .01), perceived behavioral control toward alcohol reduction and cessation (β = .41, p < .01), normative group and influence (β = .15, p < .01), and attitude of being accepted for alcohol reduction and cessation (β = -.12, p < .01), respectively. Attitude of improved health after alcohol reduction and cessation did not show statistically significant predictive power. Together, the significant factors predicted teenagers’ alcohol reduction and cessation behavior and accounted for 59 percent of the total variance in alcohol consumption reduction and cessation.
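
Standardized coefficients like the β values above are typically obtained by fitting an ordinary least-squares model to z-scored predictors and outcome. The sketch below illustrates that step on synthetic placeholder data, not the study's survey responses; the variable names are assumptions for illustration.

```python
# Minimal sketch: standardized (beta) coefficients via OLS on z-scored variables.
# The data below are synthetic placeholders, not the study's survey responses.
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical predictor columns: readiness, attitude, perceived control, norms.
X = rng.normal(size=(n, 4))
y = X @ np.array([0.5, 0.4, 0.3, 0.1]) + rng.normal(scale=0.7, size=n)

def standardized_betas(X, y):
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)   # z-score each predictor
    yz = (y - y.mean()) / y.std()               # z-score the outcome
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    r_squared = 1 - np.sum((yz - Xz @ beta) ** 2) / np.sum(yz ** 2)
    return beta, r_squared

betas, r2 = standardized_betas(X, y)
print("standardized betas:", np.round(betas, 2), "R^2:", round(r2, 2))
```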

Keywords: alcohol consumption reduction and cessation, intention, readiness to change, Thai teenagers

Procedia PDF Downloads 311
1646 The Effects of Water Fraction and Salinity on Crude Oil-Water Dispersions

Authors: Ramin Dabirian, Yi Zhang, Ilias Gavrielatos, Ram Mohan, Ovadia Shoham

Abstract:

Oil-water emulsions can be found in almost every part of the petroleum industry, namely in reservoir rocks, drilling cuttings circulation, production in wells, transportation pipelines, surface facilities and the refining process. It is therefore necessary for oil production and refinery engineers to resolve petroleum emulsion problems and eliminate contaminants in order to meet environmental standards, achieve the desired product quality and improve equipment reliability and efficiency. A state-of-the-art Dispersion Characterization Rig (DCR) has been utilized to investigate crude oil-distilled water dispersion separation. Over 80 experimental tests were run to investigate the flow behavior and stability of the dispersions. The experimental conditions include the effects of water cut (25%, 50% and 75%), NaCl concentration (0, 3.5% and 18%), mixture flow velocity (0.89 and 1.71 ft/s), and orifice plate type on the separation rate. The experimental data demonstrate that the water cut can significantly affect the separation time and efficiency. Dispersions with lower water cut take longer to separate and have lower separation efficiency. Medium and lower water cuts result in the formation of a mousse emulsion, and phase inversion occurs around the medium water cut. The data also confirm that increasing the NaCl concentration in the aqueous phase can increase the crude oil-water dispersion separation efficiency, especially at higher salinities. The separation profile for dispersions with lower salt concentrations has a lower sedimentation-rate slope before the inflection point, while dispersions in all tests with higher salt concentrations have a larger sedimentation rate. The presence of NaCl can influence the interfacial tension gradients along the interface and plays a role in preventing mousse emulsion formation.

Keywords: oil-water dispersion, separation mechanism, phase inversion, emulsion formation

Procedia PDF Downloads 163
1645 Lineament Analysis as a Method of Mineral Deposit Exploration

Authors: Dmitry Kukushkin

Abstract:

Lineaments form complex grids on the Earth's surface. Currently, one particular object of study for many researchers is the analysis and geological interpretation of maps of lineament density in an attempt to locate various geological structures. But lineament grids are made up of global, regional and local components, and this superimposition of grids of various scales renders the method less effective. Besides, erosion processes and the erosional resistance of rocks lying on the surface play a significant role in the formation of lineament grids. Therefore, a specific lineament density map is characterized by poor contrast (most anomalies do not exceed the average values by more than 30%) and an unstable relation with local geological structures. Our method allows the location and boundaries of local geological structures that are likely to contain mineral deposits to be determined with confidence. Maps of the fields of lineament distortion (residual specific density) created by our method are characterized by high contrast, with anomalies exceeding the average by upward of 200%, and a stable correlation with local geological structures containing mineral deposits. Our method considers a lineament grid as a general lineament field – the surface manifestation of the Earth's stress and strain fields associated with geological structures of global, regional and local scales. Each of these structures has its own field of brittle dislocations that appears at the surface as its lineament field. Our method singles out the local components by suppressing the global and regional components of the general lineament field. The remaining local lineament field is an indicator of local geological structures. The following are some examples of the method's application: 1. Srednevilyuiskoye gas condensate field (Yakutia) - direct proof of the effectiveness of the methodology; 2. Structure of Astronomy (Taimyr) - confirmed by seismic survey; 3. Active gold mine of Kadara (Chita Region) - confirmed by geochemistry; 4. Active gold mine of Davenda (Yakutia) - determined the boundaries of the granite massif that controls mineralization; 5. An object promising for hydrocarbon exploration in the north of Algeria - correlated with the results of geological, geochemical and geophysical surveys. For both Kadara and Davenda, the method demonstrated that intense anomalies of the local lineament fields are consistent with the geochemical anomalies and indicate gold content at commercial levels. By suppressing the global and regional components, our method isolates a local lineament field. In the early stages of geological exploration for oil and gas, this allows the boundaries of various geological structures to be determined with very high reliability. Therefore, our method allows optimized placement of seismic profiles and exploratory drilling equipment, which leads to a reduction in the costs of prospecting for and exploring deposits, as well as accelerating their commissioning.
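
The abstract does not specify the filtering procedure, but one generic way to suppress the global and regional components of a lineament density grid is to subtract a heavily smoothed (low-frequency) trend and keep the residual as the local field. The sketch below illustrates that general idea with a Gaussian filter and synthetic data; the filter choice, window size and grid are assumptions standing in for the authors' method, not a reproduction of it.

```python
# Illustrative sketch only: suppress regional/global components of a lineament
# density map by subtracting a heavily smoothed (low-frequency) trend.
# The authors' actual procedure is not specified in the abstract; the Gaussian
# filter and sigma used here are assumptions for illustration.
import numpy as np
from scipy.ndimage import gaussian_filter

def residual_density(density_map, regional_sigma=25.0):
    """Return the local (residual) component of a lineament density grid."""
    regional = gaussian_filter(density_map, sigma=regional_sigma)  # regional trend
    return density_map - regional                                  # local anomalies

# Hypothetical example: a broad regional gradient plus one local structure.
yy, xx = np.mgrid[0:200, 0:200]
trend = 0.01 * xx                                                # regional component
anomaly = np.exp(-((xx - 120) ** 2 + (yy - 80) ** 2) / (2 * 8 ** 2))  # local structure
local = residual_density(trend + anomaly)
print("max residual anomaly at:", np.unravel_index(np.argmax(local), local.shape))
```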

Keywords: lineaments, mineral exploration, oil and gas, remote sensing

Procedia PDF Downloads 268
1644 Optimisation of Dyes Decolourisation by Bacillus aryabhattai

Authors: A. Paz, S. Cortés Diéguez, J. M. Cruz, A. B. Moldes, J. M. Domínguez

Abstract:

Synthetic dyes are extensively used in the paper, food, leather, cosmetics, pharmaceutical and textile industries. Wastewater resulting from their production poses several environmental problems. Improper disposal of these effluents has adverse impacts not only on colour but also on water quality (total organic carbon, biological oxygen demand, chemical oxygen demand, suspended solids, salinity, etc.), on flora (inhibition of photosynthetic activity), on fauna (toxic, carcinogenic, and mutagenic effects) and on human health. The aim of this work is to optimize the decolourisation of different types of dyes by Bacillus aryabhattai. Initially, different types of dyes (Indigo Carmine, Coomassie Brilliant Blue and Remazol Brilliant Blue R) and suitable culture media (Nutritive Broth, Luria Bertani Broth and Trypticasein Soy Broth) were selected. Then, a central composite design (CCD) was employed to optimise the process and analyse the significance of each abiotic parameter. Three process variables (temperature, salt concentration and agitation) were investigated in the CCD at 3 levels with 2 star points. A total of 23 experiments were carried out according to a full factorial design, consisting of 8 factorial experiments (coded to the usual ±1 notation), 6 axial experiments (on the axes at a distance of ±α from the centre), and 9 replicates (at the centre of the experimental domain). The experimental results demonstrate the efficiency of this strain in removing the tested dyes in the 3 media studied, although Trypticasein Soy Broth (TSB) was the most suitable medium. Indigo Carmine and Coomassie Brilliant Blue were completely decolourised at the maximum tested concentration of 150 mg/l, while acceptable removal was observed with the more complex dye Remazol Brilliant Blue R at a concentration of 50 mg/l.
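
The 23-run layout described above (8 factorial points, 6 axial points and 9 centre replicates for three factors) can be generated in coded units as in the following sketch. The rotatable value α = (2^3)^(1/4) ≈ 1.682 is an assumption, since the abstract states only ±α.

```python
# Sketch: coded design matrix for a 3-factor central composite design with
# 8 factorial points, 6 axial (star) points and 9 centre replicates (23 runs),
# matching the run counts in the abstract. The rotatable alpha is an assumption.
import itertools
import numpy as np

def ccd_matrix(n_factors=3, n_center=9, alpha=None):
    if alpha is None:
        alpha = (2 ** n_factors) ** 0.25        # rotatable design, ~1.682 for 3 factors
    factorial = np.array(list(itertools.product([-1, 1], repeat=n_factors)), float)
    axial = np.zeros((2 * n_factors, n_factors))
    for i in range(n_factors):
        axial[2 * i, i] = -alpha                # star point on the negative axis
        axial[2 * i + 1, i] = alpha             # star point on the positive axis
    center = np.zeros((n_center, n_factors))    # centre replicates
    return np.vstack([factorial, axial, center])

design = ccd_matrix()
print(design.shape)  # (23, 3): coded levels for temperature, salt concentration, agitation
```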

Keywords: Bacillus aryabhattai, dyes, decolourisation, central composite design

Procedia PDF Downloads 201
1643 Climate Variability and Its Impacts on Rice (Oryza sativa) Productivity in Dass Local Government Area of Bauchi State, Nigeria

Authors: Auwal Garba, Rabiu Maijama’a, Abdullahi Muhammad Jalam

Abstract:

Climate variability has affected agricultural production all over the globe. This concern has motivated important changes in the field of research during the last decade. Climate variability is believed to have a declining effect on rice production in Nigeria. This study examined climate variability and its impact on rice productivity in Dass Local Government Area, Bauchi State, by employing a linear trend model (LTM), analysis of variance (ANOVA) and regression analysis. Annual seasonal data on the climatic variables of temperature (minimum and maximum), rainfall, and solar radiation from 1990 to 2015 were used. Results confirmed that 74.4% of the total variation in rice yield in the study area was explained by changes in the independent variables: minimum and maximum temperature, rainfall, and solar radiation. A rising mean maximum temperature would lead to a reduction in rice production, while a moderate increase in mean minimum temperature would be advantageous for rice production; the persistent rise in mean maximum temperature will, in the long run, affect rice production more negatively in the future. It is therefore important to promote agro-meteorological advisory services, which will be useful for farm planning and yield sustainability. Closer collaboration between meteorologists and agricultural scientists is needed to increase awareness of the existing databases and crop-weather models, among others, with a view to reaping the full benefits of research on specific problems and sustainable yield management. There should also be a special initiative by the State Agricultural Development Programmes (ADPs) to promote best agricultural practices in rice production that are resilient to climate variability and support yield sustainability.
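
As a minimal sketch of the regression step, the code below fits an ordinary least-squares model of rice yield on the four climate variables and reports R² (74.4% in the study). The series used here are synthetic placeholders with hypothetical units, not the Dass LGA data.

```python
# Sketch: OLS regression of rice yield on climate variables with R^2.
# The series below are synthetic placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1990, 2016)
n = len(years)
t_min = rng.normal(18, 1, n)     # hypothetical mean minimum temperature (deg C)
t_max = rng.normal(33, 1, n)     # hypothetical mean maximum temperature (deg C)
rain = rng.normal(900, 120, n)   # hypothetical annual rainfall (mm)
solar = rng.normal(20, 2, n)     # hypothetical solar radiation (MJ/m^2/day)
yield_t = 2.0 + 0.10 * t_min - 0.08 * t_max + 0.002 * rain + rng.normal(0, 0.2, n)

X = np.column_stack([np.ones(n), t_min, t_max, rain, solar])  # intercept + predictors
coef, *_ = np.linalg.lstsq(X, yield_t, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((yield_t - pred) ** 2) / np.sum((yield_t - yield_t.mean()) ** 2)
print("coefficients:", np.round(coef, 4), "R^2:", round(r2, 3))
```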

Keywords: climate variability, impact, productivity, rice

Procedia PDF Downloads 80
1642 Nitrogen Fixation in Hare Gastrointestinal Tract

Authors: Tatiana A. Kuznetsova, Maxim V. Vechersky, Natalia V. Kostina, Marat M. Umarov, Elena I. Naumova

Abstract:

One of the main nutritional problems of phytophagous animals is the insufficiency of protein in their forage. Usually, symbiotic microorganisms contribute substantially to both the carbohydrate and nitrogen compounds of the food. But it is not easy for animals with hindgut fermentation to utilize the microbial biomass in the large intestine and caecum. Therefore, some animals, including hares, have developed a special mechanism for utilizing such biomass: obligate autocoprophagy, or reingestion. Hares produce two types of feces, hard and soft. Hard feces are excreted at night, while hares are vigilant (the "foraging period"), and the soft ones (caecotrophs) are produced and reingested in the daytime, during the hares' "resting period". We examined the role of microbial digestion in providing nitrogen nutrition to the hare (Lepus europaeus). We determined nitrogen-fixing ability in the fornix and body of the stomach, the small intestine, the caecum and the colon of the hare gastrointestinal tract during the two main periods of hare activity: the "resting period" (daytime) and the "foraging period" (late evening and very early morning). We used gas chromatography to measure levels of nitrogen-fixing activity (acetylene reduction). Nitrogen-fixing activity was detected in the contents of all analyzed parts of the gastrointestinal tract. Maximum values were recorded in the large intestine. Daily dynamics of the process were also detected. Thus, during the hare "resting period" (caecotroph formation), N2-fixing activity was significantly higher than during the "foraging period", reaching 0.3 nmol C2H4/g·h. The N2-fixing activity in the gastrointestinal tract points to a significant contribution of nitrogen fixers to microbial digestion in the hare and confirms the importance of coprophagy as a nitrogen source in lagomorphs.

Keywords: coprophagy, gastrointestinal tract, lagomorphs, nitrogen fixation

Procedia PDF Downloads 335
1641 Discussion on the Impact and Improvement Strategy of Bike Sharing on Urban Space

Authors: Bingying Liu, Dandong Ge, Xinlan Zhang, Haoyang Liang

Abstract:

Over the past two years, a new generation of dockless (no-pile) bike sharing, represented by Ofo, Mobike and HelloBike, has sprung up in various cities in China and spread rapidly to countries such as Britain, Japan, the United States and Singapore. As a new green public transportation mode, bike sharing can bring a series of benefits to urban space. First, this paper analyzes the specific impact of bike sharing on urban space in China. Based on market research and data analysis, it is found that bike sharing can improve the quality of urban space in three respects: expanding the service radius of public transportation and filling service blind spots, alleviating urban traffic congestion, and enhancing the vitality of urban space. On the other hand, due to the immature market and imperfect system, bike sharing has gradually revealed some difficulties, such as parking chaos, malicious damage, safety problems, and an imbalance between supply and demand. The paper then investigates the current characteristics, business models and operating mechanisms of shared bikes in the Chinese market. Finally, in order to make bike sharing serve urban construction better, this paper puts forward specific countermeasures from four aspects. In terms of market operations, it is necessary to establish a public-private partnership model and set up a unified, integrated bike-sharing management platform. At the technical level, the paper proposes developing an intelligent parking system to regulate parking. At the policy level, establishing a bike-sharing assessment mechanism would strengthen supervision. As for urban planning, sharing data and redesigning slow roadways would benefit transportation and spatial planning.

Keywords: bike sharing, impact analysis, improvement strategy, urban space

Procedia PDF Downloads 146