Search results for: fuzzy multi-objective linear programming problems
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10199


1739 Anti-Oxidant and Anti-Cancer Activity of Helix aspersa Aqueous Extract

Authors: Ibtissem El Ouar, Cornelia Braicu, Dalila Naimi, Alexendru Irimie, Ioana Berindan-Neagoe

Abstract:

Helix aspersa, 'the garden snail', is a large land snail widely found in Mediterranean countries and one of the most consumed species in western Algeria. It is commonly used in zootherapy to purify blood and to treat cardiovascular diseases and liver problems. The aim of our study is to investigate the antitumor activity of an aqueous extract of Helix aspersa, prepared by the traditional method, on Hs578T, a triple-negative breast cancer cell line. First, the free radical scavenging activity of H. aspersa extract was assessed by measuring its capacity to scavenge the 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical and its ability to reduce ferric ion in the FRAP (ferric reducing ability) assay. The cytotoxic effect of the extract against Hs578T cells was evaluated with the MTT test (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide), while the mode of cell death induced by the extract was determined by fluorescence microscopy using the acridine orange/ethidium bromide (AO/EB) probe. The level of TNFα in the cell medium was also measured by ELISA. The results suggest that H. aspersa extract has antioxidant activity, especially at high concentrations, reducing both the DPPH radical and ferric ion. The MTT test shows that the extract has a strong cytotoxic effect against breast cancer cells; the IC50 value corresponds to a 1% dilution of the crude extract. Moreover, AO/EB staining shows that TNFα-induced necrosis is the main form of cell death induced by the extract. In conclusion, the present study may open new perspectives in the search for new natural anticancer drugs.
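The IC50 reported above (a 1% dilution of the crude extract) would typically be read off the MTT dose-response curve. A minimal sketch of that calculation, estimating the IC50 by linear interpolation between the two dilutions bracketing 50% viability; the viability numbers here are hypothetical illustrations, not data from the study:

```python
def ic50_from_curve(concentrations, viabilities):
    """Estimate IC50 by linear interpolation on a dose-response curve.

    concentrations: extract dilutions (%), in ascending order.
    viabilities: % cell viability at each dilution (from MTT absorbances).
    """
    for (c1, v1), (c2, v2) in zip(zip(concentrations, viabilities),
                                  zip(concentrations[1:], viabilities[1:])):
        if v1 >= 50 >= v2:  # the interval where viability crosses 50%
            # linear interpolation between the two bracketing points
            return c1 + (v1 - 50) * (c2 - c1) / (v1 - v2)
    return None  # the curve never crosses 50% viability

# Hypothetical MTT readings: viability falls as the dilution increases.
dilutions = [0.25, 0.5, 1.0, 2.0]     # % of crude extract
viability = [90.0, 70.0, 50.0, 20.0]  # % viable cells
print(ic50_from_curve(dilutions, viability))  # 1.0, i.e. a 1% dilution
```

In practice a sigmoidal (four-parameter logistic) fit is preferred over interpolation, but the bracketing idea is the same.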

Keywords: breast cancer, Helix aspersa, Hs578t cell line, necrosis

Procedia PDF Downloads 394
1738 Class Control Management Issues and Solutions in Interactive Learning Theories’ Efficiency and the Application Case Study: 3rd Year Primary School

Authors: Mohammed Belalia Douma

Abstract:

Interactive learning is widely considered the most effective learning strategy. It is an educational philosophy based on the learner's contribution and involvement, mainly in the classroom: how the learner interacts with the small society of the classroom, and how far he or she engages in challenge, discovery, games, and participation. Interactive learning aims to activate the learner's role in the learning process, focusing on research, experimentation, and the learner's self-reliance in obtaining information, acquiring skills, and forming values and attitudes. It is based not on memorization alone but on developing thinking and problem-solving, on teamwork, and on collaborative learning. With this exchange of roles between teacher and student, the student becomes more active and performs more operations than under traditional methods, and several issues arise concerning classroom management: noise, stability of learning, and so on. This paper observes the application of interactive learning in a real classroom, tests several assumptions, and analyzes the issues these strategies raise, mainly noise and class control. The research sample consisted of about 150 third-year primary school students in the Chlef district, Algeria: beginners aged 8 to 10. We administered a confidential fifteen-question questionnaire and also analyzed learners' attitudes over three months. As teachers, we have witnessed a variety of strategies for applying interactive learning, but with recurring issues: time management, noise, uncontrolled classes, and overcrowded classes. Finally, the study concludes that although active education is undeniably an effective teaching method, it has drawbacks, and not all theoretical strategies can be applied; we close with solutions drawn from this case study.

Keywords: interactive learning, student, learners, strategies

Procedia PDF Downloads 27
1737 Development of Electric Generator and Water Purifier Cart

Authors: Luisito L. Lacatan, Gian Carlo J. Bergonia, Felipe C. Buado III, Gerald L. Gono, Ron Mark V. Ortil, Calvin A. Yap

Abstract:

This paper features the development of a mobile self-sustaining electricity generator for a water distillation process with an MCU-based wireless controller and indicator, designed to address the scarcity of clean water. Pure water is precious, and its value is greatest to those who do not have it. Many water filtration products exist today, yet none fully satisfies the needs of families requiring clean drinking water: all demand either large sums of money or extensive maintenance, and some do not even guarantee potable water. The proposed project was designed to alleviate the scarcity of potable water in the country, and part of its purpose was to identify the project's problems and loopholes, such as the distance and speed required to produce electricity using a wheel and alternator, the time required for the heating element to heat up, the capacity of the battery to maintain the heat of the heating element, and the time required for the boiler to produce clean, potable water. The project has three parts. The first part covered the researchers' effort to plan every aspect of the project: the conversion of mechanical energy to electrical energy, the purification of water into potable drinking water, and the project's controller and indicator built on a microcontroller unit (MCU). This included identifying the problems encountered and possible solutions to prevent and avoid errors. Gathering and reviewing related studies helped the researchers reduce and prevent problems before they could be encountered. It also covered the price and quantity of materials used, to control the budget.
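The sizing questions the team raises (heating-element warm-up time, battery capacity, boiler output) all reduce to an energy balance. As a back-of-the-envelope sketch, the energy to distill a given mass of water is the sensible heat to reach boiling plus the latent heat of vaporisation; the 500 W element and 1 kg batch below are illustrative assumptions, not the cart's specifications:

```python
# Energy to distill water: Q = m*c*dT + m*Lv
C_WATER = 4186     # J/(kg*K), specific heat capacity of water
LV_WATER = 2.26e6  # J/kg, latent heat of vaporisation at 100 C

def distillation_energy_j(mass_kg, start_c=25.0, boil_c=100.0):
    """Joules needed to heat water from start_c to boiling and vaporise it."""
    sensible = mass_kg * C_WATER * (boil_c - start_c)
    latent = mass_kg * LV_WATER
    return sensible + latent

def heating_time_s(mass_kg, element_power_w):
    """Seconds a heating element of the given power needs (ideal, no losses)."""
    return distillation_energy_j(mass_kg) / element_power_w

# Illustrative figures: 1 kg of water, a hypothetical 500 W element.
energy = distillation_energy_j(1.0)          # ~2.57 MJ
minutes = heating_time_s(1.0, 500.0) / 60.0  # ~86 minutes, ignoring losses
print(round(energy), round(minutes, 1))
```

Real designs must budget extra capacity for thermal losses and for the alternator's charging efficiency, which this idealised estimate ignores.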

Keywords: mobile, self – sustaining, electricity generator, water distillation, wireless battery indicator, wireless water level indicator

Procedia PDF Downloads 285
1736 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis

Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara

Abstract:

Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database. Two use the natural logarithm of the CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was built from the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis, together with artificial neural networks (ANN), was successfully applied to correlate the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model to calculate: (i) the number of neurons in the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the Root Mean Squared Error (RMSE), used to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with predictions from sixteen well-known, previously developed gas geothermometers, was statistically evaluated using an external database to avoid bias. The statistical evaluation compared the lowest RMSE values among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used to predict subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.), as well as in a blind geothermal system (Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind Acoculco system. Details of this new development are outlined in the present work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
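The evaluation step described above rests on two statistics: the linear correlation coefficient R and the RMSE between measured (BHTM) and ANN-simulated (BHTANN) temperatures. A minimal sketch of computing both, on hypothetical bottomhole temperature pairs rather than the paper's database:

```python
import math

def rmse(measured, predicted):
    """Root Mean Squared Error between measured and simulated values."""
    return math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted))
                     / len(measured))

def pearson_r(xs, ys):
    """Linear correlation coefficient R between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical bottomhole temperatures (degrees C), for illustration only.
bht_m   = [250.0, 280.0, 300.0, 320.0, 340.0]  # measured
bht_ann = [248.0, 285.0, 298.0, 325.0, 338.0]  # ANN-simulated
print(round(rmse(bht_m, bht_ann), 2), round(pearson_r(bht_m, bht_ann), 4))
```

The geothermometer with the lowest RMSE (and R closest to 1) on the external database would be preferred, as in the screening the abstract describes.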

Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy

Procedia PDF Downloads 311
1735 Identifying Strategies and Techniques for the Egyptian Medium and Large Size Contractors to Respond to Economic Hardship

Authors: Michael Salib, Samer Ezeldin, Ahmed Waly

Abstract:

There are numerous challenges facing the construction industry in several Middle Eastern countries as a result of economic and political pressures. In Egypt, for example, several construction companies have shut down and left the market since 2016. These companies closed because they did not respond with the techniques and strategies that would have enabled them to survive the period of economic turmoil. This research identifies strategies that Egyptian contractors could implement to survive and keep competing during such a hardship period. Two techniques were used to identify these strategies. First, in-depth research was conducted on companies located in countries that suffered similar economic hardship, to identify the strategies they used to survive. Second, interviews were conducted with experts in the construction field to list the effective strategies that allowed them to survive. At the end of each interview, the experts were asked to rate the applicability of the previously identified strategies used in foreign countries, and then the likely efficiency of each strategy if used in Egypt. A framework model was developed to assist construction companies in choosing techniques suited to their size, by identifying the top-ranked strategies and techniques the company should adopt based on the parameters given to the model. To verify the framework, the financial statements of two leading companies in the Egyptian construction market were studied: the first contractor has applied nearly all the top-ranked strategies identified in this paper, while the other has applied only a few. Finally, additional expert interviews were conducted to validate the framework: these experts were asked to test the model and to rate its applicability and effectiveness through a questionnaire.
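The framework's core step is ranking candidate strategies by the expert ratings gathered in the interviews. A hedged sketch of such a ranking; the strategy names, ratings, and the equal-weight scoring rule are illustrative assumptions, not the paper's actual model or data:

```python
def rank_strategies(ratings, w_applicability=0.5, w_efficiency=0.5):
    """Rank strategies by a weighted score of two expert ratings (1-5 scale).

    ratings: {strategy_name: (applicability, efficiency_if_used_in_egypt)}
    Returns (strategy, score) pairs, best first.
    """
    scored = {name: w_applicability * appl + w_efficiency * eff
              for name, (appl, eff) in ratings.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative expert ratings, not the study's data.
ratings = {
    "diversify into new sectors": (4.2, 4.6),
    "reduce overhead costs":      (4.8, 4.1),
    "form joint ventures":        (3.5, 3.9),
}
for strategy, score in rank_strategies(ratings):
    print(f"{score:.2f}  {strategy}")
```

A real implementation would also condition the weights and the candidate set on company size, as the framework's input parameters suggest.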

Keywords: construction management, economic hardship, recession, survive

Procedia PDF Downloads 107
1734 Determination of Authorship of the Works Created by the Artificial Intelligence

Authors: Vladimir Sharapaev

Abstract:

This paper addresses the question of the authorship of copyrighted works created solely by artificial intelligence or with its use, and proposes interpretational and legislative solutions to the problems arising from the plurality of persons potentially involved in the ultimate creation of the work and the division of tasks among them. Based on the commonly accepted assumption that a copyrighted work can only be created by a natural person, the paper does not deal with the creativity of artificial intelligence per se (or the lack thereof); instead, it focuses on the distribution of the intellectual property rights potentially belonging to the creators of the artificial intelligence and/or the creators of the content used to form the copyrighted work. Moreover, the technical development and rapid improvement of AI-based programmes, which tend to reach ever greater independence from human beings, raise the question of whether the initial creators of the artificial intelligence can be entitled to the intellectual property rights in the works created by such AI at all. As the judicial practice of some European courts and legal doctrine incline to the latter view, indicating that works created by AI may not enjoy copyright protection at all, questions of authorship are causing great concern among investors in the development of the relevant technology. Although technology companies have further instruments for protecting their investments at their disposal, the risk that the works in question are not copyrighted, arising from inconsistent case law and a certain research gap, constitutes a highly important issue. To assess the possible interpretations, the author adopted a doctrinal and analytical approach, systematically analysing European and Czech copyright law and case law in several EU jurisdictions. This study aims to contribute to greater legal certainty regarding the authorship of AI-created works and to define clues for further research.

Keywords: artificial intelligence, copyright, authorship, copyrighted work, intellectual property

Procedia PDF Downloads 100
1733 Ground Track Assessment Using Electrical Resistivity Tomography Application

Authors: Noryani Natasha Yahaya, Anas Ibrahim, Juraidah Ahmad, Azura Ahmad, Mohd Ikmal Fazlan Rosli, Zailan Ramli, Muhd Sidek Muhd Norhasri

Abstract:

The subgrade formation is an important element of the railway structure and underpins overall track stability. Conventional track maintenance involves frequent substructure component replacement, and regular re-ballasting partially contributes to the embankment's long-term settlement problem. For long-term subgrade stability analysis, geophysical methods are commonly used to diagnose hidden sources and mechanisms of track deterioration that visual inspection cannot detect. Electrical resistivity tomography (ERT) is one of the applicable geophysical tools for railway subgrade inspection and track monitoring, owing to its flexibility and the reliability of its analysis. ERT was conducted at KM 23.0 of the Pinang Tunggal track to investigate the railway subgrade by characterizing and mapping the track formation profile, generated directly with the 2D analysis of Res2dinv software. The profiles allow examination of the presence and spatial extent of significant subgrade layers and screening for poor contact at soil boundaries. Based on the findings, there is intermixing of an interlayer between the sub-ballast and the sand. Although the embankment track considered here is at no immediate risk of settlement or failure, regular monitoring of the track's location will allow early corrective maintenance if necessary. The developed track formation data closely matches the side view of the assessed track, and the 2D data visualization of the track embankment agrees well with the initial assumption based on the general side view of the main structural elements.
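ERT sections like the one produced by Res2dinv are inverted from many individual apparent-resistivity measurements. As a minimal sketch of the underlying field measurement (the inversion itself is done by Res2dinv and is not shown), the apparent resistivity for a Wenner electrode array with spacing a follows the standard relation rho_a = 2*pi*a*V/I; the voltage and current values below are illustrative:

```python
import math

def wenner_apparent_resistivity(spacing_m, voltage_v, current_a):
    """Apparent resistivity (ohm*m) for a Wenner array: rho_a = 2*pi*a*V/I."""
    return 2 * math.pi * spacing_m * voltage_v / current_a

# Illustrative field reading: 5 m electrode spacing, 0.25 V measured
# between the potential electrodes, 0.1 A injected.
rho_a = wenner_apparent_resistivity(5.0, 0.25, 0.1)
print(round(rho_a, 1))  # ~78.5 ohm*m
```

Repeating such readings along the line at several spacings builds the pseudosection that the 2D inversion turns into the subsurface resistivity profile.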

Keywords: ground track, assessment, resistivity, geophysical method, railway

Procedia PDF Downloads 123
1732 Using Short Learning Programmes to Develop Students’ Digital Literacies in Art and Design Education

Authors: B.J. Khoza, B. Kembo

Abstract:

Global socioeconomic developments and the ever-growing technological advancement of the art and design industry indicate the pivotal importance of lifelong learning. There is a discrepancy between competencies, personal ambition, and workplace requirements, yet few, if any, institutions of higher learning in South Africa offer Short Learning Programmes (SLPs) in art and design education. Traditionally, art and design education is delivered face to face via a hands-on approach, so the enduring perception among educators is that it does not lend itself to online delivery. SLPs are a concentrated way to generate revenue and attract prospective students to further study, often of weighted value to both students and employers, and higher education institutions use them to generate income in support of their core academic programmes. However, there is a gap in translating art and design studio pedagogy into SLPs that provide quality education, are adaptable, and are delivered in a blended mode. In this paper, we propose a conceptual framework, drawing on secondary research, to analyse existing research on SLPs for art and design education. We aim to add a new dimension to the process of using a design-based research approach for short learning programmes in art and design education. The study is a qualitative analysis through the lens of Herrington, McKenney, Reeves and Oliver's (2005) principles of the design-based research approach. The results indicate that design-based research is not only an effective methodological approach for developing and deploying an art and design education curriculum for first-year students in a higher education context but also has the potential to guide future research. The findings propose that the design-based research approach could bring theory and praxis together around a common purpose: designing context-based solutions to educational problems.

Keywords: design education, design-based research, digital literacies, multi-literacies, short learning programme

Procedia PDF Downloads 132
1731 Students' Online Evaluation: Impact on the Polytechnic University of the Philippines Faculty's Performance

Authors: Silvia C. Ambag, Racidon P. Bernarte, Jacquelyn B. Buccahi, Jessica R. Lacaron, Charlyn L. Mangulabnan

Abstract:

This study aimed to answer the question, "What is the impact of students' online evaluation on PUP faculty's performance?" The problem was addressed through the objective of determining the perceived impact of students' online evaluation on the faculty's performance. The objectives were pursued through a quantitative research design and a survey research method. The researchers utilized primary and secondary data: primary data were gathered from a self-administered survey, and secondary data were collected from books, articles in both print and online materials, and related theses. Findings revealed that PUP faculty in general stated that students' online evaluation made a highly positive impact on their performance in 'Knowledge of Subject' and 'Teaching for Independent Learning', with the highest means of 3.62 and 3.60, respectively, followed by 'Commitment' and 'Management of Learning', with overall means of 3.55 and 3.53. From these findings, the researchers concluded that students' online evaluation made a 'Highly Positive' impact on PUP faculty's performance in all four (4) areas. Furthermore, the findings reveal that PUP faculty encountered many problems with the students' online evaluation; that its impact is significant with respect to the employment status of the faculty; and that most PUP faculty recommend reviewing the PUP Online Survey for Faculty Evaluation for improvement. Hence, the researchers recommend that the PUP administration revisit and revise the PUP Online Survey for Faculty Evaluation, specifically by reviewing the questions and preparing sets of questions appropriate to the discipline or field of each faculty member. The administration should also fully orient students on the importance, purpose, and impact of online faculty evaluation. Lastly, the researchers suggest that PUP faculty continue their positive performance and their cooperation with the administration's efforts to address students' concerns, and they urge students to take the online faculty evaluation honestly and objectively.
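The per-area means reported above (3.62, 3.60, 3.55, 3.53) are ordinary Likert-scale averages mapped to a verbal label. A minimal sketch of that computation; the 4-point scale, the sample responses, and the 3.25 cutoff for "Highly Positive" are illustrative assumptions, not the study's instrument:

```python
def mean_rating(responses):
    """Average of Likert responses (assumed 1-4 scale)."""
    return sum(responses) / len(responses)

def interpret(mean, cutoff=3.25):
    """Map a mean to a verbal label; the 3.25 cutoff is an assumption."""
    return "Highly Positive" if mean >= cutoff else "Positive"

# Hypothetical responses for one evaluation area, e.g. 'Knowledge of Subject'.
responses = [4, 4, 3, 4, 3, 4, 4, 3]
m = mean_rating(responses)
print(round(m, 2), interpret(m))
```

The same reduction, run per evaluation area, yields the four area means the abstract compares.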

Keywords: online evaluation, faculty, performance, Polytechnic University of the Philippines (PUP)

Procedia PDF Downloads 378
1730 Assessing Diagnostic and Evaluation Tools for Use in Urban Immunisation Programming: A Critical Narrative Review and Proposed Framework

Authors: Tim Crocker-Buque, Sandra Mounier-Jack, Natasha Howard

Abstract:

Background: Due to both the increasing scale and speed of urbanisation, urban areas in low and middle-income countries (LMICs) host increasingly large populations of under-immunized children, with the additional associated risks of rapid disease transmission in high-density living environments. Multiple interdependent factors are associated with these coverage disparities in urban areas and most evidence comes from relatively few countries, e.g., predominantly India, Kenya, Nigeria, and some from Pakistan, Iran, and Brazil. This study aimed to identify, describe, and assess the main tools used to measure or improve coverage of immunisation services in poor urban areas. Methods: Authors used a qualitative review design, including academic and non-academic literature, to identify tools used to improve coverage of public health interventions in urban areas. Authors selected and extracted sources that provided good examples of specific tools, or categories of tools, used in a context relevant to urban immunization. Diagnostic (e.g., for data collection, analysis, and insight generation) and programme tools (e.g., for investigating or improving ongoing programmes) and interventions (e.g., multi-component or stand-alone with evidence) were selected for inclusion to provide a range of type and availability of relevant tools. These were then prioritised using a decision-analysis framework and a tool selection guide for programme managers developed. Results: Authors reviewed tools used in urban immunisation contexts and tools designed for (i) non-immunization and/or non-health interventions in urban areas, and (ii) immunisation in rural contexts that had relevance for urban areas (e.g., Reaching every District/Child/ Zone). Many approaches combined several tools and methods, which authors categorised as diagnostic, programme, and intervention. 
The most common diagnostic tools were cross-sectional surveys, key informant interviews, focus group discussions, secondary analysis of routine data, and geographical mapping of outcomes, resources, and services. Programme tools involved multiple stages of data collection, analysis, insight generation, and intervention planning, and included guidance documents from WHO (World Health Organisation), UNICEF (United Nations Children's Fund), USAID (United States Agency for International Development), and governments, as well as articles reporting on diagnostics, interventions, and/or evaluations to improve urban immunisation. Interventions involved service improvement, education, reminder/recall, incentives, outreach, and mass media, or were multi-component. The main gaps in existing tools were assessment of macro/policy-level factors, exploration of effective immunisation communication channels, and measurement of in/out-migration. The proposed framework uses a problem-tree approach to suggest tools for five common challenges (i.e., identifying populations, understanding communities, issues with service access and use, improving services, improving coverage) based on context and available data. Conclusion: This study identified many tools relevant to evaluating urban LMIC immunisation programmes, with significant crossover between tools. This was encouraging in terms of supporting the identification of common areas, but problematic in that data volumes, instructions, and activities could overwhelm managers, and tools are not always applied in suitable contexts. Further research is needed on how best to combine tools and methods to suit local contexts. The authors' initial framework can be tested and developed further.

Keywords: health equity, immunisation, low and middle-income countries, poverty, urban health

Procedia PDF Downloads 117
1729 Upward Spread Forced Smoldering Phenomenon: Effects and Applications

Authors: Akshita Swaminathan, Vinayak Malhotra

Abstract:

Smoldering is one of the most persistent types of combustion; it can continue for very long periods (hours, days, months) if fuel is abundant. It causes a notable number of accidents and is a prime suspect in fire and safety hazards: it can be started by a weaker ignition source and is more difficult to suppress than flaming combustion. In upward spread smoldering, the air flow is parallel to the direction of the smoldering front. This type of smoldering is difficult to control, and hence there is a need to study the phenomenon. Compared with flaming combustion, smoldering often goes unrecognised and is therefore a cause of various fire accidents. A simplified experimental setup was built to study upward spread smoldering, its response to varying forced flow, and its behavior in the presence of external heat sources and alternative energy sources such as acoustic energy. Linear configurations were studied under varying forced-flow conditions. The effect of varying forced flow on upward spread smoldering was observed and studied (i) in the presence of an external heat source and (ii) in the presence of an external alternative energy source (acoustic energy). The role of ash removal was also observed and studied. Results indicate that upward spread forced smoldering is affected by key controlling parameters such as the speed of the forced flow, the surface orientation, and the interspace distance (the distance between the forced flow and the pilot fuel). When an external heat source was placed on either side of the pilot fuel, the smoldering phenomenon was affected; the surface orientation and the interspace distance between the external heat sources and the pilot fuel were found to play a major role in altering the regression rate. Lastly, when an alternative energy source in the form of acoustic energy impinged on the smoldering front, varying frequencies affected the smoldering phenomenon in different ways, and surface orientation again played an important role. This project highlights the importance of fire and safety hazards and of better combustion for scientific research and practical applications. The knowledge acquired from this work can be applied to engineering systems ranging from aircraft and spacecraft to building fires and wildfires, helping us better understand and hence avoid such widespread fires. Various fire disasters have been recorded in aircraft in which small electrical short circuits led to smoldering fires that eventually set the engine on fire, at great cost to life and property. Studying this phenomenon can help us control, if not prevent, such disasters.
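The regression rate the experiments track is typically reduced from the smolder front's position over time. A minimal sketch of that reduction, fitting a least-squares line to front positions; the sampled times and positions below are hypothetical, not the experiment's data:

```python
def regression_rate(times_s, positions_mm):
    """Least-squares slope of front position vs. time = regression rate (mm/s)."""
    n = len(times_s)
    mt = sum(times_s) / n
    mp = sum(positions_mm) / n
    num = sum((t - mt) * (p - mp) for t, p in zip(times_s, positions_mm))
    den = sum((t - mt) ** 2 for t in times_s)
    return num / den

# Hypothetical upward-spread front positions sampled every 30 s.
times = [0, 30, 60, 90, 120]       # s
front = [0.0, 3.1, 5.9, 9.2, 12.0]  # mm from ignition point
print(round(regression_rate(times, front), 3))  # ~0.1 mm/s
```

Comparing this slope across forced-flow speeds, orientations, and interspace distances is how the controlling parameters above would show their effect.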

Keywords: alternative energy sources, flaming combustion, ignition, regression rate, smoldering

Procedia PDF Downloads 107
1728 Raman Spectroscopy Analysis of MnTiO₃-TiO₂ Eutectic

Authors: Adrian Niewiadomski, Barbara Surma, Katarzyna Kolodziejak, Dorota A. Pawlak

Abstract:

Oxide-oxide eutectics are attracting increasing interest from the scientific community because of their unique properties and numerous potential applications, among the most interesting of which are metamaterials, glucose sensors, photoactive materials, thermoelectric materials, and photocatalysts. Their unique properties result from the fact that these composite materials consist of two or more phases; as a result, they exhibit both additive properties, originating from the particular phases, and product properties, originating from the interaction between phases. The MnTiO₃-TiO₂ eutectic is one such material. TiO₂ is a well-known semiconductor used as a photocatalyst; it may also be used in solar cells, gas sensing devices, and electrochemistry. MnTiO₃ is a semiconductor and an antiferromagnet, with potential applications in integrated circuit devices, as a gas and humidity sensor, in non-linear optics, and as a visible-light-activated photocatalyst. These facts indicate that the MnTiO₃-TiO₂ eutectic is an extremely promising material that should be studied. Although Raman spectroscopy is a powerful method for characterizing materials, to our knowledge Raman studies of eutectics are very limited, there are no Raman studies of the MnTiO₃-TiO₂ eutectic, and papers on this material are scarce. The MnTiO₃-TiO₂ eutectic, as well as TiO₂ and MnTiO₃ single crystals, were grown by the micro-pulling-down method at the Institute of Electronic Materials Technology in Warsaw, Poland. A nitrogen atmosphere was maintained throughout the crystal growth process. The as-grown samples of the MnTiO₃-TiO₂ eutectic and of the TiO₂ and MnTiO₃ single crystals are black and opaque. Samples were cut perpendicular to the growth direction, and the cross sections were examined with scanning electron microscopy (SEM) and Raman spectroscopy. The present studies showed that maintaining a nitrogen atmosphere during crystal growth may result in black TiO₂ crystals. SEM and Raman experiments showed that the studied eutectic consists of three distinct regions: two correspond to MnTiO₃, while the third corresponds to a TiO₂₋ₓNₓ phase. Raman studies indicated that the TiO₂₋ₓNₓ phase crystallizes in the rutile structure, showing that Raman experiments can be used successfully to characterize eutectic materials. In summary, the MnTiO₃-TiO₂ eutectic was grown by the micro-pulling-down method, and SEM and micro-Raman experiments were used to establish its phase composition. The studies revealed that the TiO₂ phase had been doped with nitrogen and is therefore, in fact, a solid solution of TiO₂₋ₓNₓ composition. The remaining two phases exhibit Raman lines of both rutile TiO₂ and MnTiO₃, pointing to some coexistence of these phases in the studied eutectic.

Keywords: compound materials, eutectic growth and characterization, Raman spectroscopy, rutile TiO₂

Procedia PDF Downloads 170
1727 Developing a Cloud Intelligence-Based Energy Management Architecture Facilitated with Embedded Edge Analytics for Energy Conservation in Demand-Side Management

Authors: Yu-Hsiu Lin, Wen-Chun Lin, Yen-Chang Cheng, Chia-Ju Yeh, Yu-Chuan Chen, Tai-You Li

Abstract:

Demand-Side Management (DSM) has the potential to reduce the electricity costs and carbon emissions associated with electricity use in modern society. A home Energy Management System (EMS), commonly used by residential consumers in the downstream sector of a smart grid to monitor, control, and optimize the energy efficiency of domestic appliances, is a system of computer-aided functionalities serving as an energy audit for residential DSM. Implementing fault detection and classification for the domestic appliances monitored, controlled, and optimized is one of the most important steps toward preventive maintenance, such as residential air conditioning and heating preventive maintenance in residential/industrial DSM. In this study, a cloud intelligence-based green EMS built on an Internet of Things (IoT) technology stack for residential DSM is developed. In the EMS, Arduino MEGA Ethernet communication-based smart sockets, which incorporate a Real-Time Clock chip to keep track of the current time (as timestamps synchronized via the Network Time Protocol), are designed and implemented to take readings of load phenomena reflected in the sensed voltage and current signals. Also, a Network-Attached Storage providing data access to a heterogeneous group of IoT clients via Hypertext Transfer Protocol (HTTP) methods is configured to store the parsed sensor readings. Lastly, a desktop computer with a WAMP software bundle (the Microsoft® Windows operating system, Apache HTTP Server, the MySQL relational database management system, and the PHP programming language) serves as a data science analytics engine for a dynamic web app/REpresentational State Transfer-ful web service of the residential DSM with globally advanced Internet of Artificial Intelligence (AI)/Computational Intelligence. An abstract computing machine, the Java Virtual Machine, enables the desktop computer to run Java programs, and a mash-up of Java, the R language, and Python is well suited and well configured for the AI in this study.
With the ability to send real-time push notifications to IoT clients, the desktop computer implements Google-maintained Firebase Cloud Messaging to engage IoT clients across Android/iOS devices and to provide a mobile notification service for residential/industrial DSM. In this study, in order to realize edge intelligence, whereby edge devices avoid network latency and the much-needed connectivity of Internet connections for the Internet of Services while supporting secure access to data stores and providing immediate analytical and real-time actionable insights at the edge of the network, we upgrade the designed and implemented smart sockets to embedded AI Arduino ones (called embedded AIduino). To realize edge analytics for data analytics with the proposed embedded AIduino, an Arduino Ethernet shield (WIZnet W5100) with a micro SD card connector is used. The SD library is included for reading parsed data from, and writing parsed data to, an SD card, and an Artificial Neural Network library for the Arduino MEGA, ArduinoANN, is imported and used for the locally-embedded AI implementation. The embedded AIduino developed in this study can be further applied in manufacturing-industry energy management and sustainable energy management; in sustainable energy management, for example, rotating machinery diagnostics can identify energy loss from gross misalignment and unbalance of rotating machines in power plants.
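To illustrate the kind of locally-embedded inference such a smart socket performs, the sketch below shows a minimal feedforward neural network forward pass in plain Python (the study itself uses the ArduinoANN C library on the device; the network size, weights, and input features here are purely hypothetical):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_out):
    # hidden layer: one weight vector per neuron, bias stored as last element
    hidden = [sigmoid(sum(w[i] * v for i, v in enumerate(inputs)) + w[-1])
              for w in w_hidden]
    # single output neuron, also with a trailing bias term
    return sigmoid(sum(w * h for w, h in zip(w_out[:-1], hidden)) + w_out[-1])

# toy 2-2-1 network scoring a (normalized voltage, current) reading pair;
# all numbers are invented for illustration
w_hidden = [[0.5, -0.4, 0.1], [-0.3, 0.8, 0.0]]
w_out = [1.2, -0.7, 0.05]
score = forward([0.9, 0.2], w_hidden, w_out)  # a value in (0, 1)
```

A trained network of this shape could flag anomalous load signatures on-device, which is the essence of the edge fault-detection step described above.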

Keywords: demand-side management, edge intelligence, energy management system, fault detection and classification

Procedia PDF Downloads 228
1726 Understanding Everyday Insecurities Emerging from Fragmented Territorial Control in Post-Accord Colombia

Authors: Clara Voyvodic

Abstract:

Transitions from conflict to peace are by no means smooth or linear, particularly from the perspective of those living through them. Over the last few decades, the changing focus in peacebuilding studies has come to appreciate the everyday experience of communities and how it provides a lens through which the relative success or efficacy of these transitions can be understood. In particular, the demobilization of a significant conflict actor is not without consequences, not just for the macro view of state stabilization and peace, but for the communities who find themselves without a clear authority of territorial control. In Colombia, the demobilization and disarmament of the FARC guerrilla group provided a brief respite from the conflict and a major political win for President Juan Manuel Santos. However, this victory has proven short-lived. Drawing from extensive field research in Colombia within the last year, including interviews with local communities and actors operating in these regions, field observations, and other primary resources, this paper examines the post-accord transitions in Colombia and the everyday security experiences of local communities in regions formerly controlled by the FARC. To do so, the research took a semi-ethnographic approach in the northern region of the department of Antioquia and the coastal area of the border department of Nariño, documenting how individuals within these marginalized communities have come to understand and negotiate their security in the years following the accord and the demobilization of the FARC. This presentation will argue that the removal of the FARC as an informal governance actor opened a space for multiple actors, including the state, to attempt to control the same territory. This shift has had a clear impact on the everyday security experiences of the local communities.
Exploring the dynamics of local governance and their impact on lived security experiences, this research demonstrates how distinct patterns of armed group behavior are emerging not only from the vacuum of control left by the FARC but from an increase in state presence that nonetheless remains inconsistent and unpersuasive as a monopoly of force in the region. The increased multiplicity of actors, particularly the state, has meant that the normal (informal) rules by which communities navigate these territories are no longer in play, as the identities, actions, and intentions of the different competing groups have become frustratingly opaque. This research provides a timely analysis of how the shifting dynamics of territorial control in a post-peace-accord landscape produce uncertain realities that affect the daily lives of local communities and endanger the long-term prospect of human-centered security.

Keywords: armed actors, conflict transitions, informal governance, post-accord, security experiences

Procedia PDF Downloads 108
1725 Cross-Sectional Association between Socio-Demographic Factors and Paid Blood Donation in Half Million Chinese Population

Authors: Jiashu Shen, Guoting Zhang, Zhicheng Wang, Yu Wang, Yun Liang, Siyu Zou, Fan Yang, Kun Tang

Abstract:

Objectives: This study aims to enhance the understanding of paid blood donors' characteristics in the Chinese population and to devise strategies to protect these paid donors. Background: Paid blood donation was the predominant mode of blood donation in China from the 1970s to 1998 and caused several health and social problems, including a greatly increased risk of infectious diseases due to nonstandard operation under unhygienic conditions. Methods: This study utilized cross-sectional data from the China Kadoorie Biobank, covering about 0.5 million people from 10 regions of China from 2004 to 2008. Multivariable logistic regression was performed to examine the associations between socio-demographic factors and paid blood donation. Furthermore, a stratified analysis by education level and annual household income was applied across rural and urban areas. Results: The prevalence of paid blood donation was 0.50% in China, and males were more likely than females to have donated blood for payment (adjusted odds ratio (AOR) for females relative to males = 0.81, 95% confidence interval (CI): 0.75-0.88). Urban residents had much lower odds than rural residents (AOR = 0.24, 95% CI: 0.21-0.27). People with a high annual household income had lower odds of paid blood donation than people with low income (AOR = 0.37, 95% CI: 0.31-0.44). Compared with people who received no schooling, people with a higher level of education had increased odds of paid blood donation (AOR = 2.31, 95% CI: 1.94-2.74). Conclusion: Paid blood donation in China was associated with being male, living in a rural area, having a low annual household income, and having a higher educational level.
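The adjusted odds ratios above come from multivariable logistic regression; as a simplified sketch of the underlying quantity, the snippet below computes a crude (unadjusted) odds ratio with a Wald 95% confidence interval from a 2x2 table. The counts are invented for illustration and are not the CKB data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed donors, b = exposed non-donors,
    c = unexposed donors, d = unexposed non-donors."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo, hi = or_ * math.exp(-z * se), or_ * math.exp(z * se)
    return or_, lo, hi

# hypothetical counts: paid donors among urban (exposed) vs rural residents
or_, lo, hi = odds_ratio_ci(120, 99880, 500, 99500)  # OR ≈ 0.24
```

A full multivariable model would adjust this estimate for the other covariates (sex, income, education) simultaneously.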

Keywords: China Kadoorie Biobank, Chinese population, paid blood donation, socio-demographic factors

Procedia PDF Downloads 130
1724 Optical and Near-UV Spectroscopic Properties of Low-Redshift Jetted Quasars in the Main Sequence Context

Authors: Shimeles Terefe Mengistue, Ascensión Del Olmo, Paola Marziani, Mirjana Pović, María Angeles Martínez-Carballo, Jaime Perea, Isabel M. Árquez

Abstract:

Quasars have historically been classified into two distinct classes, radio-loud (RL) and radio-quiet (RQ), according to the presence or absence of relativistic radio jets, respectively. The absence of spectra with a high S/N ratio led to the impression that all quasars (QSOs) are spectroscopically similar. Although different attempts have been made to unify these two classes, there is a long-standing open debate on the possibility of a real physical dichotomy between RL and RQ quasars. In this work, we present new high-S/N spectra of 11 extremely powerful jetted quasars with radio-to-optical flux density ratios > 1000 that concomitantly cover the low-ionization emission of Mg II λ2800 and Hβ as well as the Fe II blends in the redshift range 0.35 < z < 1, observed at Calar Alto Observatory (Spain). This work aims to quantify broad emission line differences between RL and RQ quasars by using the four-dimensional eigenvector 1 (4DE1) parameter space and its main sequence (MS), and to check the effect of powerful radio ejection on the low-ionization broad emission lines. Emission lines are analysed using two complementary approaches: a multicomponent non-linear fitting to account for the individual components of the broad emission lines, and an analysis of the full profile of the lines through parameters such as total widths, centroid velocities at different fractional intensities, asymmetry, and kurtosis indices. It is found that the broad emission lines show large redward asymmetry both in Hβ and Mg II λ2800. The location of our RL sources in the UV plane looks similar to the optical one, with weak Fe II UV emission and broad Mg II λ2800. We supplement the 11 sources with large samples from previous work to gain some general inferences.
The results show that, compared to RQ quasars, our extreme RL quasars exhibit larger median Hβ full width at half maximum (FWHM), weaker Fe II emission, larger 𝑀BH, lower 𝐿bol/𝐿Edd, and a restricted space occupation in the optical and UV MS planes. The differences are more elusive when the comparison is carried out by restricting the RQ population to the region of the MS occupied by RL quasars, albeit an unbiased comparison matching 𝑀BH and 𝐿bol/𝐿Edd suggests that the most powerful RL quasars show the highest redward asymmetries in Hβ.
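Profile parameters such as the FWHM and the centroid at a fractional intensity can be measured directly from a fitted line profile. The sketch below is a minimal, generic implementation (not the authors' actual analysis code), demonstrated on a synthetic Gaussian Hβ-like profile:

```python
import math

def width_and_centroid(x, y, frac=0.5):
    """Full width and centroid of a single-peaked emission-line profile at a
    given fractional intensity (frac=0.5 gives the FWHM and c(1/2)),
    using linear interpolation between grid points."""
    level = frac * max(y)
    # blue-side crossing: first sample at or above the level
    i = next(i for i in range(len(y)) if y[i] >= level)
    xb = x[i - 1] + (x[i] - x[i - 1]) * (level - y[i - 1]) / (y[i] - y[i - 1])
    # red-side crossing: last sample at or above the level
    j = next(j for j in range(len(y) - 1, -1, -1) if y[j] >= level)
    xr = x[j] + (x[j + 1] - x[j]) * (level - y[j]) / (y[j + 1] - y[j])
    return xr - xb, 0.5 * (xb + xr)

# synthetic symmetric Gaussian centred on Hβ 4861 Å with sigma = 20 Å
x = [4700.0 + 0.5 * k for k in range(601)]
y = [math.exp(-(xi - 4861.0) ** 2 / (2.0 * 20.0 ** 2)) for xi in x]
fwhm, c_half = width_and_centroid(x, y)  # expect FWHM ≈ 2.355 * sigma
```

On a real asymmetric profile, comparing centroids at different fractional intensities (e.g., c(1/4) vs. c(1/2)) quantifies the redward asymmetry discussed above.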

Keywords: galaxies: active, line: profiles, quasars: emission lines, supermassive black holes

Procedia PDF Downloads 33
1723 Role of Microplastics on Reducing Heavy Metal Pollution from Wastewater

Authors: Derin Ureten

Abstract:

Plastic pollution does not disappear; it gets smaller and smaller through photolysis (caused mainly by the sun's radiation), thermal oxidation, thermal degradation, and biodegradation, the action of organisms digesting larger plastics. All plastic pollutants have exceedingly harmful effects on the environment. With the COVID-19 pandemic, the number of plastic products such as masks and gloves flowing into the environment has increased more than ever. However, microplastics are not the only pollutants in water; among the most tenacious and toxic pollutants are heavy metals. Heavy metal solutions are capable of causing a variety of health problems in organisms, such as cancer, organ damage, nervous system damage, and even death. The aim of this research is to show that microplastics can be used in wastewater treatment systems by demonstrating that they can adsorb heavy metals in solution. The experiment for this research will include two solutions: one containing microplastics in heavy metal contaminated water, and one containing only the heavy metal solution. After being sieved, the absorbance of both media will be measured with a spectrometer. Iron(III) chloride (FeCl3) will be used as the heavy metal solution, since the solution becomes darker as the concentration of this substance increases. The experiment will be supported by pure Nile Red powder in order to observe whether there are any visible differences under the microscope; Nile Red is a dye that binds to hydrophobic materials such as plastics and lipids. If evidence of adsorption can be observed from the final absorbance readings of the solutions and the visuals provided by the Nile Red powder, the experiment will be repeated at different temperatures in order to identify the most suitable temperature for the removal of heavy metals from water.
New wastewater treatment systems incorporating microplastics could then be developed for water contaminated with heavy metals.
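Because absorbance is proportional to concentration under the Beer-Lambert law (A = εlc) at fixed path length, the relative drop in absorbance between the control and the microplastic-treated solution approximates the fraction of heavy metal adsorbed. A minimal sketch, with hypothetical absorbance values:

```python
def removal_efficiency(a_control, a_treated):
    """Percent reduction in absorbance between the untreated control and the
    microplastic-treated solution. Under Beer-Lambert (A = eps*l*c), absorbance
    is proportional to concentration, so this approximates the percentage of
    heavy metal removed by adsorption."""
    return 100.0 * (a_control - a_treated) / a_control

# hypothetical FeCl3 absorbance readings before and after treatment
efficiency = removal_efficiency(0.80, 0.60)  # ≈ 25% removal
```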

Keywords: microplastics, heavy metal, pollution, adsorption, wastewater treatment

Procedia PDF Downloads 54
1722 Biogas Production from Lake Bottom Biomass from Forest Management Areas

Authors: Dessie Tegegne Tibebu, Kirsi Mononen, Ari Pappinen

Abstract:

In areas with forest management, agricultural, and industrial activity, sediments and biomass accumulate in lakes through drainage systems, which may cause biodiversity loss and health problems. One possible solution is the utilization of lake bottom biomass and sediments for biogas production. The main objective of this study was to investigate the potential of lake bottom materials for the production of biogas by anaerobic digestion and to study the effect of pretreatment of the feed materials on biogas yield. Lake bottom materials were collected from two sites, Likokanta and Kutunjärvi lake, and mixed with straw-horse manure to produce biogas in a laboratory-scale reactor. The results indicated that the highest biogas yields were observed when the feed was composed of 50% lake bottom materials and 50% straw-horse manure, while with more than 50% lake bottom materials in the feed, biogas production decreased. The CH4 content from Likokanta lake materials with straw-horse manure and from Kutunjärvi lake materials with straw-horse manure was similar when the feed consisted of the 50%/50% mixture. However, with feeds above 50% lake bottom materials, the CH4 concentration started to decrease, impairing the gas process. Pretreatment applied to the Kutunjärvi lake materials showed a slight negative effect on biogas production and the lowest CH4 concentration throughout the experiment. The average CH4 production from pretreated Kutunjärvi lake materials with straw-horse manure (208.9 ml g-1 VS) and untreated Kutunjärvi lake materials with straw-horse manure (182.2 ml g-1 VS) was markedly higher than from Likokanta lake materials with straw-horse manure (157.8 ml g-1 VS). According to the experimental results, utilization of 100% lake bottom materials for biogas production is likely to perform poorly.
In the future, further analyses to improve biogas yields and an assessment of costs and benefits are needed before utilizing lake bottom materials for the production of biogas.

Keywords: anaerobic digestion, biogas, lake bottom materials, sediments, pretreatment

Procedia PDF Downloads 295
1721 Preparation and Chromatographic Performance of a Zinc Ion Imprinted Monolithic Column and Its Adsorption Property

Authors: X. Han, S. Duan, C. Liu, C. Zhou, W. Zhu, L. Kong

Abstract:

The ionic imprinting technique produces a three-dimensional rigid structure with fixed pore sizes, formed by the binding interactions of ions and functional monomers with the ion used as the template; it provides a high level of recognition of the ionic template. To prepare a monolithic column by in-situ polymerization, the template, functional monomers, cross-linking agent, and initiator are dissolved together in solution and injected into the column tube; the mixture then undergoes a polymerization reaction at a certain temperature, after which the unreacted template and the solvent are washed out. Monolithic columns are easy to prepare, low in reagent consumption, and cost-effective, with fast mass transfer; in addition, they offer many chemical functionalities. But monolithic columns have some problems in practical application: efficiency is low, accurate quantitative analysis is prevented by wide and tailing peaks, the choice of polymerization systems is limited, and theoretical foundations are lacking. Thus the optimization of components and preparation methods is an important research direction. During the preparation of ionic imprinted monolithic columns, the pore-forming agent makes the polymer develop a porous structure, which influences the physical properties of the polymer; what's more, it directly determines the stability and selectivity of the polymerization reaction. The compounds generated in the pre-polymerization reaction directly determine the identification and screening capabilities of the imprinted polymer; thus the choice of pore-forming agent is quite critical in the preparation of imprinted monolithic columns. This article mainly investigates how different pore-forming agents affect the zinc ion enrichment performance of a zinc ion imprinted monolithic column.

Keywords: high performance liquid chromatography (HPLC), ionic imprinting, monolithic column, pore-forming agent

Procedia PDF Downloads 191
1720 Homeostatic Analysis of the Integrated Insulin and Glucagon Signaling Network: Demonstration of Bistable Response in Catabolic and Anabolic States

Authors: Pramod Somvanshi, Manu Tomar, K. V. Venkatesh

Abstract:

Insulin and glucagon are responsible for the homeostasis of key plasma metabolites such as glucose, amino acids, and fatty acids in the blood plasma. These hormones act antagonistically to each other during the secretion and signaling stages. In the present work, we analyze the effect of macronutrients on the response of the integrated insulin and glucagon signaling pathways. The insulin and glucagon pathways are connected by DAG (a calcium signaling component that is part of the glucagon signaling module), which activates PKC and inhibits IRS (an insulin signaling component), constituting one crosstalk. AKT (an insulin signaling component) inhibits cAMP (a glucagon signaling component) through PDE3, forming the other crosstalk between the two pathways. The physiological level of anabolism and catabolism is captured through a metric quantified by the activity levels of AKT and PKA in their phosphorylated states, which represent the insulin and glucagon signaling endpoints, respectively. Under resting and starving conditions, the phosphorylation metric represents homeostasis, indicating a balance between the anabolic and catabolic activities in the tissues. The steady state analysis of the integrated network demonstrates the presence of a bistable response in the phosphorylation metric with respect to input plasma glucose levels. This indicates that two steady state conditions (one in the homeostatic zone and the other in the anabolic zone) are possible for a given glucose concentration, depending on the ON or OFF path. When glucose levels rise above normal, under post-meal conditions, the bistability is observed in the anabolic space, denoting the dominance of glycogenesis in the liver. For glucose concentrations lower than physiological levels, as while exercising, the metabolic response lies in the catabolic space, denoting the prevalence of glycogenolysis in the liver.
The non-linear positive feedback of AKT on IRS in the insulin signaling module of the network is the main cause of the bistable response. The span of bistability in the phosphorylation metric increases as plasma fatty acid and amino acid levels rise, and eventually the response turns monostable and catabolic, representing diabetic conditions. In the case of a high-fat or high-protein diet, fatty acids and amino acids have an inhibitory effect on the insulin signaling pathway by increasing the serine phosphorylation of the IRS protein via the activation of PKC and S6K, respectively. A similar analysis was also performed with respect to input amino acid and fatty acid levels. This emergent property of bistability in the integrated network helps us understand why it becomes extremely difficult to treat obesity and diabetes when the blood glucose level rises beyond a certain value.
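The bistability produced by such a positive feedback loop can be illustrated with a generic one-variable toy model (not the authors' full signaling network): a species x degraded linearly but produced through Hill-type self-activation, loosely mimicking the AKT-IRS loop. For the same input, integrating from a low and a high initial condition settles on two different steady states:

```python
def rate(x, basal=0.1, vmax=2.0):
    """dx/dt for a toy positive-feedback loop: basal input plus a Hill-type
    self-activation term (Hill coefficient n = 2), minus first-order decay.
    All parameter values are illustrative."""
    return basal + vmax * x * x / (1.0 + x * x) - x

def steady_state(x0, dt=0.01, steps=5000):
    """Forward-Euler integration to (numerical) steady state."""
    x = x0
    for _ in range(steps):
        x += dt * rate(x)
    return x

low = steady_state(0.05)   # settles on the low (homeostatic-like) branch
high = steady_state(3.0)   # settles on the high (anabolic-like) branch
```

The dependence of the final state on the starting branch is exactly the ON-path/OFF-path hysteresis described in the abstract.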

Keywords: bistability, diabetes, feedback and crosstalk, obesity

Procedia PDF Downloads 242
1719 Discrete-Event Modeling and Simulation Methodologies: Past, Present and Future

Authors: Gabriel Wainer

Abstract:

Modeling and Simulation (M&S) methods have been used to better analyze the behavior of complex physical systems, and it is now common to use simulation as a part of the scientific and technological discovery process. M&S advanced thanks to improvements in computer technology, which, in many cases, resulted in the development of simulation software using ad-hoc techniques. Formal M&S methods appeared in an attempt to improve the development of very complex simulation systems. Some of these techniques proved successful in providing a sound base for the development of discrete-event simulation models, improving the ease of model definition and enhancing application development tasks, reducing costs and favoring reuse. The DEVS formalism is one of these techniques; it proved successful in providing means for modeling while reducing development complexity and costs. DEVS model development is based on a sound theoretical framework. The independence of M&S tasks made it possible to run DEVS models on different environments (personal computers, parallel computers, real-time equipment, and distributed simulators) and middleware. We will present a historical perspective of discrete-event M&S methodologies, showing different modeling techniques. We will introduce the origins and general ideas of DEVS and compare it with some of these techniques. We will then show the current status of DEVS M&S, and we will discuss a technological perspective on solving current M&S problems (including real-time simulation, interoperability, and model-centered development techniques). We will show some examples of the current use of DEVS, including applications in different fields. We will finally discuss current open topics in the area, which include advanced methods for centralized, parallel or distributed simulation, the need for real-time modeling techniques, and our view of these fields.
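The flavor of a DEVS atomic model, with its time-advance function, internal/external transition functions, and output function, can be sketched in a few lines of Python (an illustrative sketch, not the API of any particular DEVS tool):

```python
class Processor:
    """Minimal DEVS-style atomic model: a server that holds one job for a
    fixed service time and then emits it."""
    def __init__(self, service_time):
        self.service_time = service_time
        self.phase, self.job, self.sigma = "idle", None, float("inf")

    def ta(self):                 # time-advance function
        return self.sigma

    def delta_ext(self, e, job):  # external transition: a job arrives
        if self.phase == "idle":
            self.phase, self.job, self.sigma = "busy", job, self.service_time
        else:
            self.sigma -= e       # busy: ignore the job, keep counting down

    def output(self):             # output function (lambda)
        return self.job

    def delta_int(self):          # internal transition: service finished
        self.phase, self.job, self.sigma = "idle", None, float("inf")

# drive the model by hand: job "A" arrives at t = 0 and finishes at t = 2.5
p = Processor(2.5)
p.delta_ext(0.0, "A")
done_at = p.ta()       # 2.5: time until the next internal event
finished = p.output()  # "A": emitted just before the internal transition
p.delta_int()
```

A DEVS simulator generalizes exactly this driving loop: it repeatedly advances time by ta(), collects outputs, and applies internal or external transitions, which is what makes models portable across sequential, parallel, and distributed engines.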

Keywords: modeling and simulation, discrete-event simulation, hybrid systems modeling, parallel and distributed simulation

Procedia PDF Downloads 299
1718 Design and Development of a Mechanical Force Gauge for the Square Watermelon Mold

Authors: Morteza Malek Yarand, Hadi Saebi Monfared

Abstract:

This study aimed at designing and developing a mechanical force gauge for the square watermelon mold for the first time. It also introduces the characteristics of the square watermelon and its production limitations, and describes the performance of the mechanical force gauge and the product itself. There are three main designable gauge models: (a) a hydraulic gauge, (b) a strain gauge, and (c) a mechanical gauge. The advantage of the hydraulic model is that it instantly displays the pressure, and thus the force, exerted by the melon. However, considering the inability to measure forces in all directions, complicated development, high cost, possible hydraulic fluid leakage into the fruit chamber, and the possible influence of increased ambient temperature on the fluid pressure, the development of this gauge was ruled out. The second choice was to calculate pressure from the direct force measured by a strain gauge. The main advantage of strain gauges over spring types is their high measurement precision; but given the mismatch between the strain gauge working range and watermelon growth, the calculations ran into problems. Finally, the mechanical pressure gauge has several advantages, including the ability to measure forces and pressures on the mold surface during melon growth; the ability to display the peak forces; the ability to produce a melon growth graph thanks to its continuous force measurements; the conformity of its manufacturing materials with the required physical conditions of melon growth; good air circulation; the ability to let sunlight reach the melon rind (no yellowish skin and quality loss); fast and straightforward calibration; no damage to the product during assembly and disassembly; visual inspection capability of the product within the mold; applicability to all growth environments (field, greenhouse, etc.); a simple process; low costs; and so forth.

Keywords: mechanical force gauge, mold, reshaped fruit, square watermelon

Procedia PDF Downloads 248
1717 Study of Mobile Game Addiction Using Electroencephalography Data Analysis

Authors: Arsalan Ansari, Muhammad Dawood Idrees, Maria Hafeez

Abstract:

The use of mobile phones has increased considerably over the past decade; currently, they are one of the main sources of communication and information. Initially, mobile phones were limited to calls and messages, but with the advent of new technology, smartphones came to be used for many other purposes, including video games. Despite positive outcomes, addiction to video games on mobile phones has become a leading cause of psychological and physiological problems among many people. Several researchers have examined different aspects of behavioral addiction with the use of different scales. The objective of this study is to examine whether there is any distinction between mobile-game-addicted and non-addicted players with the use of electroencephalography (EEG), based upon psycho-physiological indicators. The mobile players were asked to play a mobile game, and EEG signals were recorded by BIOPAC equipment with AcqKnowledge as the data acquisition software. Electrodes were placed following the 10-20 system. EEG was recorded at a sampling rate of 200 samples/sec (12,000 samples/min). EEG recordings were obtained from the frontal (Fp1, Fp2), parietal (P3, P4), and occipital (O1, O2) lobes of the brain. The frontal lobe is associated with behavioral control, personality, and emotions; the parietal lobe is involved in perception, understanding logic, and arithmetic; and the occipital lobe plays a role in visual tasks. For this study, a 60-second time window was chosen for analysis. Preliminary analysis of the signals was carried out with the AcqKnowledge software of BIOPAC Systems. From a survey based on the CGS manual study (2010), it was concluded that five participants out of fifteen were in the addictive category. This was used as prior information to group the addicted and non-addicted players by physiological analysis. Statistical analysis showed that, by applying a clustering analysis technique, the authors were able to categorize the addicted and non-addicted players, specifically using the theta frequency range of the occipital area.
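The clustering step above operates on band-power features. The sketch below shows, with a naive DFT in plain Python (a real pipeline would typically use Welch's method on the recorded EEG; the signals here are synthetic sinusoids), how theta-band (4-8 Hz) power separates a theta-dominated window from an alpha-dominated one at the study's 200 samples/sec rate:

```python
import cmath
import math

def band_power(signal, fs, f_lo, f_hi):
    """Sum of DFT power over bins whose frequency lies in [f_lo, f_hi] Hz.
    Naive O(n^2) DFT, fine for a short illustrative window."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            coeff = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                        for i, s in enumerate(signal))
            power += abs(coeff) ** 2 / n
    return power

fs = 200                                  # sampling rate used in the study
t = [i / fs for i in range(fs * 2)]       # a 2-second analysis window
theta_like = [math.sin(2 * math.pi * 6 * ti) for ti in t]   # 6 Hz tone
alpha_like = [math.sin(2 * math.pi * 11 * ti) for ti in t]  # 11 Hz tone

theta_power = band_power(theta_like, fs, 4.0, 8.0)  # large: tone is in-band
alpha_power = band_power(alpha_like, fs, 4.0, 8.0)  # near zero: out of band
```

Feature vectors of such band powers per electrode are what a clustering algorithm (e.g., k-means) would then partition into addicted/non-addicted groups.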

Keywords: mobile game, addiction, psycho-physiology, EEG analysis

Procedia PDF Downloads 137
1716 On-Ice Force-Velocity Modeling Technical Considerations

Authors: Dan Geneau, Mary Claire Geneau, Seth Lenetsky, Ming-Chang Tsai, Marc Klimstra

Abstract:

Introduction: Horizontal force-velocity profiling (HFVP) involves modeling an athlete's linear sprint kinematics to estimate valuable maximum force and velocity metrics. This approach to performance modeling has been used in field-based team sports and has recently been introduced to ice hockey as a forward skating performance assessment. While preliminary data has been collected on ice, the distance constraints of the on-ice test restrict the ability of athletes to reach their maximal velocity, which limits the model's ability to effectively estimate athlete performance. This is especially true of more elite athletes. This report explores whether athletes on ice are able to reach a velocity plateau similar to what has been seen in overground trials. Fourteen male Major Junior ice hockey players (BW = 83.87 ± 7.30 kg, height = 188 ± 3.4 cm, age = 18 ± 1.2 years) were recruited. For on-ice sprints, participants completed a standardized warm-up consisting of skating and dynamic stretching and a progression of three skating efforts from 50% to 95%. Following the warm-up, participants completed three on-ice 45 m sprints, with three minutes of rest between trials. For overground sprints, participants completed a similar dynamic warm-up, followed by three 40 m overground sprint trials. For each trial (on-ice and overground), radar aimed at the participant's waist (Stalker ATS II, Texas, USA) was used to collect instantaneous velocity. Sprint velocities were modeled with a custom Python script using a mono-exponential function, similar to previous work. To determine whether on-ice trials were achieving a maximum velocity (plateau), the minimum acceleration values of the modeled data at the end of the sprint were compared (using a paired t-test) between on-ice and overground trials. Significant differences (P < 0.001) between overground and on-ice minimum accelerations were observed.
It was found that on-ice trials consistently reported higher final acceleration values, indicating that a maximum maintained velocity (plateau) had not been reached. Based on these preliminary findings, it is suggested that reliable HFVP metrics cannot yet be collected from all ice hockey populations using current methods. Elite male populations were not able to achieve a velocity plateau similar to what has been seen in overground trials, indicating the absence of a maximum velocity measure. With current velocity and acceleration modeling techniques, which depend on reaching a velocity plateau, these results indicate the potential for error in on-ice HFVP measures. These findings therefore suggest that a greater on-ice sprint distance, or other velocity modeling techniques that do not require maximal velocity for a complete profile, may be needed.
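A minimal sketch of the mono-exponential model shows why sprint distance matters: integrating v(t) = vmax·(1 - exp(-t/τ)) gives position, and the modeled acceleration remaining when the athlete crosses the finish line shrinks with distance. The vmax and τ values below are hypothetical, not measured data from this study:

```python
import math

def velocity(t, vmax, tau):
    """Mono-exponential sprint model: v(t) = vmax * (1 - exp(-t/tau))."""
    return vmax * (1.0 - math.exp(-t / tau))

def position(t, vmax, tau):
    """Integral of the velocity model from 0 to t."""
    return vmax * (t + tau * (math.exp(-t / tau) - 1.0))

def accel_at_distance(d, vmax, tau):
    """Modeled acceleration remaining at distance d, found by bisecting
    position(t) = d; a(t) = (vmax - v(t)) / tau for this model."""
    lo, hi = 0.0, 60.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if position(mid, vmax, tau) < d:
            lo = mid
        else:
            hi = mid
    t = 0.5 * (lo + hi)
    return (vmax - velocity(t, vmax, tau)) / tau

# hypothetical elite skater: vmax = 9 m/s, tau = 1.4 s
a_45m = accel_at_distance(45.0, 9.0, 1.4)  # residual accel at a 45 m finish
a_90m = accel_at_distance(90.0, 9.0, 1.4)  # much smaller at a longer sprint
```

A non-zero residual acceleration at the end of the 45 m test is precisely the "no plateau" signature reported above; a longer sprint drives it toward zero.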

Keywords: ice-hockey, sprint, skating, power

Procedia PDF Downloads 64
1715 Design and Assessment of Traffic Management Strategies for Improved Mobility on Major Arterial Roads in Lahore City

Authors: N. Ali, S. Nakayama, H. Yamaguchi, M. Nadeem

Abstract:

Traffic congestion is a matter of prime concern in developing countries. This can be primarily attributed to poor design practices and the biased allocation of resources based on political will, neglecting technical feasibility in infrastructure design. During the last decade, Lahore has expanded at an unprecedented rate compared to surrounding cities due to greater funding and resource allocation by previous governments. As a result, people from surrounding cities and areas moved to Lahore for better opportunities and quality of life. This migration inflow left the city with an increased population that the existing infrastructure cannot efficiently accommodate under the enhanced traffic demand, leading to traffic congestion on the major arterial roads of the city. In this simulation study, a major arterial road was selected to evaluate the performance of five intersections by changing the geometry of the intersections or the signal control type. Simulations were done in two software packages: Highway Capacity Software (HCS), and Synchro Studio with SimTraffic. The traffic management strategies employed include actuated signal control, semi-actuated signal control, fixed-time signal control, and a roundabout. For each intersection, the most feasible of the above-mentioned traffic management techniques was selected, based on the least delay time (seconds) and an improved Level of Service (LOS). The results showed that the Jinnah Hospital and Akbar Chowk intersections improved by 92.97% and 92.67% in delay time reduction, respectively. These results can be used by traffic planners and policy makers in decision making on the expansion of these intersections, keeping in mind traffic demand in future years.
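For reference, the LOS letters used to rank the alternatives map directly from average control delay. A small helper following the commonly cited HCM thresholds for signalized intersections (A ≤ 10 s/veh through F > 80 s/veh):

```python
def signalized_los(delay_s):
    """LOS letter from average control delay (s/veh) at a signalized
    intersection, using the commonly cited HCM thresholds:
    A <= 10, B <= 20, C <= 35, D <= 55, E <= 80, F > 80."""
    for letter, limit in (("A", 10), ("B", 20), ("C", 35), ("D", 55), ("E", 80)):
        if delay_s <= limit:
            return letter
    return "F"

# e.g. an alternative that cuts delay from 95 s/veh to 45 s/veh moves F -> D
before, after = signalized_los(95.0), signalized_los(45.0)
```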

Keywords: traffic congestion, traffic simulation, traffic management, congestion problems

Procedia PDF Downloads 447
1714 Evaluation of Classification Algorithms for Diagnosis of Asthma in Iranian Patients

Authors: Taha SamadSoltani, Peyman Rezaei Hachesu, Marjan GhaziSaeedi, Maryam Zolnoori

Abstract:

Introduction: Data mining is defined as the process of finding patterns and relationships in data in order to build predictive models. Its applications have extended into many sectors, including healthcare services. Medical data mining aims to solve real-world problems in the diagnosis and treatment of diseases, applying various techniques and algorithms that differ in accuracy and precision. The purpose of this study was to apply knowledge discovery and data mining techniques to the diagnosis of asthma based on patient symptoms and history. Method: Data mining involves several steps and user decisions. It starts by developing an understanding of the scope of the application and of relevant prior knowledge, and by identifying the knowledge discovery process from the stakeholders' point of view; it finishes by acting on the discovered knowledge: applying it, integrating it with other systems, and documenting and reporting it. In this study, a stepwise methodology was followed to achieve a logical outcome. Results: The sensitivity, specificity, and accuracy of the KNN, SVM, Naïve Bayes, neural network, classification tree, and CN2 algorithms were evaluated alongside related studies, and ROC curves were plotted to show the performance of the system. Conclusion: The results show that asthma can be diagnosed accurately, at approximately ninety percent, from demographic and clinical data. The study also showed that methods based on pattern discovery and data mining have higher sensitivity than expert and knowledge-based systems. On the other hand, medical guidelines and evidence-based medicine should remain the basis of diagnostic methods; machine learning algorithms are therefore recommended in combination with knowledge-based algorithms.
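The three evaluation metrics the abstract names are standard functions of a binary confusion matrix and can be sketched as follows. The counts used here are hypothetical, not the study's data.

```python
# Minimal sketch of the evaluation metrics named above, computed from a
# binary confusion matrix (tp/fp/tn/fn counts are hypothetical).

def metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)            # true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

sens, spec, acc = metrics(tp=85, fp=10, tn=90, fn=15)
print(sens, spec, acc)  # 0.85 0.9 0.875
```

An ROC curve generalizes this by sweeping the classifier's decision threshold and plotting sensitivity against 1 − specificity at each point.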

Keywords: asthma, data mining, classification, machine learning

Procedia PDF Downloads 424
1713 Studies on Space-Based Laser Targeting System for the Removal of Orbital Space Debris

Authors: Krima M. Rohela, Raja Sabarinath Sundaralingam

Abstract:

Humans have been launching rockets since the beginning of the space age in the late 1950s. We have come a long way since then, and the launch success rate has increased considerably. Yet every successful launch releases a large amount of junk, or debris, into the upper layers of the atmosphere. Space debris has been a major concern for a long time; it includes spent rocket stages released during launch and parts of defunct satellites. Some of this junk falls back toward Earth and burns up in the atmosphere, but most of it goes into orbit around the Earth and remains there for at least 100 years. It can endanger functioning satellites and may affect future crewed missions to space. The main concern is that increasing space activity raises the risk of collisions if the debris is not dealt with soon; such collisions could trigger what is known as the Kessler syndrome. This debris can be removed by a space-based laser targeting system, and that approach is investigated and discussed here. The first step is to launch a satellite carrying a high-power laser device into space, above the debris belt. The target material is then ablated with a focused laser beam; this step depends strongly on the attitude and orientation of the debris with respect to the Earth and the device. The laser beam expels a jet of vapour and plasma from the material, so, in accordance with Newton's third law of motion, a reaction force acts on the debris in the opposite direction. This decelerates the debris, causing it to descend toward the Earth under gravity and disintegrate in the upper layers of the atmosphere; larger pieces can be directed toward the oceans. This method of removing orbital debris would enable safer passage for future crewed missions into space.
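The ablation recoil described above is commonly summarized by a momentum coupling coefficient, the impulse imparted per joule of delivered laser energy, so that the velocity change is Δv = C_m · E · N / m for N pulses of energy E on a fragment of mass m. The sketch below uses this relation; C_m and all numbers are illustrative assumptions, not values from the study.

```python
# Back-of-the-envelope sketch of laser-ablation recoil. The momentum
# coupling coefficient c_m (N*s per joule of delivered laser energy)
# and all numbers below are illustrative assumptions.

def delta_v(c_m, pulse_energy_j, n_pulses, mass_kg):
    """Velocity change (m/s) accumulated over repeated ablation pulses."""
    return c_m * pulse_energy_j * n_pulses / mass_kg

# e.g. c_m ~ 5e-5 N*s/J, 10 kJ pulses, 100 pulses, a 1 kg fragment
dv = delta_v(c_m=5e-5, pulse_energy_j=1e4, n_pulses=100, mass_kg=1.0)
print(dv)  # 50.0 (m/s)
```

A retrograde Δv of this order lowers the fragment's perigee into denser atmosphere, where drag completes the de-orbit.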

Keywords: attitude, Kessler syndrome, laser ablation, Newton’s third law of motion, satellites, space debris

Procedia PDF Downloads 112
1712 On-Line Supercritical Fluid Extraction, Supercritical Fluid Chromatography, Mass Spectrometry: A Technique in Pharmaceutical Analysis

Authors: Narayana Murthy Akurathi, Vijaya Lakshmi Marella

Abstract:

The literature is reviewed with regard to on-line supercritical fluid extraction (SFE) coupled directly to supercritical fluid chromatography (SFC) with mass spectrometry, a combination that is typically more sensitive than conventional LC-MS/MS and GC-MS/MS. On-line techniques that combine sample preparation, separation, and detection in one analytical setup are increasingly attractive: they require less human intervention, use small amounts of sample and organic solvent, and yield greater analyte enrichment in a shorter time. Sample extraction is performed under light shielding and anaerobic conditions, preventing the degradation of thermolabile analytes. The approach can analyze compounds over a wide polarity range, and because SFC generally uses carbon dioxide collected as a by-product of other chemical reactions or from the atmosphere, it contributes no new chemicals to the environment. The diffusion of solutes in supercritical fluids is about ten times greater than in liquids and about three times less than in gases, which decreases resistance to mass transfer in the column and allows fast, high-resolution separations. The drawback of SFC with carbon dioxide as the mobile phase is that the direct introduction of water samples poses a series of problems; water must therefore be eliminated before it reaches the analytical column. Hundreds of compounds can be analyzed simultaneously simply by enclosing the sample in an extraction vessel. This is especially applicable in the pharmaceutical industry: it can analyze fatty acids and phospholipids, which have many analogues with very similar UV spectra, as well as trace additives in polymers; cleaning validation can be conducted by placing a swab sample in an extraction vessel; and hundreds of pesticides can be analyzed with good resolution.

Keywords: supercritical fluid extraction (SFE), supercritical fluid chromatography (SFC), LC-MS/MS, GC-MS/MS

Procedia PDF Downloads 367
1711 Mental Accounting Theory Development Review and Application

Authors: Kang-Hsien Li

Abstract:

As industries worldwide use technology to enhance their applications, draw research closer to actual human behavior, and produce data analysis, mental accounting, an extension of prospect theory, offers a useful lens. This paper reviews its development and explores its applications in marketing and finance, together with directions for future work. For the foreseeable future, payment behavior will depend on the form of currency, which affects many product types; marketing strategies that offer diverse payment methods can thereby enhance overall sales performance. This affects not only consumption but also investment. Credit cards, PayPal, Apple Pay, Bitcoin, and other emerging payment instruments enabled by advances in technology have begun to change how people perceive the value and concept of money, with implications for national social welfare policy and for monetary and financial regulators. Recent studies reflect two different ideas. The first is that individuals are affected by situational frames, operating at the level of basic mental processes rather than through broad effects at the event level. The second is that when an event affects a broader range of people, the majority will converge on the same rational choice. These ideas apply to practical marketing and at the same time help explain anomalies in financial markets, which feature varied investment products and diverse participants; together they provide an in-depth description of human mental processes. Finally, as artificial intelligence applications develop, people may be able to reduce biased decisions, which will prompt further discussion of economic and marketing strategy.

Keywords: mental accounting, behavioral economics, consumer behavior, decision-making

Procedia PDF Downloads 428
1710 Environmental Accounting Practice: Analyzing the Extent and Qualification of Environmental Disclosures of Turkish Companies Located in BIST-XKURY Index

Authors: Raif Parlakkaya, Mustafa Nihat Demirci, Mehmet Nuri Salur

Abstract:

Environmental pollution has detrimental effects on our quality of life, and its scope has reached such an extent that measures are being taken at both the national and international levels to reduce, prevent, and mitigate its impact on the social, economic, and political spheres. Awareness of environmental problems has therefore been increasing among stakeholders and, accordingly, among companies, and corporate reporting is expanding to cover environmental performance. The primary purpose of publishing an environmental report is to provide specific audiences with useful, meaningful information. This paper analyzes the extent and qualification of the environmental disclosures of Turkish publicly quoted firms and how they vary from one sector to another. The data for the study were collected from the annual activity reports of companies listed on the corporate governance index (BIST-XKURY) of the Istanbul Stock Exchange. Content analysis was the research methodology used to measure the extent of environmental disclosure. Accordingly, the 2015 annual activity reports of companies operating in selected fields were acquired from the Capital Market Board, the Public Disclosure Platform website, and the companies' own websites. These reports were categorized into five main aspects: environmental policies, environmental management systems, environmental protection and conservation activities, environmental awareness, and information on environmental lawsuits. Each aspect was then divided into several variables covering what each firm is expected to disclose. In this context, the nature and scope of the information disclosed on each item were assessed on a five-level scale (N.I.: No Information; G.E.: General Explanations; Q.E.: Qualitative Detailed Explanations; N.E.: Quantitative (Numerical) Detailed Explanations; Q.&N.E.: Both Qualitative and Quantitative Explanations).
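The content-analysis coding described above, assigning each firm/aspect observation one of the five disclosure codes and tallying the results, can be sketched as below. All firm names and code assignments are hypothetical illustrations.

```python
# Illustrative sketch of the content-analysis coding described above:
# each (firm, aspect) observation receives one of the five disclosure
# codes, and tallies per aspect summarize how detailed disclosure is.
from collections import Counter

CODES = ("N.I.", "G.E.", "Q.E.", "N.E.", "Q.&N.E.")

# Hypothetical coded observations from annual activity reports
observations = {
    ("FirmA", "environmental policies"): "Q.E.",
    ("FirmB", "environmental policies"): "N.I.",
    ("FirmA", "environmental lawsuits"): "G.E.",
}

assert all(code in CODES for code in observations.values())
by_aspect = Counter((aspect, code) for (_, aspect), code in observations.items())
print(by_aspect[("environmental policies", "N.I.")])  # 1
```

Tallies of this form are what allow disclosure quality to be compared across sectors, as the study sets out to do.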

Keywords: environmental accounting, disclosure, corporate governance, content analysis

Procedia PDF Downloads 235