Search results for: java code generation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4816

3226 Effects of Hydroxysafflor Yellow a (HSYA) on UVA-Induced Damage in HaCaT Keratinocytes

Authors: Szu-Chieh Yu, Pei-Chin Chiand, Chih-Yi Lin, Yi-Wen Chien

Abstract:

UV radiation from sunlight causes a range of acute and chronic skin damage that can result in inflammation, immune changes, physical changes, and DNA damage, which facilitates skin aging and the development of skin carcinogenesis. Reactive oxygen species (ROS) are generated by excessive solar UV radiation, resulting in oxidative damage to cellular components: proteins, lipids, and nucleic acids. Thus, antioxidation plays an important role in protecting skin against ROS-induced injury. Safflower (Carthamus tinctorius L.) is an important Chinese medicine containing abundant flavones, of which hydroxysafflor yellow A (HSYA) is the main active ingredient. HSYA is a quinochalcone with a unique arrangement of hydroxyl groups that provides its antioxidant effect. The aim of this study was to investigate the protective role of HSYA in human keratinocytes (HaCaT) against UVA-induced oxidative damage and the possible mechanism. The HaCaT cells were UVA-irradiated, and the effects of HSYA on cell viability, reactive oxygen species generation, DNA fragmentation, and lipid peroxidation were measured. The mRNA expression of matrix metalloproteinase-1 (MMP-1) and cyclooxygenase-2 (COX-2) was determined by RT-PCR. In this study, UVA exposure led to a decrease in cell viability and an increase in reactive oxygen species generation in HaCaT cells. HSYA effectively increased the viability of HaCaT cells after UVA exposure and protected them from UVA-induced oxidative stress. Moreover, HSYA reduced inflammation by inhibiting the mRNA expression of MMP-1 and COX-2. Our results suggest that HSYA can act as a free radical scavenger when keratinocytes are photodamaged. HSYA could be a useful natural medicine for the protection of epidermal cells from UVA-induced damage and could be developed into skin care products.

Keywords: HaCaT keratinocytes, hydroxysafflor yellow A (HSYA), MMP-1, oxidative stress

Procedia PDF Downloads 366
3225 Quality and Coverage Assessment in Software Integration Based On Mutation Testing

Authors: Iyad Alazzam, Kenneth Magel, Izzat Alsmadi

Abstract:

The different activities and approaches in software testing try to find the largest possible number of errors or failures with the least possible effort. Mutation is a testing approach used to discover possible errors in tested applications. This is accomplished by changing one aspect of the software from its original form and writing test cases to detect such a change, or mutation. In this paper, we present a mutation approach for testing the integration aspects of software components. Several mutation operations related to component integration are described and evaluated. A case study of several open-source code projects is presented, and the proposed mutation operators are applied and evaluated. The results offer insights that can help testing activities detect errors and improve coverage.
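
As a concrete illustration of the mutation idea, the hedged Java sketch below applies a classic arithmetic-operator mutation to a toy method; the class, method, and operator are illustrative assumptions of this sketch, not the integration-level operators proposed in the paper.

```java
// Illustration of mutation testing: one operator ('*' -> '+') is
// applied to a copy of the original method. Run with: java -ea Price
public class Price {

    // Original behavior: total cost of a quantity of items.
    static int total(int unitPrice, int quantity) {
        return unitPrice * quantity;
    }

    // Mutant: multiplication replaced by addition. An adequate test
    // suite must "kill" this mutant by observing a different result.
    static int totalMutant(int unitPrice, int quantity) {
        return unitPrice + quantity;
    }

    public static void main(String[] args) {
        // total(3, 4) == 12, but totalMutant(3, 4) == 7, so a test
        // asserting the expected value 12 detects the mutation.
        assert total(3, 4) == 12;
        assert totalMutant(3, 4) != total(3, 4);
        System.out.println("Mutant killed: the test distinguishes the two.");
    }
}
```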

Keywords: software testing, integration testing, mutation, coverage, software design

Procedia PDF Downloads 405
3224 Analyzing the Water Quality of Settling Ponds after Revegetation in an Ex-Mining Area

Authors: Iis Diatin, Yani Hadiroseyani, Muhammad Mujahid, Ahmad Teduh, Juang R. Matangaran

Abstract:

A silica quarry managed by a mining company is located in the Sukabumi District of West Java Province, Indonesia, with an area of approximately 70 hectares. The company stopped mining activities in 2013 and has tried to restore the post-mining ecosystem through rehabilitation activities such as reclamation and revegetation of the ex-mining area. Three years after planting, the trees had grown well; in addition to the planted tree species, cover crops had spread over the soil surface. There are two settling ponds located in the middle of the ex-mining area. These settling ponds were built to mitigate the effects of acid mine drainage. Acid mine drainage (AMD), or acidic water, is created when sulphide minerals are exposed to air and water and, through a natural chemical reaction, produce sulphuric acid. AMD is the main pollutant at open-pit mines. The objective of this research was to analyze the effect of revegetation on water quality changes in the settling ponds. The physical and chemical water quality parameters were measured and analysed on site and in the laboratory. Physical parameters such as temperature, turbidity, and total organic matter were analysed, along with heavy metals and other chemical parameters such as dissolved oxygen, alkalinity, pH, total ammonia nitrogen, nitrate, and nitrite. The results showed that the acidity of the first settling pond was higher than that of the second settling pond, and the water of both settling ponds contained heavy metals. Turbidity and total organic matter were the water quality parameters that improved after revegetation.

Keywords: acid mine drainage, ex-mining area, revegetation, settling pond, water quality

Procedia PDF Downloads 284
3223 The Application of a Neural Network in the Reworking of Accu-Chek to Wrist Bands to Monitor Blood Glucose in the Human Body

Authors: J. K. Adedeji, O. H. Olowomofe, C. O. Alo, S. T. Ijatuyi

Abstract:

The issue of high blood sugar levels, the effects of which may end in diabetes mellitus, is becoming a rampant cardiovascular disorder in our community. In recent times, a lack of awareness among most people has made this disease a silent killer. The situation calls for urgency, hence the need to design a device that serves as a monitoring tool, such as a wrist watch, to alert those living with high blood glucose to danger ahead of time, and to introduce a mechanism for checks and balances. The neural network architecture assumed an 8-15-10 configuration, with eight neurons at the input stage including a bias, 15 neurons in the hidden layer at the processing stage, and 10 neurons at the output stage indicating likely symptom cases. The inputs are formed using the exclusive OR (XOR), with the expectation of obtaining an XOR output as the threshold value for diabetic symptom cases. The neural algorithm is coded in Java and run for 1000 epochs to reduce the error to the barest minimum. The internal circuitry of the device comprises compatible hardware that matches the nature of each input neuron. Light-emitting diodes (LED) of red, green, and yellow are used as the output of the neural network to show pattern recognition for severe cases, pre-hypertensive cases, and normal cases without traces of diabetes mellitus. The research concluded that a neural network is a more efficient Accu-Chek design tool for the proper monitoring of high glucose levels than conventional methods of carrying out blood tests.
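
The following is a minimal sketch of how such an 8-15-10 feedforward network might be coded in Java, as the abstract describes. The layer sizes and the 1000-epoch training run come from the abstract; the sigmoid activation, learning rate, weight initialization, training data, and all names are illustrative assumptions of this sketch, not the authors' code.

```java
import java.util.Random;

// Hedged sketch of an 8-15-10 network trained by backpropagation.
public class GlucoseNet {
    static final int IN = 8, HID = 15, OUT = 10;
    static final double LR = 0.1;                    // assumed learning rate
    final double[][] w1 = new double[HID][IN + 1];   // +1: bias weight
    final double[][] w2 = new double[OUT][HID + 1];
    final double[] hidden = new double[HID];         // kept for backprop

    GlucoseNet(long seed) {
        Random r = new Random(seed);
        for (double[] row : w1)
            for (int j = 0; j < row.length; j++) row[j] = 0.1 * r.nextGaussian();
        for (double[] row : w2)
            for (int j = 0; j < row.length; j++) row[j] = 0.1 * r.nextGaussian();
    }

    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // Forward pass; hidden activations are stored for the backward pass.
    double[] forward(double[] x) {
        for (int h = 0; h < HID; h++) {
            double s = w1[h][IN];                    // bias term
            for (int i = 0; i < IN; i++) s += w1[h][i] * x[i];
            hidden[h] = sigmoid(s);
        }
        double[] out = new double[OUT];
        for (int o = 0; o < OUT; o++) {
            double s = w2[o][HID];
            for (int h = 0; h < HID; h++) s += w2[o][h] * hidden[h];
            out[o] = sigmoid(s);
        }
        return out;
    }

    // One stochastic gradient-descent step for a single training pair.
    void train(double[] x, double[] target) {
        double[] out = forward(x);
        double[] dOut = new double[OUT];
        for (int o = 0; o < OUT; o++)
            dOut[o] = (out[o] - target[o]) * out[o] * (1 - out[o]);
        double[] dHid = new double[HID];
        for (int h = 0; h < HID; h++) {
            double e = 0;
            for (int o = 0; o < OUT; o++) e += dOut[o] * w2[o][h];
            dHid[h] = e * hidden[h] * (1 - hidden[h]);
        }
        for (int o = 0; o < OUT; o++) {
            for (int h = 0; h < HID; h++) w2[o][h] -= LR * dOut[o] * hidden[h];
            w2[o][HID] -= LR * dOut[o];
        }
        for (int h = 0; h < HID; h++) {
            for (int i = 0; i < IN; i++) w1[h][i] -= LR * dHid[h] * x[i];
            w1[h][IN] -= LR * dHid[h];
        }
    }

    public static void main(String[] args) {
        GlucoseNet net = new GlucoseNet(42);
        double[] x = {1, 0, 1, 0, 1, 0, 1, 0};  // hypothetical input pattern
        double[] t = new double[OUT];
        t[0] = 1.0;                             // hypothetical target class
        for (int epoch = 0; epoch < 1000; epoch++) net.train(x, t);
    }
}
```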

Keywords: Accu-Chek, diabetes, neural network, pattern recognition

Procedia PDF Downloads 131
3222 Materials for Sustainability

Authors: Qiuying Li

Abstract:

It is a shared opinion that sustainable development requires a system discontinuity, meaning that radical changes in the way we produce and consume are needed. Within this framework, there is an emerging understanding that an important contribution to this change can be directly linked to decisions taken in the design phase of products, services, and systems. Design schools therefore have to provide design students with broad knowledge and effective Design for Sustainability tools, in order to enable a new generation of designers to play an active role in reorienting our consumption and production patterns.

Keywords: design for sustainability, services, systems, materials, ecomaterials

Procedia PDF Downloads 417
3221 An Encapsulation of a Navigable Tree Position: Theory, Specification, and Verification

Authors: Nicodemus M. J. Mbwambo, Yu-Shan Sun, Murali Sitaraman, Joan Krone

Abstract:

This paper presents a generic data abstraction that captures a navigable tree position. The mathematical modeling of the abstraction encapsulates the current tree position, which can be used to navigate and modify the tree. The encapsulation of the tree position in the data abstraction specification avoids the use of explicit references and aliasing, thereby simplifying verification of (imperative) client code that uses the data abstraction. To ease the tasks of such specification and verification, a general tree theory, rich with mathematical notations and results, has been developed. The paper contains an example to illustrate automated verification ramifications. With sufficient tree theory development, automated proving seems plausible even in the absence of a special-purpose tree solver.
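
As a loose illustration of what such an encapsulated position offers a client, the hypothetical Java interface below mirrors the operations the abstract describes (navigating and modifying the tree through the current position). All names here are assumptions of this sketch; the paper's abstraction is specified mathematically, precisely to avoid explicit references and aliasing.

```java
// Hypothetical client-side view of a navigable tree position. The
// position is encapsulated: clients navigate and modify through these
// operations without ever holding an explicit node reference.
interface TreePosition<T> {
    T labelAtPosition();            // label stored at the current position
    int childCount();               // number of children below it
    void advanceToChild(int i);     // move the position down to child i
    void retreatToParent();         // move the position back up
    void replaceLabel(T newLabel);  // modify the tree at the position
    boolean atRoot();               // true when the position is the root
}
```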

Keywords: automation, data abstraction, maps, specification, tree, verification

Procedia PDF Downloads 147
3220 The Nuclear Energy Museum in Brazil: Creative Solutions to Transform Science Education into Meaningful Learning

Authors: Denise Levy, Helen J. Khoury

Abstract:

Nuclear technology is a controversial issue among a great share of the Brazilian population. Misinformation and common misconceptions confuse the public's perception, and the scientific community is expected to offer a wider perspective on the benefits and risks resulting from ionizing radiation in everyday life. Attentive to the need for new approaches between science and society, the Nuclear Energy Museum, in northeast Brazil, is an initiative created to communicate the growing impact of the beneficial applications of nuclear technology in medicine, industry, agriculture, and electric power generation. Providing accessible scientific information, the museum offers a rich learning environment, making use of different educational strategies, such as films, interactive panels, and multimedia learning tools, which not only increase the enjoyment of visitors but also maximize their learning potential. Developed according to modern active-learning instructional strategies, the multimedia materials are designed to present the increasingly important role of nuclear science in modern life, transforming science education into a meaningful learning experience. In 2016, nine different interactive computer-based activities were developed, presenting curiosities about ionizing radiation at different landmarks around the world, such as radiocarbon dating in Egypt, nuclear power generation in France, and X-radiography of famous paintings in Italy. Feedback surveys have reported a high level of visitor satisfaction, confirming the high quality of the experience of learning nuclear science at the museum. The Nuclear Energy Museum is the first and, up to the present time, the only permanent museum in Brazil devoted entirely to nuclear science.

Keywords: nuclear technology, multimedia learning tools, science museum, society and education

Procedia PDF Downloads 302
3219 Automatic Lexicon Generation for Domain Specific Dataset for Mining Public Opinion on China Pakistan Economic Corridor

Authors: Tayyaba Azim, Bibi Amina

Abstract:

The increase in the popularity of opinion mining, together with the rapid growth in the availability of social networks, has attracted many opportunities for research in the various domains of Sentiment Analysis and Natural Language Processing (NLP) using Artificial Intelligence approaches. The latest trend allows the public to actively use the internet for analyzing an individual's opinion and exploring the effectiveness of published facts. The main theme of this research is to assess public opinion on one of the most crucial and extensively discussed development projects, the China Pakistan Economic Corridor (CPEC), considered a game changer due to its promise of bringing economic prosperity to the region. So far, to the best of our knowledge, the theme of CPEC has not been analyzed for sentiment determination through a machine learning (ML) approach. This research aims to demonstrate the use of ML approaches to automatically analyze public sentiment in Twitter tweets, particularly about CPEC. A Support Vector Machine (SVM) is used for the classification task, classifying tweets into positive, negative, and neutral classes. Word2vec and TF-IDF features are used with the SVM model, and a comparison of the model trained on manually labelled tweets and on an automatically generated lexicon is performed. The contributions of this work are: the development of a sentiment analysis system for public tweets on the subject of CPEC, the automatic generation of a lexicon of public tweets on CPEC, and the identification of different themes among tweets with sentiments assigned to each theme. It is worth noting that the applications of web mining that empower e-democracy by improving political transparency and public participation in decision making via social media have not yet been explored and practised in Pakistan in the context of CPEC.
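
For reference, the TF-IDF weighting named above is conventionally computed as follows (a standard definition, stated in our notation rather than the paper's):

\[ \mathrm{tfidf}(t, d) = \mathrm{tf}(t, d) \cdot \log \frac{N}{\mathrm{df}(t)} \]

where tf(t, d) is the frequency of term t in tweet d, df(t) is the number of tweets containing t, and N is the total number of tweets.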

Keywords: machine learning, natural language processing, sentiment analysis, support vector machine, Word2vec

Procedia PDF Downloads 131
3218 The Impact of Core Competencies in Business Management on the Existence and Progress of a Traditional Food Business with a Case Study: Gudeg Sagan Yogyakarta

Authors: Lutfi Aulia Rahman, Hari Rizki Ananda

Abstract:

Traditional food is the typical food of a certain region, with its own unique taste, typically consumed by the society of that area. One example is gudeg, a regional traditional specialty of Yogyakarta and Central Java made of young jackfruit cooked in coconut milk, eaten with rice and served with thick coconut milk (areh), chicken, eggs, tofu, and sambal goreng krecek. Lately, however, the image of traditional food has declined; today's society, especially young people, tends to prefer modern types of food such as fast food and other popular dishes. Moreover, traditional food is usually preferred only by local consumers and is in little demand among consumers from other areas with different tastes. Thus, traditional food producers are increasingly marginalized, and their consumer base is on the wane. This study evaluates the management practiced by producers of traditional food through a case study of Gudeg Sagan, located in the city of Yogyakarta, focusing on the ability of its management to create core competencies, which include the competences of cost, flexibility, quality, time, and value. Beyond surviving and continuing to exist within its external environment, Gudeg Sagan has been able to increase its number of consumers and reach a broader segment of teenagers and adults as well as consumers from other areas. Finally, this paper identifies the positive impact of creating core competencies on the existence and progress of a traditional food business, based on the case study of Gudeg Sagan.

Keywords: Gudeg Sagan, traditional food, core competencies, existence

Procedia PDF Downloads 235
3217 Leadership in the Era of AI: Growing Organizational Intelligence

Authors: Mark Salisbury

Abstract:

The arrival of artificially intelligent avatars and the automation they bring is worrying many of us, not only for our own livelihoods but for the jobs that may be lost to our kids. We worry about what our place will be as human beings in a new economy where much of the work will be conducted online in the metaverse, a network of 3D virtual worlds, working with intelligent machines. The Future of Leadership was written to address these fears and show what our place will be, the right place, in this new economy of AI avatars, automation, and 3D virtual worlds. To be successful in this new economy, our job will be to bring wisdom to the workplace and the marketplace, and we will use AI avatars and 3D virtual worlds to do it. However, this book is about more than AI and the avatars that we will work with in the metaverse. It is about building organizational intelligence (OI): the capability of an organization to comprehend and create knowledge relevant to its purpose, in other words, the intellectual capacity of the entire organization. Increasing organizational intelligence requires a new kind of knowledge worker, a wisdom worker, which in turn requires a new kind of leadership. This book begins your story of how to become a leader of wisdom workers and be successful in the emerging wisdom economy. After this presentation, conference participants will be able to do the following: recognize the characteristics of the new generation of wisdom workers and how they differ from their predecessors; recognize that new leadership methods and techniques are needed to lead this new generation of wisdom workers; apply personal and professional values (personal integrity, belief in something larger than yourself, and keeping the best interest of others in mind) to improve their work performance and lead others; exhibit an attitude of confidence, courage, and reciprocity in sharing knowledge to increase their productivity and influence others; leverage artificial intelligence to accelerate their ability to learn, augment their decision-making, and influence others; and utilize new technologies to communicate with human colleagues and intelligent machines to develop better solutions more quickly.

Keywords: metaverse, generative artificial intelligence, automation, leadership, organizational intelligence, wisdom worker

Procedia PDF Downloads 26
3216 Investigating the Influences of Long-Term, as Compared to Short-Term, Phonological Memory on the Word Recognition Abilities of Arabic Readers vs. Arabic Native Speakers: A Word-Recognition Study

Authors: Insiya Bhalloo

Abstract:

It is quite common in the Muslim faith for non-Arabic speakers to be able to convert written Arabic, especially Quranic Arabic, into a phonological code without significant semantic or syntactic knowledge. This is due to prior experience learning to read the Quran (a religious text written in Classical Arabic) from a very young age, such as through enrolment in Quranic Arabic classes. Compared to native speakers of Arabic, these Arabic readers do not have comprehensive morpho-syntactic knowledge of the Arabic language, nor can they understand or engage in Arabic conversation. The study investigates whether mere phonological experience (as indicated by the Arabic readers' experience with Arabic phonology and its sound system) is sufficient to cause phonological interference during word recognition of previously heard words, despite the participants' non-native status. Both native speakers of Arabic and non-native speakers of Arabic, i.e., individuals who learned to read the Quran from a young age, will be recruited. Each experimental session will include two phases: an exposure phase and a test phase. During the exposure phase, participants will be presented with Arabic words (n = 40) on a computer screen. Half of these words will be common words found in the Quran, while the other half will be words common in Modern Standard Arabic (MSA) but either non-existent in the Quran or present there at a significantly lower frequency. During the test phase, participants will be presented with both familiar words (n = 20; i.e., those presented during the exposure phase) and novel Arabic words (n = 20; i.e., words not presented during the exposure phase). Half of the presented words will be common Quranic Arabic words and the other half common MSA words that are not Quranic words. Moreover, half of the Quranic Arabic and MSA words will be nouns and half verbs, thereby eliminating word-processing effects of lexical category. Participants will then determine whether they saw each word during the exposure phase. This study investigates whether long-term phonological memory, such as childhood exposure to Quranic Arabic orthography, has a differential effect on the word-recognition capacities of native Arabic speakers and Arabic readers; we compare the effects of long-term phonological memory with those of short-term phonological exposure (as indicated by the presentation of familiar words from the exposure phase). The researchers' hypothesis is that, despite the lack of lexical knowledge, early experience with converting written Quranic Arabic text into a phonological code will help participants recall the familiar Quranic words that appeared during the exposure phase more accurately than those that were not presented. Moreover, it is anticipated that the non-native Arabic readers will also report more false alarms to the unfamiliar Quranic words, due to early childhood phonological exposure to Quranic Arabic script, thereby producing false phonological facilitatory effects.

Keywords: Modern Standard Arabic, phonological facilitation, phonological memory, Quranic Arabic, word recognition

Procedia PDF Downloads 339
3215 Solar Cell Packed and Insulator Fused Panels for Efficient Cooling in CubeSats and Satellites

Authors: Anand K. Vinu, Vaishnav Vimal, Sasi Gopalan

Abstract:

All spacecraft components have a range of allowable temperatures that must be maintained to meet survival and operational requirements during all mission phases. Due to heat absorption, transfer, and emission on one side, the satellite surface presents an asymmetric temperature distribution that causes a change in momentum, which can manifest in spinning and non-spinning satellites in different manners. This problem can cause orbital decay in satellites, which, if not corrected, will interfere with their primary objective. The thermal analysis of any satellite requires data from the power budget for each of the components used, because each component has different power requirements and is used at specific times in an orbit. Three different cases are run: the worst operational hot case, the worst non-operational cold case, and the operational cold case. Sunlight is a major source of heating for the satellite, and the way it affects the spacecraft depends on the distance from the Sun. Any part of a spacecraft or satellite facing the Sun will absorb heat (a net gain), and any part facing away will radiate heat (a net loss). We can use a state-of-the-art foldable hybrid insulator/radiator panel: when a panel is opened, that particular side acts as a radiator for dissipating heat. Here the insulator, in our case an aerogel, is sandwiched between solar cells and radiator fins (solar cells outside and radiator fins inside). Each insulated side panel can be opened and closed using actuators, depending on the telemetry data of the CubeSat. The opening and closing of the panels are governed by code designed for this particular application, in which the computer calculates where the Sun is relative to the satellite. According to the data obtained from the sensors, the computer decides which panel to open and by how many degrees. For example, if a panel opens 180 degrees, its solar cells will directly face the Sun, in turn increasing the current generated by that particular panel. If one of the corners of the CubeSat faces the Sun, or if more than one side has a considerable amount of sunlight incident on it, the code will analyze the optimum opening angle for each panel and adjust accordingly. Another means of cooling is passive cooling, the most suitable system for a CubeSat because of its limited power budget, low mass requirements, and less complex design; it also has advantages in terms of reliability and cost. One passive approach is to make the whole chassis act as a heat sink. For this, we can make the entire chassis out of heat pipes and connect the heat source to the chassis with a thermal strap that transfers the heat to it.

Keywords: passive cooling, CubeSat, efficiency, satellite, stationary satellite

Procedia PDF Downloads 81
3214 Price Control: A Comprehensive Step to Control Corruption in the Society

Authors: Muhammad Zia Ullah Baig, Atiq Uz Zama

Abstract:

The motivation of the project is to help the governing body, as well as the common citizen, to easily monitor daily consumer product rates and expenses, to control budgets with the help of a single SMS message or e-mail, and to manage the governing body through a task management system. The system will also be capable of finding irregularities committed by the department concerned in mitigating complaints generated by customers, and will provide solutions to overcome these problems. We are building a system that can easily manage the price control system of any country, and we would be proud to offer this system free of cost to the Indian government as well. The system can manage and control the price control departments of a government across the whole country. Price control departments operate in different cities under City District Governments, so the system runs in different cities with different SMS codes, and a decentralized database ensures the system's non-functional requirements (scalability, reliability, availability, security, and safety). A customer requests the government's official price list using his or her city's SMS code (the price lists of all cities are also available on the website and application), and the server returns the price list by SMS. If a product is not sold according to the price list, the customer generates a complaint through SMS or through the website or smartphone application; the complaint is registered in the complaint database and forwarded to the inspection department, which sends the customer a message once the complaint is entertained. The inspection department physically checks sellers who do not follow the price list. A major issue for the system is corruption: an inspection officer may take a bribe and resolve the complaint falsely, in which case the customer will stop using the system. The challenge is therefore to distinguish fake from genuine complaints and to fight corruption within the department. To counter corruption, our strategy is to rank complaints: if the same type of complaint is generated repeatedly, it receives a high rank and the higher authority is notified about it. The higher authority then reviews the complaint and its history, including which officer resolved similar complaints in the past and what action was taken; these data support the decision-making process, and if a complaint was resolved because an officer took a bribe, the higher authority can take action against that officer. When the price of any good is decided, market and farmer representatives are present, and the price is set with the mutual understanding of both parties; the system facilitates this decision-making process by showing the price history of any good, the inflation rate, the available supply, the demand, and the gap between supply and demand.

Keywords: price control, goods, government, inspection, department, customer, employees

Procedia PDF Downloads 399
3213 Generation of Knowledge with Self-Learning Methods for Ophthalmic Data

Authors: Klaus Peter Scherer, Daniel Knöll, Constantin Rieder

Abstract:

Problem and Purpose: Intelligent systems are available and helpful for supporting the human decision process, especially when complex surgical eye interventions are necessary and must be performed. Normally, such a decision support system consists of a knowledge-based module, which is responsible for the actual assistance, provided by explanation and logical reasoning processes. The interview-based acquisition and generation of this complex knowledge is crucial, because there are many correlations between the complex parameters. In this project, therefore, (semi-)automated self-learning methods are researched and developed to enhance the quality of such a decision support system. Methods: For ophthalmic data sets of real patients in a hospital, advanced data mining procedures are very helpful. In particular, subgroup analysis methods are developed, extended, and used to analyze and discover the correlations and conditional dependencies within the structured patient data. After finding causal dependencies, a ranking must be performed for the generation of rule-based representations. For this, anonymized patient data are transformed into a special machine-readable format. The imported data are used as input to algorithms based on conditional probability methods to calculate the parameter distributions with respect to a given goal parameter. Results: In the field of knowledge discovery, advanced methods and applications were used to produce operation- and patient-related correlations. New knowledge was generated by finding causal relations between the operational equipment, the medical instances, and the patient-specific history through a dependency ranking process. After transformation into association rules, logically based representations were available for the clinical experts to evaluate the new knowledge. The structured data sets take account of about 80 parameters as characteristic features per patient. For patient groups of different sizes (100, 300, 500), both single-target and multi-target values were set for the subgroup analysis, so the newly generated hypotheses could be interpreted with regard to their dependence on patient number. Conclusions: The aim and advantage of such a semi-automatic self-learning process are the extension of the knowledge base by finding new parameter correlations. The discovered knowledge is transformed into association rules, which serve as the rule-based representation of the knowledge in the knowledge base. Moreover, more than one goal parameter of interest can be considered by the semi-automated learning process. With ranking procedures, the strongest premises and conjunctively associated conditions can be found for concluding the goal parameter of interest. In this way, knowledge hidden in structured tables or lists can be extracted as a rule-based representation. This is real assistance for communication with the clinical experts.
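
As background for the ranking step, an association rule A ⇒ B is conventionally scored by support and confidence, where confidence is exactly the conditional probability the abstract refers to (standard definitions, in our notation, not the paper's):

\[ \mathrm{supp}(A \Rightarrow B) = P(A \cap B), \qquad \mathrm{conf}(A \Rightarrow B) = \frac{P(A \cap B)}{P(A)} = P(B \mid A) \]

Ranking the discovered rules by such measures identifies the strongest premises for a given goal parameter.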

Keywords: expert system, knowledge-based support, ophthalmic decision support, self-learning methods

Procedia PDF Downloads 239
3212 Assessment of Multi-Domain Energy Systems Modelling Methods

Authors: M. Stewart, Ameer Al-Khaykan, J. M. Counsell

Abstract:

Emissions are a consequence of electricity generation. As a major option for low-carbon generation, local energy systems (LES) featuring Combined Heat and Power with solar PV (CHPV) have significant potential to increase energy performance, increase resilience, and offer greater control of local energy prices, while complementing the UK's emissions standards and targets. Recent advances in dynamic modelling and simulation of buildings and clusters of buildings using the IDEAS framework have successfully validated a novel multi-vector (simultaneous control of both heat and electricity) approach to integrating the wide range of primary and secondary plant typical of local energy system designs, including CHP, solar PV, gas boilers, absorption chillers, thermal energy storage, and the associated electrical and hot water networks, all operating under a single unified control strategy. Results from this work indicate, through simulation, that integrated control of thermal storage can play a pivotal role in optimizing system performance well beyond present expectations. Environmental impact analysis and reporting for all energy systems, including CHPV LES, presently employ a static annual average carbon emissions intensity for grid-supplied electricity. This paper focuses on establishing and validating CHPV environmental performance against conventional emissions values and assessment benchmarks, analyzing emissions performance with and without an active thermal store in a notional group of non-domestic buildings. The results of this analysis are presented and discussed in the context of performance validation and of quantifying the reduced environmental impact of CHPV systems with active energy storage in comparison with conventional LES designs.

Keywords: CHPV, thermal storage, control, dynamic simulation

Procedia PDF Downloads 222
3211 Multi-Label Approach to Facilitate Test Automation Based on Historical Data

Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally

Abstract:

The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which is, in most cases, a limitation given the time constraints provided for quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning to the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For the test case generation, this approach exploits historical data from test automation projects. The identified patterns are the foundation for predicting the implementation of unknown test case specifications. With this support, a test engineer only has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge in the form of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems. The most prominent EC is 'Subset Accuracy'. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still 60%, which is better than current state-of-the-art results. The prediction quality is expected to increase for larger systems with corresponding historical data. Consequently, this technique reduces the time needed to establish test automation and is independent of the application domain and project. As work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
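
For reference, 'Subset Accuracy' is conventionally defined as the fraction of instances whose complete predicted label set matches the true label set exactly (standard definition, in our notation):

\[ \mathrm{SubsetAccuracy} = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}\left[\hat{Y}_i = Y_i\right] \]

where Y_i is the true label set of test case i, \hat{Y}_i is the predicted set, and N is the number of test cases. It is the strictest of the common multi-label criteria, since partial matches earn no credit.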

Keywords: machine learning, multi-class, multi-label, supervised learning, test automation

Procedia PDF Downloads 114
3210 Teachers of English for Accounting Purposes: Self-Identity and Self-Reflectivity

Authors: Nanis Setyorini

Abstract:

This interpretive study aims to explore English teachers' self-identity and self-reflection regarding the teaching of English for accounting purposes in Indonesian accounting schools. Pierre Bourdieu's concepts of capital, habitus, and field are applied to capture and analyze the teachers' outright feelings, dilemmas, and efforts: how they see their educational background and adjust their understanding of teaching English for specific purposes, how they deliver unfamiliar materials about accountancy, how they build confidence in teaching accountancy experts, and how they develop their professional commitment as English teachers for accounting purposes. Semi-structured interviews and focus group discussions were conducted with 16 English teachers in the accounting schools of five state and private universities in East Java, Indonesia. The appropriateness of English teachers for accounting students remains a debatable topic. Previous literature assumes that the best English teachers for accounting students are those who can demonstrate good-quality use of English as well as sound accounting knowledge and experience; however, such teachers are rare. Most English teachers in Indonesian accounting schools graduate from English education or English literature programs that provide very limited pedagogic theory and practice of English for specific purposes (ESP). As a result, ESP teachers often have misconceptions and lose face when they deliver subject content to accounting students, some of whom have already been employed as professional accountants. The teachers also face a dilemma in locating themselves as insiders in English knowledge but outsiders in the accounting field. These problems generally arise in the early stages of teaching, due to a lack of ESP knowledge, insufficient teaching preparation, the absence of in-house ESP training on English for accountancy, and unconducive relationships with accounting educators as well as other ESP teachers. Self-learning with various resources and strategies is reported as their means of developing teaching competence so that they can teach English to accounting students more effectively.

Keywords: ESP teacher, English for accounting, self-identity, self-reflectivity

Procedia PDF Downloads 383
3209 Enhancing the Stability of the Vietnamese Power System - From Theory to Practice

Authors: Edwin Lerch, Dirk Audring, Cuong Nguyen Mau, Duc Ninh Nguyen, The Cuong Nguyen, The Van Nguyen

Abstract:

The National Load Dispatch Centre of Electricity Vietnam (EVNNLDC) and Siemens PTI investigated the stability of the 500/220 kV electrical transmission system of Vietnam. The general scope of the investigations is to improve the stability of the Vietnamese power system and to give the EVNNLDC staff the capability to decide how to deal with the stability challenges expected in the future, which are related to the very fast growth of the system. Rapid system growth leads to a very high demand for power transmission from North to South. This was investigated through stability studies of the power system interconnected with neighboring countries, performed in close cooperation and coordination with the EVNNLDC project team. This important project includes data collection, measurement, model validation, and investigation of the relevant stability phenomena, as well as training of the EVNNLDC staff. Generally, the power system of Vietnam has good voltage and dynamic stability. The main problems are related to the longitudinal structure of the system, with more power generation, especially hydro power, in the North and Center, and load centers in the South of Vietnam. Faults on the power transmission system from North to South risk the stability of the entire system, due to the high power transfer from North to South and the high loading of the 500 kV backbone. An additional problem is the weak connection to the Cambodian power system, which leads to an inter-area oscillation mode. Therefore, strengthening the power transfer capability through new 500 kV lines or an HVDC connection and balancing the power generation across the country will solve many challenges. Other countermeasures, such as wide-area load shedding, PSS tuning, and correct SVC placement, will also improve and stabilize the power system. The primary frequency reserve should be increased.

Keywords: dynamic power transmission system studies, blackout prevention, power system interconnection, stability

Procedia PDF Downloads 335
3208 Numerical Modeling of the Cavitating Flow in Injection Nozzle Holes

Authors: Ridha Zgolli, Hatem Kanfoudi

Abstract:

Cavitating flows inside a diesel injection nozzle hole were simulated using a mixture model. A 2D numerical model is proposed in this paper to simulate steady cavitating flows. The Reynolds-averaged Navier-Stokes equations are solved for the liquid and vapor mixture, which is considered a single fluid with variable density, expressed as a function of the vapor volume fraction. The closure of this variable is provided by a transport equation with a source term (TEM). The processes of evaporation and condensation are governed by changes in pressure within the flow. The source term is implemented in the CFD code ANSYS CFX. The influence of numerical and physical parameters is presented in detail. The numerical simulations are in good agreement with the experimental data for steady flow.
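
In a mixture model of this kind, the variable density of the single-fluid mixture is typically expressed through the vapor volume fraction (a standard formulation consistent with, but not quoted from, the paper):

\[ \rho_m = \alpha \rho_v + (1 - \alpha) \rho_l \]

where α is the vapor volume fraction and ρ_v and ρ_l are the vapor and liquid densities; the transport equation for α, with its evaporation/condensation source term, closes the system.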

Keywords: cavitation, injection nozzle, numerical simulation, k–ω

Procedia PDF Downloads 382
3207 STML: Service Type-Checking Markup Language for Services of Web Components

Authors: Saqib Rasool, Adnan N. Mian

Abstract:

Web components are introduced in the latest HTML5 standards for writing modular web interfaces, ensuring maintainability through the isolated scope of each web component. Reusability can also be achieved by sharing plug-and-play web components that can be used as off-the-shelf components by other developers. A web component encapsulates all the required HTML, CSS, and JavaScript code as a standalone package, which must be imported to integrate the web component into an existing web interface. This is then followed by the integration of the web component with web services for dynamically populating its content. Since web components are reusable off-the-shelf components, they must be equipped with some mechanism for ensuring their proper integration with web services. The consistency of a service's behavior can be verified through type checking, a popular way of improving code quality in many programming languages. However, HTML does not provide type checking, as it is a markup language and not a programming language. The contribution of this work is a new extension of HTML called Service Type-checking Markup Language (STML), which adds type-checking support to HTML for JSON-based REST services. STML can be used to define the expected data types of responses from JSON-based REST services, which are then used for populating the content of a web component's HTML elements. Although JSON has five data types, viz. string, number, boolean, object, and array, STML supports only string, number, and boolean, because both object and array values are treated as strings when populated in HTML elements. To define the data type of any HTML element, the developer just needs to add the custom STML attributes st-string, st-number, or st-boolean for string, number, and boolean, respectively. These STML annotations are written by the developer of a web component, and they enable other developers to use automated type checking to ensure the proper integration of their REST services with the same web component. Two utilities have been written for developers using STML-based web components. The first provides automated type checking during the development phase: it uses the browser console to show an error description if an integrated web service does not return a response of the expected data type. The second is a Gulp-based command-line utility for removing the STML attributes before going into production, ensuring the delivery of STML-free web pages in the production environment. Both utilities have been tested for type checking of REST services through STML-based web components, and the results confirm the feasibility of evaluating service behavior through HTML alone. Currently, STML is designed for automated type checking of integrated REST services, but it can be extended into a complete HTML-only service testing suite, which would transform STML from a Service Type-checking Markup Language into a Service Testing Markup Language.
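
As a minimal sketch of what STML-annotated markup might look like, based on the attribute names given above (the element names, structure, and bound fields are illustrative assumptions, not examples from the paper):

```html
<!-- Hypothetical web component template: each element declares the
     JSON type it expects from the REST service that populates it. -->
<user-card>
  <span st-string>name</span>       <!-- expects a JSON string  -->
  <span st-number>age</span>        <!-- expects a JSON number  -->
  <span st-boolean>isActive</span>  <!-- expects a JSON boolean -->
</user-card>
```

During development, the type-checking utility would then log a console error if, say, the service returned a string for the field bound to st-number.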

Keywords: REST, STML, type checking, web component

Procedia PDF Downloads 236
3206 Cavitating Flow through a Venturi Using Computational Fluid Dynamics

Authors: Imane Benghalia, Mohammed Zamoum, Rachid Boucetta

Abstract:

Hydrodynamic cavitation is a complex physical phenomenon that appears in hydraulic systems (pumps, turbines, valves, Venturi tubes, etc.) when the fluid pressure decreases below the saturated vapor pressure. The work carried out in this study aimed at a better understanding of cavitating flow phenomena. To this end, we numerically studied a cavitating bubbly flow through a Venturi nozzle. The cavitation model is selected and solved using a commercial computational fluid dynamics (CFD) code. The results show the effect of the Venturi inlet pressure (10, 7, 5, and 2 bar) on the pressure, the flow velocity, and the vapor fraction. We found that the inlet pressure of the Venturi strongly affects the evolution of the pressure, velocity, and vapor fraction in the cavitating flow.

Keywords: cavitating flow, CFD, phase change, venturi

Procedia PDF Downloads 70
3205 Computational Investigation of Gas-Solid Flow in High Pressure High Temperature Filter

Authors: M. H. Alhajeri, Hamad M. Alhajeri, A. H. Alenezi

Abstract:

This paper reports a Computational Fluid Dynamics (CFD) investigation of high-temperature, high-pressure filtration in a ceramic candle filter, with flow parallel to the filter. Different face (filtration) velocities are examined using the CFD code FLUENT. Particles of different sizes are tracked through the domain to find the height at which they impinge on the filter surface. Furthermore, the particle distribution around the filter (the filter cake) is studied in order to design efficient cleaning mechanisms. The effect of gravity on the particles is considered for various inlet velocities, along with the pressure drop. The CFD study finds that the influence of gravity should not be ignored if the particle sizes exceed 1 micron.

Keywords: fluid flow, CFD, filtration, HTHP

Procedia PDF Downloads 191
3204 Power Generation and Treatment Potential of Microbial Fuel Cells (MFC) for Landfill Leachate

Authors: Beenish Saba, Ann D. Christy

Abstract:

Modern-day municipal solid waste landfills are operated and controlled to protect the environment from contaminants during the biological stabilization and degradation of the solid waste. They are equipped with liners, caps, and gas and leachate collection systems. Landfill gas is passively or actively collected and can be used as biofuel after the necessary purification, but leachate treatment is the more difficult challenge. Leachate, if not recirculated in a bioreactor landfill system, is typically transported to a local wastewater treatment plant. These plants are designed for sewage treatment and often charge additional fees for higher-strength wastewaters such as leachate, if they accept them at all. Different biological, chemical, physical, and integrated techniques can be used to treat leachate. Treating leachate with simultaneous power production using microbial fuel cell (MFC) technology is a recent innovation whose application is still in its earliest phase. High chemical oxygen demand (COD), ionic strength, and salt concentration are some of the characteristics that make leachate an excellent substrate for power production in MFCs. The electrode materials, microbial communities, carbon co-substrates, and temperature conditions are some of the factors that can be optimized to achieve simultaneous power production and treatment. The advantage of the MFC is its dual functionality, but low power production and high costs are the hurdles to its commercialization and more widespread application. Studies so far suggest that landfill leachate MFCs can produce 1.8 mW/m2 with 79% COD removal, while amendment with food leachate or domestic wastewater can increase performance up to 18 W/m3 with 90% COD removal. The coulombic efficiency is reported to vary between 2% and 60%. However, efforts toward biofilm optimization, studies of efficient electron transport systems, and the use of genetic tools can increase the efficiency of the MFC and will determine its future potential for treating landfill leachate.
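
For context, the coulombic efficiency of an MFC fed with a complex substrate such as leachate is commonly calculated from the COD balance (a standard formulation from the MFC literature, not taken from the abstract):

\[ \mathrm{CE} = \frac{M_{O_2} \int_0^{t} I \, dt}{F \, b \, V_{an} \, \Delta \mathrm{COD}} \]

where M_{O_2} = 32 g/mol, I is the current, F is Faraday's constant, b = 4 is the number of electrons exchanged per mole of oxygen, V_{an} is the anode liquid volume, and ΔCOD is the COD removed over time t.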

Keywords: microbial fuel cell, landfill leachate, power generation, MFC

Procedia PDF Downloads 300
3203 A Feasibility and Implementation Model of Small-Scale Hydropower Development for Rural Electrification in South Africa: Design Chart Development

Authors: Gideon J. Bonthuys, Marco van Dijk, Jay N. Bhagwan

Abstract:

Small-scale hydropower used to play a very important role in the provision of energy to urban and rural areas of South Africa. The national electricity grid, however, expanded and offered cheap, coal-generated electricity, and a large number of hydropower systems were decommissioned. Unfortunately, large numbers of households and communities will not be connected to the national electricity grid for the foreseeable future, owing to the high cost of transmission and distribution systems to remote communities, the relatively low electricity demand within rural communities, and the allocation of current expenditure to upgrading and constructing new coal-fired power stations. This necessitates the development of feasible alternative power generation technologies. A feasibility and implementation model was developed to assist in designing and financially evaluating small-scale hydropower (SSHP) plants. Several sites were identified using the model, SSHP plants were designed for the selected sites, and the designs were priced using pricing models covering the civil, mechanical, and electrical aspects. Following feasibility studies of the designed and priced SSHP plants, a feasibility analysis was performed and a design chart was developed for future SSHP projects of this kind. The methodology for the feasibility analysis consisted of developing cost and income/saving formulae, net present value (NPV) formulae, a Capital Cost Comparison Ratio (CCCR), and levelised cost formulae for SSHP projects for the different types of plant installation. It included setting up a model for the development of an SSHP design chart, calculating the NPV, CCCR, and levelised cost for the different scenarios within the model by varying parameters within the developed formulae, setting up the design chart for the different scenarios, and analyzing and interpreting the results. The developed design charts for feasible SSHP show that turbine and distribution line costs are the major influences on the cost and feasibility of SSHP. High-head, short-transmission-line, and islanded mini-grid SSHP installations are the most feasible, and the levelised cost of SSHP is high for low-power-generation sites. The main conclusion from the study is that the levelised cost of SSHP for low energy generation is high compared to the levelised cost of grid-connected electricity supply; however, the remoteness of sites suited to SSHP for rural electrification, and the cost of infrastructure to connect remote rural communities to the local or national electricity grid, yield a low CCCR and render SSHP for rural electrification feasible on this basis.
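
The feasibility indicators named above follow their standard definitions; in the notation below (ours, not the paper's), for a project life of n years and a discount rate r:

\[ \mathrm{NPV} = \sum_{t=0}^{n} \frac{C_t}{(1+r)^t}, \qquad \mathrm{LCOE} = \frac{\sum_{t=0}^{n} (I_t + O_t)/(1+r)^t}{\sum_{t=0}^{n} E_t/(1+r)^t} \]

where C_t is the net cash flow (savings less costs) in year t, I_t and O_t are the investment and operating costs, and E_t is the energy generated. A site is financially feasible when NPV > 0, and the levelised cost of energy (LCOE) can be compared directly with the cost of grid-connected supply.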

Keywords: cost, feasibility, rural electrification, small-scale hydropower

Procedia PDF Downloads 207
3202 The Digital Divide: Examining the Use and Access to E-Health Based Technologies by Millennials and Older Adults

Authors: Delana Theiventhiran, Wally J. Bartfay

Abstract:

Background and Significance: As the Internet becomes the epitome of modern communications, there are many pragmatic reasons why the digital divide matters for accessing and using E-health based technologies. With the rise of technology usage globally, members of the older adult generation may not be as familiar and comfortable with technology as other generations, such as millennials, and are thus put at a disadvantage when examining and using E-health based platforms and technologies. Currently, little is known about how older adults and millennials access and use E-health based technologies. Methods: A systematic review of the literature was undertaken employing three databases: (i) PubMed, (ii) ERIC, and (iii) CINAHL, using the search term 'digital divide and generations' to identify potential articles. To extract the required data from the studies, a data abstraction tool was created to obtain the following information: (a) author, (b) year of publication, (c) sample size, (d) country of origin, (e) design/methods, and (f) major findings/outcomes. Inclusion criteria were publication dates between January 2009 and August 2018, the English language, target populations of older adults aged 65 and above and millennials, and peer-reviewed quantitative studies only. Major Findings: PubMed provided 505 potential articles, of which 23 met the inclusion criteria. ERIC provided 53 potential articles, of which none met the criteria following data extraction. CINAHL provided 14 potential articles, of which eight met the criteria following data extraction. Conclusion: Practically speaking, identifying how newer E-health based technologies can be integrated into society, and why there is a gap with digital technology, will help reduce the impact on generations and individuals who are less familiar with technology and Internet usage. The largest concern of all is how to prepare older adults for new and emerging E-health technologies; currently, there is a dearth of literature in this area because it is a newer field of research and little is known about it. The benefits and consequences of technology being integrated into daily living are being investigated as a newer area of research. Several of the examined articles (N=11) indicated that age is one of the larger factors contributing to the digital divide. Similarly, many of the examined articles (N=5) identified privacy concerns as one of the main deterrents of technology usage for individuals aged 65 and above. The older adult generation feels that privacy is a major concern, especially with regard to how data are collected, used, and possibly sold to third-party groups by various websites. Additionally, access to technology, the Internet, and infrastructure also plays a large part in the way individuals receive and use information. Lastly, a change in the way that healthcare is currently used, received, and distributed would also help ensure that no generation is left behind in a technologically advanced society.

Keywords: digital divide, e-health, millennials, older adults

Procedia PDF Downloads 154
3201 Reinforced Concrete Design and Construction Issues and Earthquake Failure-Damage Responses

Authors: Hasan Husnu Korkmaz, Serra Zerrin Korkmaz

Abstract:

Earthquakes are natural disasters that threaten several countries. Turkey is situated in a very active earthquake zone. During recent earthquakes, thousands of people died due to the failure of reinforced concrete structures. Although Turkey has a comprehensive earthquake code, design and construction mistakes have been repeated in older structures. The lack of a control mechanism during the construction process may be the most important reason for failure. Poor concrete quality and poor detailing of the steel reinforcement are the most important issues. In this paper, the reasons for the failure of reinforced concrete structures are summarized with relevant photos. The paper will benefit civil engineers as well as architects who are involved in the design and construction of structures in earthquake zones.

Keywords: earthquake, reinforced concrete structure, failure, material

Procedia PDF Downloads 341
3200 Capacity of Cold-Formed Steel Warping-Restrained Members Subjected to Combined Axial Compressive Load and Bending

Authors: Maryam Hasanali, Syed Mohammad Mojtabaei, Iman Hajirasouliha, G. Charles Clifton, James B. P. Lim

Abstract:

Cold-formed steel (CFS) elements are increasingly being used as main load-bearing components in the modern construction industry, including low- to mid-rise buildings. In typical multi-storey buildings, CFS structural members act as beam-column elements since they are exposed to combined axial compression and bending actions, both in moment-resisting frames and in stud wall systems. Current design specifications, including the American Iron and Steel Institute (AISI S100) and the Australian/New Zealand Standard (AS/NZS 4600), neglect the beneficial effects of warping-restrained boundary conditions in the design of beam-column elements. Furthermore, while a non-linear relationship governs the interaction of axial compression and bending, the combined effect of these actions is taken into account through a simplified linear expression combining the pure axial and flexural strengths. This paper aims to evaluate the reliability of the well-known Direct Strength Method (DSM) as well as design proposals found in the literature, to provide a better understanding of the efficiency of the code-prescribed linear interaction equation in the strength prediction of CFS beam-columns and of the effects of warping-restrained boundary conditions on their behavior. To this end, experimentally validated finite element (FE) models of CFS elements under compression and bending were developed in the ABAQUS software, accounting for both non-linear material properties and geometric imperfections. The validated models were then used for a comprehensive parametric study containing 270 FE models, covering a wide range of key design parameters, such as length (i.e., 0.5, 1.5, and 3 m), thickness (i.e., 1, 2, and 4 mm), and cross-sectional dimensions, under ten different load eccentricity levels. The results of this parametric study demonstrated that using the DSM led to strength predictions that were conservative by up to 55%, depending on the element's length and thickness. This can be attributed to the errors associated with (i) the absence of warping-restrained boundary condition effects, (ii) the equations for the calculation of buckling loads, and (iii) the linear interaction equation. While the influence of warping restraint is generally less than 6%, the code-suggested interaction equation led to an average error of 4% to 22%, depending on the element length. This paper highlights the need for more reliable design solutions for CFS beam-column elements for practical design purposes.
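
To make the code-prescribed check concrete, the following is a minimal sketch of the simplified linear interaction expression the abstract critiques. Variable names are illustrative and resistance/safety factors are omitted, so this is not the full AISI S100 or AS/NZS 4600 provision:

    def linear_interaction_utilization(P, M, P_n, M_n):
        """Simplified linear beam-column check: the member passes when
        the combined utilization P/P_n + M/M_n does not exceed 1.0."""
        return P / P_n + M / M_n

    # Illustrative demand and capacity values only (kN and kN.m),
    # not taken from the parametric study.
    u = linear_interaction_utilization(P=60.0, M=4.5, P_n=110.0, M_n=9.0)
    print(f"utilization = {u:.2f} -> {'OK' if u <= 1.0 else 'FAILS'}")

Because this expression is linear while the true axial-bending interaction is non-linear, it is one of the error sources the parametric study quantifies.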

Keywords: beam-columns, cold-formed steel, finite element model, interaction equation, warping-restrained boundary conditions

Procedia PDF Downloads 84
3199 Modeling of Foundation-Soil Interaction Problem by Using Reduced Soil Shear Modulus

Authors: Yesim Tumsek, Erkan Celebi

Abstract:

In order to simulate the infinite soil medium in the soil-foundation interaction problem, the essential geotechnical parameter on which the foundation stiffness depends is the soil shear modulus. This parameter directly affects the site and structural response of the considered model under earthquake ground motions. The strain dependence of the shear modulus under cyclic loads makes it difficult to estimate an accurate value when computing the foundation stiffness for a successful dynamic soil-structure interaction analysis. The aim of this study is to discuss in detail how to use an appropriate value of the soil shear modulus in computational analyses and to evaluate the effect of the variation of shear modulus with strain on the impedance functions used in the sub-structure method for idealizing the soil-foundation interaction problem. Herein, the impedance functions consist of springs and dashpots that represent the frequency-dependent stiffness and damping characteristics at the soil-foundation interface. Earthquake-induced vibration energy is dissipated into the soil by both radiation and hysteretic damping. Therefore, flexible-base system damping, as well as the variability in shear strength, should be considered in the calculation of impedance functions to achieve a more realistic dynamic soil-foundation interaction model. In this study, a MATLAB code was written for these purposes. The case study chosen for the analysis is a 4-story reinforced concrete building located in Istanbul, consisting of shear walls and moment-resisting frames, with a total height of 12 m from the basement level. The foundation system consists of two strip footings of different sizes on clayey soils of different plasticity (herein, PI = 13 and 16). In the first stage of this study, the shear modulus reduction factor was not considered in the MATLAB algorithm. The static stiffnesses, dynamic stiffness modifiers, and embedment correction factors of two rigid rectangular foundations, measuring 2 m wide by 17 m long below the moment frames and 7 m wide by 17 m long below the shear walls, were obtained for the translational and rocking vibrational modes. Afterwards, their dynamic impedance functions were calculated for the reduced shear modulus through the developed MATLAB code. The embedment effect of the foundation is also considered in these analyses. The analysis results show that the strain induced in the soil depends on the extent of the earthquake demand. It is clearly observed that as the strain range increases, the dynamic stiffness of the foundation medium decreases dramatically. The overall response of the structure can be affected considerably by the degradation in soil stiffness, even for a moderate earthquake. Therefore, it is very important to arrive at the corrected dynamic shear modulus for earthquake analysis including soil-structure interaction.
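
The paper's MATLAB implementation is not reproduced here. As a minimal illustration of the underlying idea, the sketch below applies a strain-dependent modulus reduction factor to a Gazetas-type static stiffness expression for a rigid rectangular footing on a homogeneous half-space; the formula choice, Poisson's ratio, and numeric values are assumptions for illustration only:

    def vertical_static_stiffness(G, nu, L, B):
        """Gazetas-type vertical static stiffness of a rigid rectangular
        footing of plan 2B x 2L (L >= B) on a homogeneous half-space:
        K_z = (2*G*L / (1 - nu)) * (0.73 + 1.54 * (B/L)**0.75)."""
        return 2.0 * G * L / (1.0 - nu) * (0.73 + 1.54 * (B / L) ** 0.75)

    G_max = 45e6       # small-strain shear modulus [Pa], illustrative value
    nu = 0.40          # Poisson's ratio for clay, illustrative value
    L, B = 8.5, 1.0    # half-dimensions of the 2 m x 17 m strip footing [m]

    for reduction in (1.0, 0.5, 0.2):   # G/G_max at increasing strain levels
        K_z = vertical_static_stiffness(G_max * reduction, nu, L, B)
        print(f"G/Gmax = {reduction:.1f} -> K_z = {K_z / 1e6:.0f} MN/m")

Because the stiffness scales linearly with G, any reduction of G/G_max at larger strains translates directly into the drop in dynamic foundation stiffness that the abstract reports.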

Keywords: clay soil, impedance functions, soil-foundation interaction, sub-structure approach, reduced shear modulus

Procedia PDF Downloads 253
3198 Insights into Archaeological Human Sample Microbiome Using 16S rRNA Gene Sequencing

Authors: Alisa Kazarina, Guntis Gerhards, Elina Petersone-Gordina, Ilva Pole, Viktorija Igumnova, Janis Kimsis, Valentina Capligina, Renate Ranka

Abstract:

The human body is inhabited by a vast number of microorganisms, collectively known as the human microbiome, and there is tremendous interest in evolutionary changes in human microbial ecology, diversity, and function. The field of paleomicrobiology, the study of the ancient human microbiome, is powered by modern techniques of Next Generation Sequencing (NGS), which allow microbial genomic data to be extracted directly from an archaeological sample of interest. One of the major techniques is 16S rRNA gene sequencing, in which certain 16S rRNA gene hypervariable regions are amplified and sequenced. However, this method has limitations, including the taxonomic precision and efficacy of the different regions used. The aim of this study was to evaluate the phylogenetic sensitivity of different 16S rRNA gene hypervariable regions for microbiome studies in archaeological samples. Towards this aim, archaeological bone samples and corresponding soil samples from each burial environment were collected in medieval cemeteries in Latvia. The Ion 16S™ Metagenomics Kit targeting different 16S rRNA gene hypervariable regions was used for library construction (Ion Torrent technologies). Sequence data were analysed using appropriate bioinformatic techniques; alignment and taxonomic representation were done using the Mothur program. Sequences of the most abundant genera were further aligned to the E. coli 16S rRNA gene reference sequence using MEGA7 in order to identify the hypervariable region of each segment of interest. Our results showed that different hypervariable regions had different discriminatory power depending on the groups of microbes, as well as on the nature of the samples. On the basis of our results, we suggest that using a wider range of primers can provide a more accurate recapitulation of microbial communities in archaeological samples. Acknowledgements: This work was supported by ERAF grant Nr. 1.1.1.1/16/A/101.
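
The region-identification step can also be scripted outside MEGA7. Below is a minimal sketch, assuming Biopython is available, that locally aligns a read to the E. coli 16S reference and reports which hypervariable regions the aligned span overlaps; the region coordinates are approximate E. coli positions, and published boundaries vary between sources:

    from Bio import Align

    # Approximate hypervariable-region coordinates in E. coli 16S numbering;
    # published boundaries differ slightly between sources.
    REGIONS = {"V3": (433, 497), "V4": (576, 682), "V5": (822, 879)}

    aligner = Align.PairwiseAligner()
    aligner.mode = "local"
    aligner.match_score = 2
    aligner.mismatch_score = -1
    aligner.open_gap_score = -2
    aligner.extend_gap_score = -0.5

    def overlapping_regions(reference, read):
        """Locally align a read to the reference sequence (both plain
        strings, e.g. loaded with Bio.SeqIO) and list the hypervariable
        regions the aligned span overlaps."""
        best = aligner.align(reference, read)[0]
        blocks = best.aligned[0]               # aligned blocks on the reference
        start, end = blocks[0][0], blocks[-1][1]
        return [name for name, (lo, hi) in REGIONS.items()
                if start < hi and end > lo]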

Keywords: 16S rRNA gene, ancient human microbiome, archaeology, bioinformatics, genomics, microbiome, molecular biology, next-generation sequencing

Procedia PDF Downloads 174
3197 Extraction of Forest Plantation Resources in Selected Forest of San Manuel, Pangasinan, Philippines Using LiDAR Data for Forest Status Assessment

Authors: Mark Joseph Quinto, Roan Beronilla, Guiller Damian, Eliza Camaso, Ronaldo Alberto

Abstract:

Forest inventories are essential to assess the composition, structure, and distribution of forest vegetation, which can be used as baseline information for management decisions. Classical forest inventory is labor-intensive, time-consuming, and sometimes even dangerous. The use of Light Detection and Ranging (LiDAR) in forest inventory can overcome these restrictions. This study was conducted to determine the possibility of using LiDAR-derived data to extract high-accuracy forest biophysical parameters and as a non-destructive method for the forest status analysis of San Manuel, Pangasinan. Forest resource extraction was carried out using LAStools, GIS, ENVI, and .bat scripts with the available LiDAR data. The process includes the generation of derivatives such as a Digital Terrain Model (DTM), Canopy Height Model (CHM), and Canopy Cover Model (CCM) in .bat scripts, followed by the generation of 17 composite bands used in the extraction of forest cover classes with ENVI 4.8 and GIS software. The Diameter at Breast Height (DBH), Above Ground Biomass (AGB), and Carbon Stock (CS) were estimated for each classified forest cover, and tree count extraction was carried out using GIS. Subsequently, field validation was conducted for accuracy assessment. Results showed that the forest of San Manuel has 73% forest cover, much higher than the 10% canopy cover requirement. Of the extracted canopy heights, 80% range from 12 m to 17 m. The CS of the three forest covers, based on the AGB, was 20,819.59 kg per 20 x 20 m plot for closed broadleaf, 8,609.82 kg per 20 x 20 m plot for broadleaf plantation, and 15,545.57 kg per 20 x 20 m plot for open broadleaf. The average tree count for the forest plantation was 413 trees/ha. As such, the forest of San Manuel has a high percentage of forest cover and a high carbon stock.
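
The study's .bat/ENVI workflow is not shown. As a conceptual equivalent of the CHM and canopy-cover steps, the following minimal sketch assumes Python with rasterio and NumPy; the file names are placeholders and the 5 m canopy height threshold is an illustrative assumption, not the study's parameter:

    import numpy as np
    import rasterio

    # Placeholder raster names; the study generated these grids from the
    # LiDAR point cloud with .bat scripts.
    with rasterio.open("dsm.tif") as src:
        dsm = src.read(1).astype(float)   # digital surface model [m]
    with rasterio.open("dtm.tif") as src:
        dtm = src.read(1).astype(float)   # digital terrain model [m]

    chm = dsm - dtm                       # canopy height model [m]
    canopy = chm >= 5.0                   # pixels counted as canopy (assumed cutoff)
    cover_pct = 100.0 * canopy.mean()     # percent canopy cover over the raster
    print(f"canopy cover: {cover_pct:.1f}%")

A per-cover figure such as the 73% forest cover reported above would come from summarizing this canopy mask within each classified cover polygon.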

Keywords: carbon stock, forest inventory, LiDAR, tree count

Procedia PDF Downloads 362