Search results for: artificial agency
558 A Conceptual Framework of Integrated Evaluation Methodology for Aquaculture Lakes
Authors: Robby Y. Tallar, Nikodemus L., Yuri S., Jian P. Suen
Abstract:
Research in ecological water resources management addresses many non-trivial questions and seems today to be a branch of science that can strongly contribute to the study of complexity (physical, biological, ecological, socio-economic, environmental, and other aspects). Much of the existing literature on the different facets of these studies is technical and targeted at specific users. This study combined all of these aspects in an evaluation methodology for aquaculture lakes, with a paradigm that refers to hierarchical theory and to the effects of the spatial arrangement of an object within a space or local area. The process of developing the conceptual framework therefore represents a more integrated and applicable concept derived from grounded theory. A design of an integrated evaluation methodology for aquaculture lakes is presented. The method is based on the identification of a series of attributes that describe the status of aquaculture lakes using indicators from the aquaculture water quality index (AWQI), the aesthetic aquaculture lake index (AALI), and the rapid appraisal for fisheries index (RAPFISH). The preliminary preparation was accomplished as follows: first, the study area was characterized at different spatial scales. Second, an inventory of core data resources was compiled, such as the city master plan, water quality reports from the environmental agency, and related government regulations. Third, a ground-checking survey was completed to validate the on-site condition of the study area. Finally, to design an integrated evaluation methodology for aquaculture lakes, we integrated these indices and developed a rating score system called the Integrated Aquaculture Lake Index (IALI). The development of the IALI reflects a compromise among all aspects, and it responds to the need for concise information about the current status of aquaculture lakes through a comprehensive approach. The IALI was elaborated as a decision-aid tool for stakeholders to evaluate the impact and contribution of anthropogenic activities on the aquaculture lake environment. The conclusion was that while there is no denying that aquaculture lakes are under great threat from the pressure of increasing human activities, no evaluation methodology for aquaculture lakes can succeed by insisting on keeping lakes in pristine condition. The IALI developed in this work can be used as an effective, low-cost evaluation methodology for aquaculture lakes in developing countries, because it emphasizes simplicity and understandability: it must communicate to decision makers and experts alike. Moreover, stakeholders need help in perceiving their lakes so that sites can be accepted and valued by local people. For lake development, the accessibility and planning designation of the site are of decisive importance: local people want to know whether the lake is safe and whether it can be used.
Keywords: aesthetic value, AHP, aquaculture lakes, integrated lakes, RAPFISH
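The abstract does not give the IALI formula; the AHP keyword suggests a weighted aggregation of the three sub-indices. Below is a minimal sketch of such a composite index. The weights, 0-100 scales, and rating bands are hypothetical illustrations, not values from the study.

```python
# Minimal sketch of an AHP-style composite index in the spirit of the IALI.
# Sub-index names follow the abstract (AWQI, AALI, RAPFISH); the weights,
# scales, and rating bands below are hypothetical, not the paper's values.

def normalize(score: float, lo: float, hi: float) -> float:
    """Rescale a raw sub-index score to [0, 1]."""
    return max(0.0, min(1.0, (score - lo) / (hi - lo)))

# Hypothetical AHP-derived weights (must sum to 1).
WEIGHTS = {"AWQI": 0.5, "AALI": 0.2, "RAPFISH": 0.3}

def iali(awqi: float, aali: float, rapfish: float) -> float:
    """Composite Integrated Aquaculture Lake Index on a 0-100 scale."""
    parts = {
        "AWQI": normalize(awqi, 0, 100),
        "AALI": normalize(aali, 0, 100),
        "RAPFISH": normalize(rapfish, 0, 100),
    }
    return 100 * sum(WEIGHTS[k] * v for k, v in parts.items())

if __name__ == "__main__":
    score = iali(awqi=72, aali=55, rapfish=64)
    band = "good" if score >= 70 else "fair" if score >= 50 else "poor"
    print(f"IALI = {score:.1f} ({band})")  # IALI = 66.2 (fair)
```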
Procedia PDF Downloads 237
557 Prompt Design for Code Generation in Data Analysis Using Large Language Models
Authors: Lu Song Ma Li Zhi
Abstract:
With the rapid advancement of artificial intelligence technology, large language models (LLMs) have become a milestone in the field of natural language processing, demonstrating remarkable capabilities in semantic understanding, intelligent question answering, and text generation. These models are gradually penetrating various industries, particularly showcasing significant application potential in the data analysis domain. However, retraining or fine-tuning these models requires substantial computational resources and ample downstream task datasets, which poses a significant challenge for many enterprises and research institutions. Without modifying the internal parameters of the large models, prompt engineering techniques can rapidly adapt these models to new domains. This paper proposes a prompt design strategy aimed at leveraging the capabilities of large language models to automate the generation of data analysis code. By carefully designing prompts, data analysis requirements can be described in natural language, which the large language model can then understand and convert into executable data analysis code, thereby greatly enhancing the efficiency and convenience of data analysis. This strategy not only lowers the threshold for using large models but also significantly improves the accuracy and efficiency of data analysis. Our approach includes requirements for the precision of natural language descriptions, coverage of diverse data analysis needs, and mechanisms for immediate feedback and adjustment. Experimental results show that with this prompt design strategy, large language models perform exceptionally well in multiple data analysis tasks, generating high-quality code and significantly shortening the data analysis cycle. This method provides an efficient and convenient tool for the data analysis field and demonstrates the enormous potential of large language models in practical applications.
Keywords: large language models, prompt design, data analysis, code generation
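The abstract describes the strategy at a high level without showing a concrete prompt. Below is a minimal sketch of what such a prompt template with a feedback-and-adjustment loop might look like. The template wording, the `call_llm` placeholder, and the retry policy are illustrative assumptions, not the authors' actual design.

```python
# Minimal sketch of a prompt-design loop for LLM-based data analysis code
# generation. `call_llm` is a placeholder for any chat-completion client;
# the template text and retry policy are illustrative assumptions.

PROMPT_TEMPLATE = """You are a data analysis assistant.
Dataset description: {schema}
Task (natural language): {request}
Return only executable Python (pandas) code, no explanations."""

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text response."""
    raise NotImplementedError("wire up your LLM client here")

def generate_analysis_code(schema: str, request: str, max_rounds: int = 3) -> str:
    prompt = PROMPT_TEMPLATE.format(schema=schema, request=request)
    for _ in range(max_rounds):
        code = call_llm(prompt)
        try:
            compile(code, "<llm>", "exec")  # immediate feedback: syntax check
            return code
        except SyntaxError as err:
            # Adjustment mechanism: feed the error back into the next prompt.
            prompt += f"\nYour previous code failed with: {err}. Fix it."
    raise RuntimeError("no valid code after feedback rounds")
```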
Procedia PDF Downloads 39
556 Non-Invasive Data Extraction from Machine Display Units Using Video Analytics
Authors: Ravneet Kaur, Joydeep Acharya, Sudhanshu Gaur
Abstract:
Artificial Intelligence (AI) has the potential to transform manufacturing by improving shop floor processes such as production, maintenance, and quality. However, industrial datasets are notoriously difficult to extract in a real-time, streaming fashion, which negates potential AI benefits. A prime example is specialized industrial controllers operated by custom software, which complicates connecting them to an Information Technology (IT) based data acquisition network. Security concerns may also limit direct physical access to these controllers for data acquisition. To connect the Operational Technology (OT) data stored in these controllers to an AI application in a secure, reliable, and available way, we propose a novel Industrial IoT (IIoT) solution in this paper. In this solution, we demonstrate how video cameras can be installed on a factory shop floor to continuously capture images of the controller HMIs. We propose image pre-processing to segment the HMI into regions of streaming data and regions of fixed meta-data. We then evaluate the performance of multiple Optical Character Recognition (OCR) technologies, such as Tesseract and Google Vision, in recognizing the streaming data, and test them on typical factory HMIs under realistic lighting conditions. Finally, we use the meta-data to match the OCR output with the temporal, domain-dependent context of the data to improve the accuracy of the output. Our IIoT solution enables reliable and efficient data extraction, which will improve the performance of subsequent AI applications.
Keywords: human machine interface, industrial internet of things, internet of things, optical character recognition, video analytics
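A minimal sketch of the segment-then-OCR step is shown below, using OpenCV and Tesseract (via pytesseract). The ROI coordinates and character whitelist are hypothetical stand-ins for the paper's HMI segmentation; a real deployment would calibrate them per HMI layout, and the Google Vision path is omitted.

```python
# Minimal sketch of the segment-then-OCR step described above, using OpenCV
# and Tesseract (pytesseract). The ROI coordinates and threshold settings
# are hypothetical assumptions, not the paper's calibration.
import cv2
import pytesseract

def read_hmi_value(frame_path: str) -> str:
    img = cv2.imread(frame_path)
    # Hypothetical region of streaming data (y0:y1, x0:x1) on the HMI screen.
    roi = img[120:180, 300:520]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    # Binarize (Otsu) to cope with uneven shop-floor lighting.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Restrict Tesseract to digits, since this region shows a numeric reading.
    text = pytesseract.image_to_string(
        binary, config="--psm 7 -c tessedit_char_whitelist=0123456789."
    )
    return text.strip()

# Example: value = read_hmi_value("hmi_frame_0001.png")
```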
Procedia PDF Downloads 109
555 Historical Analysis of the Landscape Changes and the Eco-Environment Effects on the Coastal Zone of Bohai Bay, China
Authors: Juan Zhou, Lusan Liu, Yanzhong Zhu, Kuixuan Lin, Wenqian Cai, Yu Wang, Xing Wang
Abstract:
During the past few decades, the number of coastal land reclamation projects for residential, commercial, and industrial purposes has increased in more and more coastal cities of China, leading to the destruction of wetlands and the loss of sensitive marine habitats. Meanwhile, the influence and nature of these projects attract widespread public and academic concern. This study aimed to identify trends in landscape change (especially coastal reclamation) and ecological environment change, to understand how the two interact, and to offer a general scientific basis for the development of regional plans. A case study was carried out in the Bohai Bay area based on the analysis of remote sensing data. Land use maps were created for 1954, 1970, 1981, 1990, 2000, and 2010. Landscape metrics were calculated and illustrated that the degree of reclamation change was linked to the hydrodynamic environment and the macrobenthos community. The results indicated that the worst loss of initial areas occurred during 1954-1970, with 65.6% lost, mostly to salt fields; by 2010, the coastal reclamation area had increased by more than 200 km² of artificial landscape. Numerical simulations of the tidal current field in 2003 and 2010 showed that the offshore flow velocity became faster (from 2-5 cm/s to 10-20 cm/s) and the flow direction shifted. These significant changes of the coastline were not conducive to the dispersion and degradation of pollutants. Additionally, analysis of the dominant macrobenthos from 1958 to 2012 showed that Musculus senhousei (Benson, 1842), a disturbance-tolerant species, spread very fast and has been the predominant species in recent years.
Keywords: Bohai Bay, coastal reclamation, landscape change, spatial patterns
Procedia PDF Downloads 290
554 Recovery of Food Waste: Production of Dog Food
Authors: K. Nazan Turhan, Tuğçe Ersan
Abstract:
The population of the world is approximately 8 billion, and it is increasing uncontrollably, leading to an increase in consumption. This situation causes crucial problems, and food waste is one of them. The Food and Agriculture Organization of the United Nations (FAO) defines food waste as the discarding or alternative utilization of food that is safe and nutritious for human consumption, along the entire food supply chain from primary production to the end household consumer. In addition, FAO estimates that one-third of all food produced for human consumption is lost or wasted worldwide every year. Wasting food endangers natural resources and causes hunger. For instance, excessive amounts of food waste cause greenhouse gas emissions, contributing to global warming. Therefore, waste management has been gaining significance in the last few decades at both local and global levels due to the expected scarcity of resources for the increasing population of the world. There are several ways to recover food waste. According to the United States Environmental Protection Agency's Food Recovery Hierarchy, the recovery options are, from most to least preferred: source reduction, feeding hungry people, feeding animals, industrial uses, composting, and landfill/incineration. Bioethanol, biodiesel, biogas, agricultural fertilizer, and animal feed can be obtained from the waste generated by different food industries. In this project, feeding animals was selected as the food waste recovery method, and the food waste of a single plant was used to ensure ingredient uniformity. Grasshoppers were used as a protein source. In other words, the project developed a dog food product by recovering the plant's food waste through the following steps: the collected food waste and purchased grasshoppers were sterilized, dried, and pulverized, and then mixed with 60 g of agar-agar solution (4% w/v). Three different aromas were added separately to the samples to enhance flavour quality. Since the required amounts differ among dog breeds, fulfilling all nutritional needs is one of the challenges: there is a wide range of nutritional needs in terms of carbohydrates, protein, fat, sodium, calcium, and so on, and the requirements differ depending on age, gender, weight, height, and species. Therefore, the developed product contains average amounts of each substance so as not to cause any deficiency or surplus. On the other hand, it contains more protein than similar products on the market. The product was evaluated in terms of contamination and nutritional content. For contamination risk, E. coli and Salmonella detection experiments were performed, and the results were negative. For the nutritional value test, protein content analysis was done; the protein contents of different samples varied between 26.07% and 33.68%. In addition, water activity analysis was performed, and the water activity (aw) values of different samples ranged between 0.2456 and 0.4145.
Keywords: food waste, dog food, animal nutrition, food waste recovery
Procedia PDF Downloads 63
553 The Impact of Artificial Intelligence on Pharmacy and Pharmacology
Authors: Mamdouh Milad Adly Morkos
Abstract:
Despite having the greatest rates of mortality and morbidity in the world, low- and middle-income (LMIC) nations trail high-income nations in terms of the number of clinical trials, the number of qualified researchers, and the amount of research information specific to their people. Health inequities and the use of precision medicine may be hampered by a lack of local genomic data, clinical pharmacology and pharmacometrics competence, and training opportunities. These issues can be solved by carrying out health care infrastructure development, which includes data gathering and well-designed clinical pharmacology training in LMICs. International cooperation focused on enhancing education and infrastructure and promoting locally motivated clinical trials and research will be advantageous. This paper outlines various instances where clinical pharmacology knowledge could be put to use, including pharmacogenomic opportunities that could lead to better clinical guideline recommendations. Examples of how clinical pharmacology training can be successfully implemented in LMICs are also provided, including clinical pharmacology and pharmacometrics training programmes in Africa and a Tanzanian researcher's personal experience while on a training sabbatical in the United States. These training initiatives will profit from advocacy for clinical pharmacologists' employment prospects and career development pathways, which are gradually becoming acknowledged and established in LMICs. The advancement of training and research infrastructure to increase clinical pharmacologists' knowledge in LMICs would be extremely beneficial because they have a significant role to play in global health.
Keywords: electromagnetic solar system, nano-material, nano pharmacology, pharmacovigilance, quantum theory, clinical simulation, education, pharmacology, simulation, virtual learning, low- and middle-income, clinical pharmacology, pharmacometrics, career development pathways
Procedia PDF Downloads 81
552 Technology Management for Early Stage Technologies
Authors: Ming Zhou, Taeho Park
Abstract:
Early stage technologies are particularly challenging to manage due to their numerous and substantial uncertainties. Most results coming directly out of a research lab tend to be at an early, if not infant, stage, and a long and uncertain commercialization process awaits them. The majority of such lab technologies go nowhere and never get commercialized, so any effort or financial resources put into managing them turn fruitless. The high stakes naturally call for better results, which makes the patenting decision harder, since a good and well-protected patent goes a long way toward commercialization of a technology. Our preliminary research showed that there was no simple yet productive procedure for such valuation: most studies to date have been theoretical and overly comprehensive, with practical suggestions largely absent. Hence, we attempted to develop a simple and highly implementable procedure for efficient and scalable valuation. We thoroughly reviewed existing research, interviewed practitioners in the Silicon Valley area, and surveyed university technology offices. Instead of presenting another theoretical and exhaustive study, we aimed at developing practical guidance that a government agency and/or university technology office could easily deploy to move early stage technologies to the later steps of management. We provide a procedure to value technologies thriftily and make the patenting decision. A patenting index was developed using survey data and expert opinions. We identified the most important factors for the patenting decision using survey ratings; the ratings then assisted us in generating relative weights for the subsequent scoring and weighted-averaging step. More importantly, we validated our procedure by testing it with our practitioner contacts, whose inputs produced a general yet highly practical cut-off schedule; such a schedule of realistic practice has not previously appeared in the research we reviewed. Although a technology office may choose to deviate from our cut-offs, what we offer provides at least a simple and meaningful starting point. The procedure was welcomed by the practitioners in our expert panel and the university officers in our interview group. This research contributes to the understanding and practice of managing early stage technologies by instating a heuristically simple yet theoretically solid method for the patenting decision. Our findings identify the top decision factors, decision processes, and decision thresholds of key parameters, offering a practical perspective that complements extant knowledge. Our results could be affected by our sample size and slightly biased by our focus on the Silicon Valley area. Future research, blessed with larger data sets and more insights, may want to further train and validate our parameter values in order to obtain more consistent results and to analyze the decision factors for different industries.
Keywords: technology management, early stage technology, patent, decision
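The abstract describes survey-weighted scoring with a cut-off but does not list the factors or weights. Below is a minimal sketch of such a weighted patenting index; the factor names, weights, and the 0.6 threshold are hypothetical illustrations, whereas the paper derives its weights from survey ratings and expert opinion.

```python
# Minimal sketch of a weighted patenting index with a cut-off rule, in the
# spirit of the procedure above. Factor names, weights, and the threshold
# are hypothetical.

FACTOR_WEIGHTS = {          # hypothetical survey-derived relative weights
    "novelty": 0.30,
    "market_size": 0.25,
    "enforceability": 0.25,
    "development_cost": 0.20,  # scored so that higher = cheaper to develop
}

def patenting_index(scores: dict[str, float]) -> float:
    """Weighted average of factor scores, each on a 0-1 scale."""
    return sum(FACTOR_WEIGHTS[f] * scores[f] for f in FACTOR_WEIGHTS)

def decide(scores: dict[str, float], cutoff: float = 0.6) -> str:
    return "file patent" if patenting_index(scores) >= cutoff else "hold"

example = {"novelty": 0.8, "market_size": 0.6, "enforceability": 0.7,
           "development_cost": 0.4}
print(patenting_index(example), decide(example))  # 0.645 file patent
```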
Procedia PDF Downloads 342
551 Study on the Effect of Different Media on Green Roof Water Retention
Authors: Chen Zhi-Wei, Hsieh Wei-Fang
Abstract:
Taiwan's annual rainfall is 2.5 times the global average. Excessive urban development has steadily reduced green space and replaced it with large areas of impervious artificial surfaces, so storms during the rainy season cannot drain in time, flooding occurs repeatedly, and rainwater cannot be retained for reuse. As a result, the urban hydrological balance has been damaged, with negative effects on cities. Increasing green coverage has become the most effective countermeasure, and since urban land is limited, roof greening has gradually become an alternative. Green roofs are now among the policy initiatives of central and local governments for urban development, and countries such as the United States, Japan, and Singapore treat the development of roof greening as an important policy; it has become a trend of the times. In recent years, many experts and scholars have researched all aspects of roof greening, mostly the environmental benefits of green roofs, such as carbon reduction, cooling, and temperature regulation, but research on the water retention benefits of green roofs is rare. Therefore, this research reviews the literature on which media are suitable for roof greening and uses combinations of green base plates to simulate the green roof structure. By pairing different proportions of media with water retention plates and drainage boards, experiments test the water retention performance of different planting base plate combinations. The intended outcome is to test the effect of roof planting base mixes so that the relevant departments and agencies can draw on the results in future implementation of green roofs, promoting the development of green roofs and ultimately helping Taiwan achieve the sustainable development of the urban environment.
Keywords: thin-layer roof greening, planting medium, water efficiency
Procedia PDF Downloads 354
550 Employers’ Preferences when Employing Solo Self-employed: a Vignette Study in the Netherlands
Authors: Lian Kösters, Wendy Smits, Raymond Montizaan
Abstract:
The number of solo self-employed workers in the Netherlands has been increasing for years, and the relative increase is among the largest in the EU. To explain this increase, most studies have focused on the supply side: workers who offer themselves as solo self-employed. The number of studies that focus on the demand side, the employers who hire the solo self-employed, is still scarce. Studies of employer behaviour conducted until now show that employers mainly choose self-employed workers when they have a temporary need for specialist knowledge, but also during projects or production peaks. These studies do not provide insight into the employers' considerations across different contract types. In this study, interviews with employers were conducted and the available literature was consulted to provide an overview of the factors employers use to compare contract types. That input was used to set up a vignette study, carried out at the end of 2021 among almost 1,000 business owners, HR managers, and business leaders of Dutch companies. Each respondent was given two sets of five fictitious candidates for two possible positions in their organization and was asked to rank these candidates. The positions varied with regard to the type of tasks (core tasks or support tasks) and the time it took to train new people for the position. The respondents were asked additional questions about the positions, such as the required level of education, the duration, and the degree of predictability of the tasks. The fictitious candidates varied, among other things, in the type of contract under which they would come to work for the organization. The results were analyzed using a rank-ordered logit analysis. This vignette setup makes it possible to see which factors matter most to employers when deciding whether to hire a solo self-employed person rather than workers on other contracts. The results show no indications that employers would want to hire solo self-employed workers en masse; they prefer regular employee contracts. The probability of being chosen with a solo self-employed contract over someone who comes to work as a temporary employee is 32 percent. This probability is even lower than for on-call and temporary agency workers; for a permanent contract, the probability is 46 percent. The results indicate that employers consider knowledge and skills more important than the contract type, and that these can compensate: a solo self-employed candidate with 10 years of work experience has a 63 percent probability of being found more attractive by an employer than a temporary employee without work experience. This suggests that employers are willing to offer a contract that is less attractive for the employer if the worker so wishes. The results also show that the probability that a solo self-employed person is preferred over a candidate with a temporary employee contract is somewhat higher in business economics, administrative, and technical professions. No significant results were found for factors where it was expected that solo self-employed workers would be preferred more often, such as for unpredictable or temporary work.
Keywords: employer behaviour, rank-ordered logit analysis, solo self-employment, temporary contract, vignette study
Procedia PDF Downloads 73
549 Investigation and Identification of a Number of Precious and Semi-precious Stones Related to Bam Historical Citadel Using Micro Raman Spectroscopy and Scanning Electron Microscopy (SEM/EDX)
Authors: Nazli Darkhal
Abstract:
The use of gems and ornaments has been common in Iran since the beginning of history. The prosperity of the country, its wealth, and the interest of the people of this land in a luxurious and glorious life, combined with beauty, have always drawn attention to the gems and ornaments of the Iranian people. Iranians are famous in the world for their long history of collecting and recognizing precious stones; the unique treasure of the national jewels is one example. Raman spectroscopy is a vibrational spectroscopy method classified among the non-destructive study methods, and like other methods, it has several advantages as well as disadvantages and problems. Micro Raman spectroscopy is a variant in which an optical microscope is combined with a Raman instrument to provide more capabilities and advantages than the original method. In this way, with the help of Raman spectroscopy and a light microscope, more details can be observed in different parts of a historical sample, and natural or artificial pigments can be identified in a small part of it. The SEM/EDX electron microscope operates on the basis of the interaction of an electron beam with matter; the signals emitted from this interaction can be used to examine samples. In this article, in addition to introducing the micro Raman spectroscopy method, studies were conducted on the structure of three samples of stones from the historic citadel of Bam. Using this method to study precious and semi-precious stones requires only a short time and can provide complete information about the structure and nature of the samples. The results of the experiments and the gemology of the stones showed that the selected beads are agate and jasper, and they can be placed in the chalcedony group.
Keywords: Bam citadel, precious and semi-precious stones, Raman spectroscopy, scanning electron microscope
Procedia PDF Downloads 134
548 Applications of Evolutionary Optimization Methods in Reinforcement Learning
Authors: Rahul Paul, Kedar Nath Das
Abstract:
The paradigm of Reinforcement Learning (RL) has become prominent in training intelligent agents to make decisions in environments that are both dynamic and uncertain. The primary objective of RL is to optimize the policy of an agent in order to maximize the cumulative reward it receives throughout a given period. Nevertheless, the process of optimization presents notable difficulties as a result of the inherent trade-off between exploration and exploitation, the presence of extensive state-action spaces, and the intricate nature of the dynamics involved. Evolutionary Optimization Methods (EOMs) have garnered considerable attention as a supplementary approach to tackle these challenges, providing distinct capabilities for optimizing RL policies and value functions. The ongoing advancement of research in both RL and EOMs presents an opportunity for significant advancements in autonomous decision-making systems. The convergence of these two fields has the potential to have a transformative impact on various domains of artificial intelligence (AI) applications. This article highlights the considerable influence of EOMs in enhancing the capabilities of RL. Taking advantage of evolutionary principles enables RL algorithms to effectively traverse extensive action spaces and discover optimal solutions within intricate environments. Moreover, this paper emphasizes the practical implementations of EOMs in the field of RL, specifically in areas such as robotic control, autonomous systems, inventory problems, and multi-agent scenarios. The article highlights the utilization of EOMs in facilitating RL agents to effectively adapt, evolve, and uncover proficient strategies for complex tasks that may pose challenges for conventional RL approaches.
Keywords: machine learning, reinforcement learning, loss function, optimization techniques, evolutionary optimization methods
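The abstract stays at survey level; below is a minimal sketch of one common EOM-for-RL pattern, a simple evolution strategy that perturbs policy parameters and recombines the best performers. The `episode_return` stub stands in for a real environment rollout; the linear parameterization, population size, and noise scale are illustrative assumptions.

```python
# Minimal sketch of a (mu, lambda)-style evolution strategy (a simple EOM)
# optimizing the parameters of an RL policy. `episode_return` is a stub for
# a real environment rollout; all hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
DIM = 4  # number of policy parameters

def episode_return(theta: np.ndarray) -> float:
    """Stub fitness: replace with the cumulative reward of one episode
    played by the policy parameterized by `theta`."""
    target = np.array([1.0, -0.5, 2.0, 0.3])      # pretend-optimal policy
    return -float(np.sum((theta - target) ** 2))  # higher is better

def evolve(pop_size=50, elite=10, sigma=0.1, generations=200) -> np.ndarray:
    mean = np.zeros(DIM)
    for _ in range(generations):
        # Sample a population of perturbed policies around the current mean.
        pop = mean + sigma * rng.standard_normal((pop_size, DIM))
        fitness = np.array([episode_return(p) for p in pop])
        # Select the elite and recombine by averaging.
        elites = pop[np.argsort(fitness)[-elite:]]
        mean = elites.mean(axis=0)
    return mean

print(evolve().round(2))  # converges toward [1.0, -0.5, 2.0, 0.3]
```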
Procedia PDF Downloads 80
547 Growth and Differentiation of Mesenchymal Stem Cells on Titanium Alloy Ti6Al4V and Novel Beta Titanium Alloy Ti36Nb6Ta
Authors: Eva Filová, Jana Daňková, Věra Sovková, Matej Daniel
Abstract:
Titanium alloys are biocompatible metals that are widely used in clinical practice as load-bearing implants. Chemical modification may influence cell adhesion, proliferation, and differentiation, as well as the stiffness of the material. The aim of the study was to evaluate the adhesion, growth, and differentiation of pig mesenchymal stem cells on the novel beta titanium alloy Ti36Nb6Ta compared to the standard medical titanium alloy Ti6Al4V. Discs of Ti36Nb6Ta and Ti6Al4V alloy were sterilized with ethanol, put in 48-well plates, seeded with pig mesenchymal stem cells at a density of 60×10³/cm², and cultured in Minimum Essential Medium (Sigma) supplemented with 10% fetal bovine serum and penicillin/streptomycin. Cell viability was evaluated using an MTS assay (CellTiter 96® AQueous One Solution Cell Proliferation Assay; Promega) and cell proliferation using the Quant-iT™ dsDNA Assay Kit (Life Technologies). Cells were stained immunohistochemically using a monoclonal antibody against beta-actin and a secondary antibody conjugated with AlexaFluor®488, and subsequently the spread area of the cells was measured. Cell differentiation was evaluated by an alkaline phosphatase assay using p-nitrophenyl phosphate (pNPP) as a substrate; the reaction was stopped with NaOH, and the absorbance was measured at 405 nm. Osteocalcin, a specific bone marker, was stained immunohistochemically and visualized using confocal microscopy; the fluorescence intensity was analyzed and quantified. Moreover, gene expression of the osteogenic markers osteocalcin and type I collagen was evaluated by real-time reverse transcription PCR (qRT-PCR). For statistical evaluation, one-way ANOVA followed by the Student-Newman-Keuls method was used; for qRT-PCR, the nonparametric Kruskal-Wallis test and Dunn's multiple comparison test were used. The absorbance in the MTS assay was significantly higher on the titanium alloy Ti6Al4V compared to the beta titanium alloy Ti36Nb6Ta on days 7 and 14. Mesenchymal stem cells were well spread on both alloys, but no difference in spread area was found. No differences were observed in the alkaline phosphatase assay, in the fluorescence intensity of osteocalcin, or in the expression of the type I collagen and osteocalcin genes. Higher expression of type I collagen compared to osteocalcin was observed for cells on both alloys. Both the beta titanium alloy Ti36Nb6Ta and the titanium alloy Ti6Al4V supported mesenchymal stem cell adhesion, proliferation, and osteogenic differentiation. The novel beta titanium alloy Ti36Nb6Ta is a promising material for bone implantation. The project was supported by the Czech Science Foundation: grant No. 16-14758S, the Grant Agency of Charles University: grant No. 1246314, and by the Ministry of Education, Youth and Sports NPU I: LO1309.
Keywords: beta titanium, cell growth, mesenchymal stem cells, titanium alloy, implant
Procedia PDF Downloads 316
546 Deep Learning-Based Approach to Automatic Abstractive Summarization of Patent Documents
Authors: Sakshi V. Tantak, Vishap K. Malik, Neelanjney Pilarisetty
Abstract:
A patent is an exclusive right granted for an invention: a product or a process that provides an innovative method of doing something or offers a new technical solution to a problem. A patent can be obtained by making the technical information and details about the invention publicly available. The patent owner has the exclusive right to prevent or stop anyone from using the patented invention for commercial purposes; any commercial usage, distribution, import, or export of a patented invention or product requires the patent owner's consent. It has been observed that the central and important parts of patents are scripted in idiosyncratic and complex linguistic structures that can be difficult to read, comprehend, or interpret for the masses. The abstracts of these patents tend to obfuscate the precise nature of the patent instead of clarifying it via direct and simple linguistic constructs. This makes it necessary to provide efficient access to this knowledge via concise and transparent summaries. However, due to these complex and repetitive linguistic constructs and extremely long sentences, common extraction-oriented automatic text summarization methods cannot be expected to show remarkable performance when applied to patent documents. Other, more content-oriented or abstractive summarization techniques are able to perform much better and generate more concise summaries. This paper proposes an efficient summarization system for patents using artificial intelligence, natural language processing, and deep learning techniques to condense the knowledge and essential information from a patent document into a single summary that is easier to understand, without redundant formatting or difficult jargon.
Keywords: abstractive summarization, deep learning, natural language processing, patent document
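A minimal sketch of abstractive patent summarization with a pretrained transformer is shown below, using the Hugging Face `transformers` summarization pipeline. The model name and chunking scheme are illustrative choices, not the system the paper built.

```python
# Minimal sketch of abstractive summarization of a patent description with a
# pretrained transformer via the Hugging Face `transformers` pipeline. The
# model choice and chunk size are illustrative assumptions.
from transformers import pipeline

summarizer = pipeline("summarization", model="google/pegasus-xsum")

def summarize_patent(text: str, max_words_per_chunk: int = 350) -> str:
    # Patents are long, so split into chunks the model accepts, summarize
    # each chunk abstractively, then join the partial summaries.
    words = text.split()
    chunks = [" ".join(words[i:i + max_words_per_chunk])
              for i in range(0, len(words), max_words_per_chunk)]
    parts = summarizer(chunks, max_length=60, min_length=15, truncation=True)
    return " ".join(p["summary_text"] for p in parts)

# Example: print(summarize_patent(open("patent_description.txt").read()))
```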
Procedia PDF Downloads 123
545 The Sustainable Governance of Aquifer Injection Using Treated Coal Seam Gas Water in Queensland, Australia: Lessons for Integrated Water Resource Management
Authors: Jacqui Robertson
Abstract:
The sustainable governance of groundwater is of the utmost importance in an arid country like Australia. Groundwater has been relied on by our agricultural and pastoral communities since the State was settled by European colonialists. Nevertheless, the rapid establishment of a coal seam gas (CSG) industry in Queensland, Australia, has had extensive impacts on the pre-existing groundwater users. Managed aquifer recharge of important aquifers in Queensland, using treated coal seam gas produced water, has been used to reduce the impacts of CSG development; however, the process has not been widely adopted. Negative environmental outcomes are now acknowledged not only as engineering, scientific, or technical problems to be solved but also as the result of governance failures. An analysis of the regulatory context for aquifer injection using treated CSG water in Queensland, Australia, using Ostrom's Common Pool Resource (CPR) theory and a 'heat map' designed by the author, highlights the importance of governance arrangements. The analysis reveals the costs and benefits for relevant stakeholders of artificial recharge of groundwater resources in this context. The research also reveals missed opportunities to further active management of the aquifer and resolve existing conflicts between users. The research illustrates the importance of strategically and holistically evaluating innovations in technology that impact water resources, to reveal the incentives that shape resource users' behaviors. The paper presents a proactive step that can be adapted to support integrated water resource management and sustainable groundwater development.
Keywords: managed aquifer recharge, groundwater regulation, common-pool resources, integrated water resource management, Australia
Procedia PDF Downloads 237
544 An Approach to Building a Recommendation Engine for Travel Applications Using Genetic Algorithms and Neural Networks
Authors: Adrian Ionita, Ana-Maria Ghimes
Abstract:
The lack of features and design, and the failure to promote an integrated booking application, are some of the reasons why most online travel platforms only automate old booking processes, limiting themselves to the integration of a small number of services without addressing the user experience. This paper presents a practical study on how to improve travel applications by creating user profiles through data mining based on neural networks and genetic algorithms. Choices made by users and their 'friends' in the social network context can be considered input data for a recommendation engine. The purpose of using these algorithms and this design is to improve the user experience and to deliver more features to users. The paper aims to highlight a broad range of improvements that could be applied to travel applications in terms of design and service integration, while the main scientific contribution remains the technical implementation of the neural network solution. The choice of technologies is also motivated by the initiative of some online booking providers that have publicly stated that they use neural-network-related designs. These companies use similar Big Data technologies to provide recommendations for hotels, restaurants, and cinemas with a neural-network-based recommendation engine that builds a user 'DNA profile'. This implementation of the 'profile', a collection of neural networks trained on previous user choices, can improve the usability and design of any type of application.
Keywords: artificial intelligence, big data, cloud computing, DNA profile, genetic algorithms, machine learning, neural networks, optimization, recommendation system, user profiling
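Below is a minimal sketch of the genetic-algorithm side of such a profile: weights over candidate features are evolved so that offers the user actually booked score higher than rejected ones. The features, fitness definition, and GA settings are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a genetic algorithm evolving the feature weights of a
# travel user profile from booking history. All settings are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N_FEATURES = 5  # e.g. price, rating, distance, friends' likes, amenities

# Toy history: rows are candidate offers; `liked` marks what the user booked.
offers = rng.random((40, N_FEATURES))
liked = (offers @ np.array([0.1, 0.4, -0.2, 0.5, 0.2]) > 0.45)  # hidden taste

def fitness(w: np.ndarray) -> float:
    """How well weights `w` separate liked offers from the rest."""
    scores = offers @ w
    return scores[liked].mean() - scores[~liked].mean()

def run_ga(pop_size=60, generations=100, mut_sigma=0.05) -> np.ndarray:
    pop = rng.standard_normal((pop_size, N_FEATURES))
    for _ in range(generations):
        fit = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(fit)[-pop_size // 2:]]                 # selection
        kids = (parents[rng.integers(0, len(parents), pop_size)] +
                parents[rng.integers(0, len(parents), pop_size)]) / 2   # crossover
        pop = kids + mut_sigma * rng.standard_normal(kids.shape)        # mutation
    return pop[np.argmax([fitness(w) for w in pop])]

profile_weights = run_ga()
print(profile_weights.round(2))  # approximates the hidden taste (up to scale)
```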
Procedia PDF Downloads 163
543 Benchmarking Machine Learning Approaches for Forecasting Hotel Revenue
Authors: Rachel Y. Zhang, Christopher K. Anderson
Abstract:
A critical aspect of revenue management is a firm’s ability to predict demand as a function of price. Historically, hotels have used simple time series models (regression and/or pick-up based models) owing to the complexities of trying to build causal models of demand. Machine learning approaches are slowly attracting attention owing to their flexibility in modeling relationships. This study provides an overview of approaches to forecasting hospitality demand, focusing on the opportunities created by machine learning approaches, including K-Nearest-Neighbors, Support Vector Machine, Regression Tree, and Artificial Neural Network algorithms. The out-of-sample performances of the above approaches to forecasting hotel demand are illustrated using a proprietary sample of market-level (24 properties) transactional data for Las Vegas, NV. Causal predictive models can be built and evaluated owing to the availability of market-level (versus firm-level) data. This research also compares and contrasts the accuracy of firm-level models (i.e., predictive models for hotel A using only hotel A's data) with models using market-level data (prices, review scores, location, chain scale, etc. for all hotels within the market). The proposed models will be valuable for hotel revenue prediction given the basic characteristics of a hotel property, and can be applied in performance evaluation for an existing hotel. The findings will unveil the features that play key roles in a hotel's revenue performance, which have considerable potential usefulness in both revenue prediction and evaluation.
Keywords: hotel revenue, k-nearest-neighbors, machine learning, neural network, prediction model, regression tree, support vector machine
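A minimal sketch of the benchmarking setup is shown below, comparing the four model families with scikit-learn. The synthetic data stands in for the proprietary Las Vegas sample; the features, hyperparameters, and error metric are illustrative assumptions.

```python
# Minimal sketch of the four-model benchmark described above, with synthetic
# data standing in for the proprietary market-level sample.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
# Toy features: price, review score, weekend flag -> rooms demanded.
X = np.column_stack([rng.uniform(80, 300, 1000),      # price
                     rng.uniform(3.0, 5.0, 1000),     # review score
                     rng.integers(0, 2, 1000)])       # weekend
y = 400 - 1.2 * X[:, 0] + 40 * X[:, 1] + 60 * X[:, 2] + rng.normal(0, 15, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "KNN": make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=10)),
    "SVM": make_pipeline(StandardScaler(), SVR(C=100.0)),
    "Regression Tree": DecisionTreeRegressor(max_depth=6, random_state=0),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32, 32),
                                      max_iter=2000, random_state=0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: out-of-sample MAE = "
          f"{mean_absolute_error(y_te, model.predict(X_te)):.1f}")
```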
Procedia PDF Downloads 132
542 MAGNI Dynamics: A Vision-Based Kinematic and Dynamic Upper-Limb Model for Intelligent Robotic Rehabilitation
Authors: Alexandros Lioulemes, Michail Theofanidis, Varun Kanal, Konstantinos Tsiakas, Maher Abujelala, Chris Collander, William B. Townsend, Angie Boisselle, Fillia Makedon
Abstract:
This paper presents a home-based robot-rehabilitation instrument, called "MAGNI Dynamics", that utilizes a vision-based kinematic/dynamic module and an adaptive haptic feedback controller. The system is expected to provide personalized rehabilitation by adjusting its resistive and supportive behavior according to a fuzzy intelligence controller that acts as an inference system, correlating the user's performance to different stiffness factors. The vision module uses the Kinect's skeletal tracking to monitor the user's effort in an unobtrusive and safe way by estimating the torque that affects the user's arm. The system's torque estimations are validated against electromyographic data captured during primitive motions (shoulder abduction and shoulder forward flexion). Moreover, we present and analyze how the Barrett WAM generates a force field with a haptic controller to support or challenge the users. Experiments show that shifting the proportional value, which corresponds to different stiffness factors of the haptic path, can potentially help the user improve his or her motor skills. Finally, potential areas for future research are discussed that address how a rehabilitation robotic framework may include multi-sensing data to improve the user's recovery process.
Keywords: human-robot interaction, Kinect, kinematics, dynamics, haptic control, rehabilitation robotics, artificial intelligence
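Below is a minimal sketch of the proportional force-field idea: the force pulls the hand toward the reference path, and a performance-driven stiffness factor decides how supportive or resistive the field is. The gain values and the performance-to-stiffness mapping are illustrative assumptions, not the MAGNI Dynamics controller.

```python
# Minimal sketch of a performance-adjusted proportional haptic force field.
# The stiffness bounds and mapping are hypothetical.
import numpy as np

def stiffness_from_performance(score: float) -> float:
    """Map a normalized performance score in [0, 1] to a stiffness gain:
    struggling users (low score) get strong support, proficient users a
    weaker, more challenging field."""
    k_min, k_max = 5.0, 50.0  # N/m, hypothetical bounds
    return k_max - score * (k_max - k_min)

def force_field(hand_pos: np.ndarray, path_point: np.ndarray,
                score: float) -> np.ndarray:
    """Proportional restoring force toward the nearest point on the path."""
    k = stiffness_from_performance(score)
    return -k * (hand_pos - path_point)

# Example: a struggling user (score 0.2) off the path gets a strong pull.
f = force_field(np.array([0.30, 0.10, 0.00]),
                np.array([0.25, 0.12, 0.00]), score=0.2)
print(f.round(2))  # corrective force vector in newtons
```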
Procedia PDF Downloads 329
541 A Framework for Auditing Multilevel Models Using Explainability Methods
Authors: Debarati Bhaumik, Diptish Dey
Abstract:
Multilevel models, increasingly deployed in industries such as insurance, food production, and entertainment within functions such as marketing and supply chain management, need to be transparent and ethical. Applications usually result in binary classification within groups or hierarchies based on a set of input features. Using open-source datasets, we demonstrate that popular explainability methods, such as SHAP and LIME, consistently underperform in accuracy when interpreting these models: they fail to predict the order of feature importance and the magnitudes, and occasionally even the nature of the feature contribution (negative versus positive contribution to the outcome). Besides accuracy, the computational intractability of SHAP for binomial classification is a cause for concern. For transparent and ethical applications of these hierarchical statistical models, sound audit frameworks need to be developed. In this paper, we propose an audit framework for the technical assessment of multilevel regression models focusing on three aspects: (i) model assumptions and statistical properties, (ii) model transparency using different explainability methods, and (iii) discrimination assessment. To this end, we undertake a quantitative approach and compare intrinsic model methods with SHAP and LIME. The framework comprises a shortlist of KPIs, such as PoCE (Percentage of Correct Explanations) and MDG (Mean Discriminatory Gap) per feature, for each of these three aspects. A traffic-light risk assessment method is furthermore coupled to these KPIs. The audit framework will assist regulatory bodies in performing conformity assessments of AI systems that use multilevel binomial classification models at businesses. It will also help businesses deploying multilevel models to be future-proof and aligned with the European Commission's proposed Regulation on Artificial Intelligence.
Keywords: audit, multilevel model, model transparency, model explainability, discrimination, ethics
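Below is a minimal sketch of a PoCE-style KPI: the share of instances for which an explainability method ranks the features in the same order of importance as the model's own known coefficients. This formula is our reading of "Percentage of Correct Explanations"; the paper defines the exact KPI.

```python
# Minimal sketch of a PoCE-style KPI. The exact definition is the paper's;
# this version counts instances whose attribution ranking matches the
# reference coefficient ranking.
import numpy as np

def poce(attributions: np.ndarray, true_coefs: np.ndarray) -> float:
    """attributions: (n_instances, n_features) per-instance explanation
    values, e.g. from SHAP or LIME; true_coefs: reference coefficients of
    the (multilevel) model. Returns the fraction of instances whose
    importance ranking matches the reference ranking."""
    ref_rank = np.argsort(-np.abs(true_coefs))
    expl_rank = np.argsort(-np.abs(attributions), axis=1)
    correct = (expl_rank == ref_rank).all(axis=1)
    return float(correct.mean())

# Toy check: explanations that copy the coefficients score 1.0.
coefs = np.array([0.8, -0.3, 0.1])
perfect = np.tile(coefs, (5, 1))
noisy = perfect + np.random.default_rng(0).normal(0, 0.5, perfect.shape)
print(poce(perfect, coefs), poce(noisy, coefs))  # 1.0 and something lower
```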
Procedia PDF Downloads 94
540 Cognitive Science Based Scheduling in Grid Environment
Authors: N. D. Iswarya, M. A. Maluk Mohamed, N. Vijaya
Abstract:
Grid is an infrastructure that allows the deployment of large distributed data sets from multiple locations to reach a common goal. Scheduling data-intensive applications becomes challenging as the data sets involved are very large. Only two solutions exist to tackle this challenge: first, the computation that requires huge data sets can be transferred to the data site; second, the required data sets can be transferred to the computation site. In the former scenario, the computation cannot be transferred, since the servers are storage/data servers with little or no computational capability; hence, the second scenario is considered for further exploration. During scheduling, transferring huge data sets from one site to another requires large network bandwidth. To mitigate this issue, this work focuses on incorporating cognitive science into scheduling. Cognitive science is the study of the human brain and its related activities, and current research mainly focuses on incorporating it into various computational modeling techniques. In this work, the problem-solving approach of the human brain is studied and incorporated into data-intensive scheduling in grid environments. A cognitive engine (CE) is designed and deployed at various grid sites. The intelligent agents in the CE help analyze each request and create the knowledge base. Depending upon the link capacity, a decision is taken on whether to transfer the data sets or to partition them. The agents also predict the next request, so that the requesting site can be served with data sets in advance; this reduces data availability time and data transfer time. A replica catalog and a metadata catalog created by the agents assist in the decision-making process.
Keywords: data grid, grid workflow scheduling, cognitive artificial intelligence
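A minimal sketch of the transfer-or-partition decision attributed to the cognitive engine's agents is shown below. The deadline model and partitioning rule are hypothetical illustrations; the paper bases the decision on link capacity and an agent-built knowledge base.

```python
# Minimal sketch of the transfer-or-partition decision. The deadline model
# and partitioning rule are hypothetical.

def schedule_transfer(dataset_gb: float, link_gbps: float,
                      deadline_s: float) -> dict:
    """Decide whether to ship the whole data set or partition it across
    replicas, given the link capacity and a delivery deadline."""
    transfer_s = dataset_gb * 8 / link_gbps  # naive transfer-time estimate
    if transfer_s <= deadline_s:
        return {"action": "transfer whole data set",
                "estimated_seconds": round(transfer_s, 1)}
    # Partition so each chunk fits the deadline; chunks can then be fetched
    # in parallel from different replica sites.
    n_parts = int(transfer_s // deadline_s) + 1
    return {"action": f"partition into {n_parts} chunks",
            "chunk_gb": round(dataset_gb / n_parts, 2)}

print(schedule_transfer(dataset_gb=500, link_gbps=1.0, deadline_s=1800))
# -> partition into 3 chunks of ~166.67 GB each
```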
Procedia PDF Downloads 394
539 Parameters Identification and Sensitivity Study for Abrasive WaterJet Milling Model
Authors: Didier Auroux, Vladimir Groza
Abstract:
This work is part of the STEEP Marie Curie ITN project, and it focuses on the identification of unknown parameters of the proposed generic Abrasive WaterJet Milling (AWJM) PDE model, which appears as an ill-posed inverse problem. The need to study this problem comes from industrial milling applications, where the ability to predict and model the final surface with high accuracy is one of the primary tasks in the absence of any knowledge of the model parameters that should be used. In this framework, we propose identifying the model parameters by minimizing a cost function that measures the difference between the experimental and numerical solutions. The adjoint approach, based on the corresponding Lagrangian, makes it possible to find the unknowns of the AWJM model and their optimal values, which could be used to reproduce the required trench profile. Due to the complexity of the nonlinear problem and the large number of model parameters, we use an automatic differentiation software tool (TAPENADE) for the adjoint computations. By adding noise to the artificial data, we show that the parameter identification problem is in fact highly unstable and depends strictly on the input measurements. Regularization terms can be used effectively to deal with the presence of data noise and to improve the correctness of the identification. Based on this approach, we present 2D and 3D results for the identification of the model parameters and for the surface prediction, both with self-generated data and with measurements obtained from real production. Considering different types of model and measurement errors allows us to obtain results acceptable for manufacturing and to expect proper identification of the unknowns. This approach also gives us the ability to extend the research to more complex cases with different types of model and measurement errors, as well as a 3D time-dependent model with variations of the jet feed speed.
Keywords: abrasive waterjet milling, inverse problem, model parameters identification, regularization
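Below is a minimal sketch of the regularized least-squares identification idea: minimize the misfit between a modeled trench profile and noisy measurements, with a Tikhonov term standing in for the paper's regularization. The exponential trench model, the weight `alpha`, and the finite-difference gradients are illustrative assumptions; the paper uses a PDE model with adjoint (TAPENADE) gradients.

```python
# Minimal sketch of regularized least-squares parameter identification for a
# toy trench-profile model. The model form, noise level, and alpha are
# illustrative; the paper's AWJM model is a PDE solved with adjoint gradients.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-2.0, 2.0, 200)  # position across the trench (mm)

def trench_model(theta: np.ndarray) -> np.ndarray:
    """Toy stand-in for the AWJM PDE: depth = a * exp(-(x/w)**2)."""
    a, w = theta
    return a * np.exp(-(x / w) ** 2)

true_theta = np.array([0.8, 0.5])  # "unknown" parameters
rng = np.random.default_rng(3)
u_obs = trench_model(true_theta) + rng.normal(0, 0.02, x.size)  # noisy data

def cost(theta: np.ndarray, alpha: float = 1e-3) -> float:
    misfit = trench_model(theta) - u_obs
    # Least-squares misfit plus Tikhonov regularization against noise.
    return 0.5 * np.sum(misfit ** 2) + 0.5 * alpha * np.sum(theta ** 2)

result = minimize(cost, x0=np.array([0.3, 1.0]), method="L-BFGS-B")
print(result.x.round(3))  # close to [0.8, 0.5] despite the added noise
```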
Procedia PDF Downloads 316
538 The Sustained Utility of Japan's Human Security Policy
Authors: Maria Thaemar Tana
Abstract:
The paper examines the policy and practice of Japan's human security. Specifically, it asks: how does Japan's shift towards a more proactive defence posture affect the place of human security in its foreign policy agenda? And, as a corollary, how is Japan sustaining its human security policy? The objective of this research is to understand how Japan, chiefly through the Ministry of Foreign Affairs (MOFA) and the Japan International Cooperation Agency (JICA), sustains the concept of human security as a policy framework, and to show how and why Japan continues to include the concept in its overall foreign policy agenda. In light of recent developments in Japan's security policy, which essentially result from the changing security environment, human security appears to be gradually losing relevance. The paper, however, argues that despite the strategic challenges Japan faced and is facing, as well as the apparent decline of its economic diplomacy, human security remains an area of critical importance for Japanese foreign policy. In fact, as Japan becomes more proactive in its international affairs, the strategic value of human security also increases. Human security was initially envisioned to help Japan compensate for its weaknesses in the areas of traditional security, but as Japan moves closer to a more activist foreign policy, the soft policy of human security complements its hard security policies. Using the framework of neoclassical realism (NCR), the paper recognizes that policy-making is essentially a convergence of incentives and constraints at the international and domestic levels. The theory posits that there is no perfect 'transmission belt' linking material power on the one hand and actual foreign policy on the other. State behavior is influenced by both international- and domestic-level variables, but while systemic pressures and incentives determine the general direction of foreign policy, they are not strong enough to determine the exact details of state conduct. Internal factors such as leaders' perceptions, domestic institutions, and domestic norms serve as intervening variables between the international system and foreign policy. Applied to this study, Japan's sustained utilization of human security as a foreign policy instrument (the dependent variable) is essentially a result of systemic pressures (independent variables, acting indirectly) and domestic processes (intervening variables, acting directly). Two cases of Japan's human security practice are examined in two regions and two time periods: Iraq in the Middle East (2001-2010) and South Sudan in Africa (2011-2017). The cases show that despite the different motives behind Japan's decision to participate in these international peacekeeping and peace-building operations, human security continues to be incorporated in both rhetoric and practice, demonstrating that it was and remains an important diplomatic tool. Variables at the international and domestic levels are examined to understand how their interaction results in changes and continuities in Japan's human security policy.
Keywords: human security, foreign policy, neoclassical realism, peace-building
Procedia PDF Downloads 133
537 Morphometric Parameters and Evaluation of Persian Fallow Deer Semen in Dashenaz Refuge in Iran
Authors: Behrang Ekrami, Amin Tamadon
Abstract:
Persian fallow deer (Dama dama mesopotamica) belongs to the family Cervidae and is only found in a few protected areas in the northwest, north, and southwest of Iran. The aims of this study were to analyze inbreeding and the morphometric parameters of semen in male Persian fallow deer in order to investigate the cause of the reduced fertility of this endangered species in Dasht-e-Naz National Refuge, Sari, Iran. Semen was collected randomly from four adult bucks (dehorned and horned) during the breeding and non-breeding seasons using an artificial vagina. Twelve blood samples were taken from Persian fallow deer, and mitochondrial DNA was extracted, amplified, and sequenced, and then considered for genetic analysis. Persian fallow deer semen, both with normal and abnormal spermatozoa, is similar to that of domestic ruminants but much smaller and difficult to observe at primary observation. The ejaculates collected after the mating season contained abnormal spermatozoa, debris, and accessory gland secretions in horned bucks, and accessory gland secretions free of any spermatozoa in dehorned or early velvet-budding bucks. Microscopic evaluation in all four bucks during the mating season showed a mean concentration of 9×10⁶ spermatozoa/ml. The mean ± SD of age, testis length, and testis width were 4.60 ± 1.52 years, 3.58 ± 0.32 cm, and 1.86 ± 0.09 cm, respectively. The genetic results identified 1120 loci (assuming each nucleotide is a locus), of which 377 were polymorphic. In conclusion, the reduced fertility of male Persian fallow deer may be caused by inbreeding of the protected herd in the limited area of Dasht-e-Naz National Refuge.
Keywords: Persian fallow deer, spermatozoa, reproductive characteristics, morphometric parameters
Procedia PDF Downloads 577
536 Quality of Ram Semen in Relation to Scrotal Biometry
Authors: M. M. Islam, S. Sharmin, M. Shah Newaz, N. S. Juyena, M. M. Rahman, P. K. Jha, F. Y. Bari
Abstract:
The aim of the present study was to select high-quality rams by measuring scrotal biometry, which has an effect on semen parameters. Ten rams were selected for the study. Eight ejaculates were collected from each ram using the artificial vagina method. Scrotal circumference was measured before and after semen collection on a weekly basis using a scrotal tape, and biometries of the scrotum (scrotal length and scrotal volume) were calculated. Semen was evaluated for macroscopic and microscopic characteristics. The average estimated scrotal circumference (cm) and scrotal volume (cm³) in the 8 different age groups were 17.16±0.05 cm and 61.30±0.70 cm³, 17.17±0.62 cm and 63.67±4.49 cm³, 17.22±0.52 cm and 64.90±4.21 cm³, 17.72±0.37 cm and 67.10±4.20 cm³, 18.41±0.35 cm and 69.52±4.12 cm³, 18.45±0.36 cm and 77.17±3.81 cm³, 18.55±0.41 cm and 78.72±4.90 cm³, and 19.10±0.30 cm and 87.35±5.45 cm³, respectively. Body weight, scrotal circumference, and scrotal volume increased with age (P < 0.05). The body weight of the 381-410 days age group (13.62±1.48 kg) was significantly higher than that of the 169-200 days (10.17±0.05 kg) and 201-230 days (10.42±1.18 kg) groups (p < 0.05). The scrotal circumference (SC) of the 381-410 days group (19.10±0.30 cm) was significantly higher (p < 0.05) than that of the other groups. In the 381-410 days group, the scrotal volume (SCV) (87.35±5.45 cm³) was significantly higher than in the first five groups (p < 0.05). Both scrotal circumference and scrotal volume development were positively correlated with increasing body weight (R²=0.51). Semen volume increased with age, varying from 0.35±0.00 ml to 1.15±0.26 ml. The semen volume of the 381-410 days group (1.15±0.26 ml) was significantly higher than that of the other age groups (p < 0.05), except the 351-380 days group (p > 0.05). Mass activity of the different age groups varied from 2.75 (±0.35) to 4.25 (±0.29) on a scale of 1-5. Sperm concentration and progressive motility (%) progressively improved with age, but significant changes in these parameters were seen when the animals reached 291 days of age or more (p < 0.05). However, the percentage of normal spermatozoa improved significantly from 261 days of age or more. Mass activity was positively correlated with sperm concentration (R²=0.568) and progressive motility (%) (R²=0.616). The relationships of semen volume with body weight, scrotal measurements, and sperm concentration indicate that they are useful in evaluating rams for breeding soundness and for the genetic improvement of fertility in indigenous rams.
Keywords: breeding soundness, ram, semen quality, scrotal biometry
Procedia PDF Downloads 366
535 Space Weather and Earthquakes: A Case Study of Solar Flare X9.3 Class on September 6, 2017
Authors: Viktor Novikov, Yuri Ruzhin
Abstract:
The studies completed to date on the relation of the Earth's seismicity to solar processes provide fuzzy and contradictory results. To verify the idea that solar flares can trigger earthquakes, we analyzed a powerful surge of solar flare activity early in September 2017, while the 24th solar cycle was approaching its minimum, which was accompanied by significant space weather disturbances. On September 6, 2017, a group of sunspots, AR2673, generated a large solar flare of class X9.3, the strongest flare of the past twelve years. Its explosion produced a coronal mass ejection partially directed towards the Earth. We carried out a statistical analysis of the USGS and EMSC earthquake catalogs to determine the effect of solar flares on global seismic activity. New evidence of earthquake triggering due to the Sun-Earth interaction is demonstrated by a simple comparison of the behavior of the Earth's seismicity before and after the strong solar flare: the global number of earthquakes with magnitudes of 2.5 to 5.5 within 11 days after the solar flare increased by 30 to 100%. The possibility of electric/electromagnetic triggering of earthquakes by space weather disturbances is supported by the results of field and laboratory studies in which earthquakes (both natural and laboratory) were initiated by injection of electrical current into the Earth's crust. For the specific case of artificial electric earthquake triggering, the current density at the depth of the earthquake source is comparable to estimates of the density of telluric currents induced by variations of space weather conditions due to solar flares. Acknowledgment: The work was supported by RFBR grant No. 18-05-00255.
Keywords: solar flare, earthquake activity, earthquake triggering, solar-terrestrial relations
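A minimal sketch of the before/after comparison is shown below: count catalog earthquakes in the 11 days before and after the flare and report the percentage change. The CSV column names follow the USGS catalog export convention (`time`, `mag`); interpreting the count change as triggering is the paper's hypothesis, not established fact.

```python
# Minimal sketch of the 11-day before/after earthquake-count comparison.
# Assumes a USGS-style CSV export with `time` and `mag` columns.
import pandas as pd

FLARE_TIME = pd.Timestamp("2017-09-06T12:02:00Z")  # X9.3 flare peak (UT)
WINDOW = pd.Timedelta(days=11)

catalog = pd.read_csv("usgs_catalog.csv", parse_dates=["time"])
quakes = catalog[(catalog["mag"] >= 2.5) & (catalog["mag"] <= 5.5)]

before = quakes[(quakes["time"] >= FLARE_TIME - WINDOW) &
                (quakes["time"] < FLARE_TIME)]
after = quakes[(quakes["time"] >= FLARE_TIME) &
               (quakes["time"] < FLARE_TIME + WINDOW)]

change = 100 * (len(after) - len(before)) / len(before)
print(f"{len(before)} before, {len(after)} after: {change:+.0f}% change")
```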
Procedia PDF Downloads 143534 Design and Optimization of an Electromagnetic Vibration Energy Converter
Authors: Slim Naifar, Sonia Bradai, Christian Viehweger, Olfa Kanoun
Abstract:
Vibration provides an interesting source of energy since it is available in many indoor and outdoor applications. Nevertheless, for an efficient design of the harvesting system, vibration converters have to satisfy several criteria in terms of robustness, compactness and energy output. In this work, an electromagnetic converter based on the mechanical spring principle is proposed. The designed harvester is formed by a coil oscillating around ten ring magnets on a mechanical spring. The proposed design overcomes one of the main limitations of moving-coil converters by avoiding contact between the coil wires and the mechanical spring, which leads to better robustness. In addition, the whole system can be implemented in the cavity of a screw. Different parameters of the harvester were investigated by the finite element method, including the magnet size, the coil winding number and diameter, and the excitation frequency and amplitude. A prototype was realized and tested. Experiments were performed for accelerations from 0.5 g to 1 g. The experimental setup consists of an electrodynamic shaker as an external artificial vibration source, controlled by a laser sensor that measures the applied displacement and excitation frequency. Together with the laser sensor, a controller unit, and an amplifier, the shaker is operated in a closed loop, which allows the vibration amplitude to be controlled. The resonance frequency of the proposed design is about 24 Hz. Results indicate that the harvester can generate maximum open-circuit peak-to-peak voltages of 612 mV and 1150 mV at resonance for 0.5 g and 1 g acceleration respectively, corresponding to 1.34 mW and 4.75 mW output power. Tuning the resonance frequency to other values is also possible by adding mass to the moving part of the converter or by changing the mechanical spring stiffness.Keywords: energy harvesting, electromagnetic principle, vibration converter, moving coil
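Note: the source listing pairs the two power figures in the opposite order. Because power delivered to a fixed load scales with the square of the voltage, the ratio check below (which assumes only this quadratic scaling) supports pairing 1.34 mW with 612 mV and 4.75 mW with 1150 mV, as done above:

```python
# Consistency check for the reported figures: for a fixed load, output power
# scales with the square of the open-circuit voltage (P proportional to V^2).
# The load resistance is not given in the abstract, so only ratios are checked.
v_low, v_high = 0.612, 1.150      # open-circuit peak-to-peak voltage (V)
p_low, p_high = 1.34e-3, 4.75e-3  # reported output power (W)

print(f"voltage ratio squared: {(v_high / v_low) ** 2:.2f}")  # ~3.53
print(f"power ratio:           {p_high / p_low:.2f}")         # ~3.54, consistent
```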
Procedia PDF Downloads 297533 Intelligent Scaffolding Diagnostic Tutoring Systems to Enhance Students’ Academic Reading Skills
Authors: Chayaporn Kaoropthai, Onjaree Natakuatoong, Nagul Cooharojananone
Abstract:
The first year is usually the most critical year for university students. A considerable number of first-year students worldwide drop out of university every year, and one of the major reasons for dropping out is failing courses. Although they are supposed to have mastered sufficient English proficiency upon completing their high school education, most first-year students are still novices in academic reading. Due to their lack of experience in academic reading, first-year students need significant support from teachers to develop their academic reading skills. Reading strategies training is thus a necessity and plays a crucial role in classroom instruction. However, individual differences among both students and teachers are the main factors behind the failure to respond to each individual student's needs. For this reason, reading strategies training inevitably requires a diagnosis of students' academic reading skill levels before, during, and after learning, in order to respond to their different needs. To further support reading strategies training, scaffolding is proposed to facilitate students in understanding and practicing reading strategies under the teachers' guidance. The use of Intelligent Tutoring Systems (ITSs) as a tool for diagnosing students' reading problems will be very beneficial to both students and their teachers. The ITSs consist of four major modules: the Expert module, the Student module, the Diagnostic module, and the User Interface module. The application of Artificial Intelligence (AI) enables the systems to perform diagnosis consistently and appropriately for each individual student. Thus, it is essential to develop Intelligent Scaffolding Diagnostic Reading Strategies Tutoring Systems to enhance first-year students' academic reading skills. The proposed systems will contribute to resolving classroom reading strategies training problems, developing students' academic reading skills, and facilitating teachers.Keywords: academic reading, intelligent tutoring systems, scaffolding, university students
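A minimal sketch of how the four modules named above could fit together; all class and method names are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of the four-module ITS architecture named in the abstract.
# All class, method, and threshold choices are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ExpertModule:
    """Holds the domain knowledge: reading strategies and what mastery looks like."""
    strategies: dict = field(default_factory=lambda: {
        "skimming": "identify the main idea quickly",
        "scanning": "locate specific information",
    })

@dataclass
class StudentModule:
    """Tracks each learner's estimated mastery of every strategy."""
    mastery: dict = field(default_factory=dict)  # strategy -> score in [0, 1]

class DiagnosticModule:
    """Compares the student model with the expert model to find weak skills."""
    def diagnose(self, expert: ExpertModule, student: StudentModule) -> list:
        threshold = 0.6  # assumed mastery cut-off
        return [s for s in expert.strategies
                if student.mastery.get(s, 0.0) < threshold]

class UserInterfaceModule:
    """Presents scaffolding hints for the diagnosed weaknesses."""
    def show_hints(self, weak_skills: list) -> None:
        for skill in weak_skills:
            print(f"Scaffolding hint: practice {skill}")

# Wiring the modules together for one diagnostic pass
expert, student = ExpertModule(), StudentModule(mastery={"skimming": 0.8})
ui = UserInterfaceModule()
ui.show_hints(DiagnosticModule().diagnose(expert, student))
```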
Procedia PDF Downloads 390532 The Effectiveness of Transcranial Electrical Stimulation on Brain Wave Pattern and Blood Pressure in Patients with Generalized Anxiety Disorder
Authors: Mahtab Baghaei, Seyed Mahmoud Tabatabaei
Abstract:
Aim & Background: Electrical stimulation of transcranial direct current is considered one of the treatment methods for mental disorders. The aim of this study was to evaluate the effectiveness of transcranial electrical stimulation on the delta, theta, alpha, beta and systolic and diastolic blood pressure in patients with generalized anxiety disorder. Materials and Methods: The present study was a double-blind intervention with a pre-test and post-test design on people with generalized anxiety disorder in Tabriz in 1400. In this study, 30 patients with generalized anxiety disorder were selected by purposive sampling method based on the criteria specified in DSM-5 and randomly divided into an experimental group (n = 15) and a control group (n = 15). The experimental group received two sessions of 30 minutes of electrical stimulation of transcranial direct current with an intensity of 2 mA in the area of the lateral dorsal prefrontal cortex, and the control group also received artificial stimulation. Results: The results showed that transcranial electrical stimulation reduces delta and theta waves and increases beta and alpha brain waves in the experimental group. On the other hand, this method also showed a significant decrease in systolic and diastolic blood pressure in these patients (p <0.01). Conclusion: The results show that transcranial electrical stimulation has a statistically significant effect on brain waves and blood pressure, and this non-invasive method can be used as one of the treatment methods in people with generalized anxiety disorder.Keywords: transcranial direct current electrical stimulation, brain waves, systolic blood pressure, diastolic blood pressure
Procedia PDF Downloads 102531 Design and Optimization of a Small Hydraulic Propeller Turbine
Authors: Dario Barsi, Marina Ubaldi, Pietro Zunino, Robert Fink
Abstract:
A design and optimization procedure is proposed and developed to provide the geometry of a high-efficiency compact hydraulic propeller turbine for low heads. For the preliminary design of the machine, classic design criteria are used, based on statistical correlations for the definition of the fundamental geometric parameters and the blade shapes. These relationships build on the fundamental design parameters (i.e., specific speed, flow coefficient, work coefficient) in order to provide a simple yet reliable procedure. Particular attention is paid, from the initial steps, to the correct conformation of the meridional channel and to the correct arrangement of the blade rows. The preliminary geometry thus obtained is used as the starting point for the hydrodynamic optimization procedure, carried out using CFD software coupled with a genetic algorithm that generates and updates a large database of turbine geometries. The optimization process is performed using a commercial solver for the Reynolds-averaged Navier-Stokes (RANS) equations that exploits the axial-symmetric geometry of the machine. The geometries generated within the database are calculated in order to determine the corresponding overall performance. To speed up the optimization, an artificial neural network (ANN) surrogate of the objective function is employed. The procedure was applied to the specific case of a propeller turbine with an innovative modular design, intended for applications characterized by very low heads. The procedure is tested in order to verify its validity and its ability to automatically reach the targeted net head and maximize the total-to-total internal efficiency.Keywords: renewable energy conversion, hydraulic turbines, low head hydraulic energy, optimization design
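A schematic sketch of the GA-plus-surrogate loop described above, with a toy objective standing in for the expensive RANS evaluation; the library choices and all parameter values are assumptions, not the authors' commercial tooling:

```python
# Schematic GA loop with an ANN surrogate pre-screening candidates.
# The objective is a toy placeholder for a RANS evaluation of a geometry.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def cfd_efficiency(x):
    """Placeholder for an expensive CFD evaluation of a candidate geometry."""
    return -np.sum((x - 0.5) ** 2, axis=-1)  # toy peak at x = 0.5

# Seed database of evaluated geometries (each row: normalized blade parameters)
X = rng.random((40, 4))
y = cfd_efficiency(X)

for generation in range(5):
    # Retrain the cheap surrogate on the growing database
    surrogate = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                             random_state=0).fit(X, y)
    # Generate offspring by mutating the current best geometries
    elite = X[np.argsort(y)[-8:]]
    offspring = np.clip(elite + rng.normal(0, 0.05, elite.shape), 0, 1)
    # Pre-screen offspring with the surrogate, CFD-evaluate only the best one
    best = offspring[np.argmax(surrogate.predict(offspring))]
    X = np.vstack([X, best])
    y = np.append(y, cfd_efficiency(best))

print(f"best efficiency found: {y.max():.4f}")
```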
Procedia PDF Downloads 150530 Neural Network Approach For Clustering Host Community: Based on Perceptions Toward Tourism, Their Satisfaction Level and Demographic Attributes in Iran (Lahijan)
Authors: Nasibeh Mohammadpour, Ali Rajabzadeh, Adel Azar, Hamid Zargham Borujeni
Abstract:
Generally, the development of any industry depends on the support of its stakeholders and beneficiaries. Among the most important stakeholders in the tourism industry (which has become one of the most lucrative and employment-generating activities at the international level) are the host communities of tourist destinations, which both affect and are affected by the industry's development. Recognizing the host community and its segments can be important for securing its support for future decisions and policy making. In order to identify these segments, in this study the residents were clustered using tools designed to handle human complexity, able to model and generalize complex systems without requiring initial cluster seeds, unlike classic methods. Neural networks can meet these expectations. The research was planned to design a neural-network-based mathematical model for clustering the host community effectively according to multiple criteria and to identify differences among segments. To achieve this goal, the residents were segmented by demographic characteristics, their attitude towards tourism development, their level of satisfaction, and the type of their support in this field. The applied method is self-organizing maps (SOM), and the results were compared with K-means. As the results show, the SOM method provides much better results in terms of the cophenetic correlation coefficient and the between-cluster variance. Based on these criteria, the host community is divided into five segments with unique and distinctive features, representing the best partition (compared with the other modes) according to a cophenetic correlation coefficient of 0.8769 and a between-cluster variance of 0.1412.Keywords: artificial neural network, clustering, resident, SOM, tourism
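A minimal self-organizing map sketch in plain NumPy, illustrating the kind of clustering described above; the survey data are random placeholders, and the 1-D five-node grid simply mirrors the five reported segments:

```python
# Minimal SOM sketch: random placeholder data, not the study's survey responses.
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((200, 6))   # 200 residents x 6 attitude/demographic features

grid_w, grid_h = 5, 1         # 1-D map with 5 nodes, mirroring the 5 segments
weights = rng.random((grid_w * grid_h, data.shape[1]))
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)])

for t in range(2000):
    lr = 0.5 * (1 - t / 2000)              # decaying learning rate
    sigma = 2.0 * (1 - t / 2000) + 0.1     # decaying neighborhood radius
    x = data[rng.integers(len(data))]
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best matching unit
    dist = np.linalg.norm(coords - coords[bmu], axis=1)
    h = np.exp(-dist ** 2 / (2 * sigma ** 2))             # neighborhood function
    weights += lr * h[:, None] * (x - weights)

segments = np.argmin(np.linalg.norm(data[:, None] - weights, axis=2), axis=1)
print(np.bincount(segments, minlength=grid_w))  # residents per segment
```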
Procedia PDF Downloads 183529 Preparation of Zinc Oxide Nanoparticles and Its Anti-diabetic Effect with Momordica Charantia Plant Extract in Diabetic Mice
Authors: Zahid Hussain, Nayyab Sultan
Abstract:
This study describes the preparation of zinc oxide nanoparticles and their anti-diabetic effect, both individually and in combination with Momordica charantia plant extract. This plant is known as bitter melon, balsam pear, bitter gourd, or karela. Random blood glucose levels in mice were monitored before and after the administration of zinc oxide nanoparticles and plant extract. The powdered forms of the nanoparticles and the selected plant were used as an oral treatment. Diabetes was induced in mice with streptozotocin, a chemical agent commonly used to induce experimental diabetes. For zinc oxide nanoparticles (3 mg/kg) and Momordica charantia plant extract (500 mg/kg), the maximum anti-diabetic effects observed were 70% ± 1.6 and 75% ± 1.3, respectively. For the combination of zinc oxide nanoparticles (3 mg/kg) and Momordica charantia plant extract (500 mg/kg), the maximum anti-diabetic effect observed was 86% ± 2.0. These results were more effective than the standard drugs Amaryl (3 mg/kg), with an effectiveness of 52% ± 2.4, and Glucophage (500 mg/kg), with an effectiveness of 29% ± 2.1. The results indicate that zinc oxide nanoparticles and plant extract in combination are more helpful in treating diabetes than either treatment alone. The combination is considered a natural treatment without the side effects of standard drugs, which can adversely affect health and are mostly detoxified in the liver and kidneys. More experimental work and extensive research are still required to make such treatments applicable in the pharmaceutical industry.Keywords: albino mice, amaryl, anti-diabetic effect, blood glucose level, Camellia sinensis, diabetes mellitus, Momordica charantia plant extract, streptozotocin, zinc oxide nanoparticles
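A small sketch of the percent-effect arithmetic the abstract implies, assuming the anti-diabetic effect is the percent reduction in blood glucose relative to the diabetic baseline; all glucose values below are hypothetical:

```python
# Sketch of the percent-reduction arithmetic; glucose readings are hypothetical.
def antidiabetic_effect(baseline_mgdl: float, treated_mgdl: float) -> float:
    """Percent drop in blood glucose relative to the diabetic baseline."""
    return 100.0 * (baseline_mgdl - treated_mgdl) / baseline_mgdl

baseline = 400.0  # hypothetical streptozotocin-induced glucose level (mg/dL)
print(f"combination: {antidiabetic_effect(baseline, 56.0):.0f}%")   # ~86%
print(f"ZnO alone:   {antidiabetic_effect(baseline, 120.0):.0f}%")  # ~70%
```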
Procedia PDF Downloads 112