Search results for: artificial islands.
1437 A Hybrid Simulation Approach to Evaluate Cooling Energy Consumption for Public Housings of Subtropics
Authors: Kwok W. Mui, Ling T. Wong, Chi T. Cheung
Abstract:
Cooling energy consumption in the residential sector, unlike that of shopping malls, offices or commercial buildings, is significantly subject to occupant decisions, yet in-depth investigations of this dependence remain limited. Evidence shows that energy consumption is associated with housing type. Surveys have been conducted in existing Hong Kong public housing to understand housing characteristics, apartment electricity demands, occupants' thermal expectations, and air-conditioning usage patterns for further cooling energy-saving assessments. The aim of this study is to develop a hybrid cooling energy prediction model, which integrates EnergyPlus (EP) and an artificial neural network (ANN), to estimate cooling energy consumption in the public residential sector. Sensitivity tests are conducted to quantify the energy impacts of changing building parameters: external wall and window material selection, window size reduction, shading extension, building orientation and apartment size control. Assessments are performed to investigate the relationships between cooling demands and occupant behavior in terms of thermal environment criteria and air-conditioning operation patterns. The results are summarized into a cooling energy calculator for lay use, to raise awareness of cooling energy saving in occupants' own living environments. The findings can serve as a guiding framework for future cooling energy evaluation in residential buildings, with particular focus on occupant air-conditioning operation behavior and criteria for energy-saving incentives.
Keywords: artificial neural network, cooling energy, occupant behavior, residential buildings, thermal environment
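A minimal sketch of the hybrid idea follows (not the authors' model): an artificial neural network is trained as a surrogate on EnergyPlus-style simulation outputs, then queried for a quick sensitivity test on one building parameter. All feature names, ranges, and coefficients below are invented assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical inputs: window-to-wall ratio, shading depth (m), set-point (C),
# AC hours/day, apartment size (m^2). The target stands in for EnergyPlus output.
X = rng.uniform([0.2, 0.0, 22.0, 2.0, 30.0], [0.8, 1.5, 28.0, 16.0, 70.0], (500, 5))
y = (120 * X[:, 0] - 40 * X[:, 1] - 15 * (X[:, 2] - 22) + 25 * X[:, 3]
     + 2 * X[:, 4] + rng.normal(0, 10, 500))        # kWh/month, synthetic

scaler = StandardScaler().fit(X)
ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
ann.fit(scaler.transform(X), y)

# Sensitivity test: vary window-to-wall ratio, hold other parameters at their means.
base = X.mean(axis=0)
for wwr in (0.3, 0.5, 0.7):
    q = base.copy(); q[0] = wwr
    pred = ann.predict(scaler.transform([q]))[0]
    print(f"WWR={wwr:.1f} -> predicted cooling energy {pred:.0f} kWh/month")
```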
Procedia PDF Downloads 168
1436 Don't Just Guess and Slip: Estimating Bayesian Knowledge Tracing Parameters When Observations Are Scant
Authors: Michael Smalenberger
Abstract:
Intelligent tutoring systems (ITS) are computer-based platforms which can incorporate artificial intelligence to provide step-by-step guidance as students practice problem-solving skills. ITS can replicate and even exceed some benefits of one-on-one tutoring, foster transactivity in collaborative environments, and lead to substantial learning gains when used to supplement the instruction of a teacher or when used as the sole method of instruction. A common facet of many ITS is their use of Bayesian Knowledge Tracing (BKT) to estimate the parameters necessary for the implementation of the artificial intelligence component and for the probability of mastery of a knowledge component relevant to the ITS. While various techniques exist to estimate these parameters and the probability of mastery, none directly and reliably asks the user to self-assess them. In this study, 111 undergraduate students used an ITS in a college-level introductory statistics course for which detailed transaction-level observations were recorded, and users were also routinely asked direct questions that would lead to such a self-assessment. Comparisons were made between these self-assessed values and those obtained using commonly used estimation techniques. Our findings show that such self-assessments are particularly relevant at the early stages of ITS usage, while transaction-level data are scant. Once a user's transaction-level data become available after sufficient ITS usage, these can replace the self-assessments in order to eliminate the identifiability problem in BKT. We discuss how these findings bear on the number of exercises necessary to reach mastery of a knowledge component, the associated implications for learning curves, and their relevance to instruction time.
Keywords: Bayesian Knowledge Tracing, Intelligent Tutoring System, in vivo study, parameter estimation
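The BKT update itself is compact enough to show directly. Below is a minimal sketch of the standard guess/slip posterior update followed by the learning transition; the parameter values are illustrative, not those estimated in the study, and the prior could instead be seeded from a student's self-assessment as the abstract suggests.

```python
def bkt_update(p_mastery, correct, guess=0.2, slip=0.1, learn=0.15):
    """Posterior P(mastery) after one observed response, then the learn transition."""
    if correct:
        evidence = p_mastery * (1 - slip)
        posterior = evidence / (evidence + (1 - p_mastery) * guess)
    else:
        evidence = p_mastery * slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - guess))
    return posterior + (1 - posterior) * learn

p = 0.3   # prior P(L0); could be seeded from a student's self-assessment
for obs in [1, 1, 0, 1, 1]:
    p = bkt_update(p, obs)
    print(f"observed {('correct' if obs else 'wrong'):7s} -> P(mastery) = {p:.3f}")
```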
Procedia PDF Downloads 172
1435 Centrality and Patent Impact: Coupled Network Analysis of Artificial Intelligence Patents Based on Co-Cited Scientific Papers
Authors: Xingyu Gao, Qiang Wu, Yuanyuan Liu, Yue Yang
Abstract:
In the era of the knowledge economy, the relationship between scientific knowledge and patents has garnered significant attention. Understanding the intricate interplay between the foundations of science and technological innovation has emerged as a pivotal challenge for both researchers and policymakers. This study establishes a coupled network of artificial intelligence patents based on co-cited scientific papers. Leveraging centrality metrics from network analysis offers a fresh perspective on understanding how information flow and knowledge sharing within the network influence patent impact. The study initially obtained patent numbers for 446,890 granted US AI patents from the United States Patent and Trademark Office's artificial intelligence patent database for the years 2002-2020. Subsequently, specific information regarding these patents was acquired using the Lens patent retrieval platform. Additionally, a search and deduplication process was performed on scientific non-patent references (SNPRs) using the Web of Science database, resulting in the selection of 184,603 patents that cited 37,467 unique SNPRs. On this basis, this study constructs a coupled network comprising 59,379 artificial intelligence patents by utilizing scientific papers co-cited in patent backward citations. In this network, nodes represent patents; if patents reference the same scientific papers, connections are established between them, serving as edges within the network. Nodes and edges collectively constitute the patent coupling network. Structural characteristics such as node degree centrality, betweenness centrality, and closeness centrality are employed to assess the scientific connections between patents, while citation count is utilized as a quantitative metric for patent influence. Finally, a negative binomial model is employed to test the nonlinear relationship between these network structural features and patent influence. The research findings indicate that network structural features such as node degree centrality, betweenness centrality, and closeness centrality exhibit inverted U-shaped relationships with patent influence. Specifically, as these centrality metrics increase, patent influence initially shows an upward trend, but once these features reach a certain threshold, patent influence starts to decline. This discovery suggests that moderate network centrality is beneficial for enhancing patent influence, while excessively high centrality may have a detrimental effect on patent influence. This finding offers crucial insights for policymakers, emphasizing the importance of encouraging moderate knowledge flow and sharing to promote innovation when formulating technology policies. It suggests that in certain situations, data sharing and integration can contribute to innovation. Consequently, policymakers can take measures to promote data-sharing policies, such as open data initiatives, to facilitate the flow of knowledge and the generation of innovation. Additionally, governments and relevant agencies can achieve broader knowledge dissemination by supporting collaborative research projects, adjusting intellectual property policies to enhance flexibility, or nurturing technology entrepreneurship ecosystems.
Keywords: centrality, patent coupling network, patent influence, social network analysis
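The pipeline (network construction, centrality computation, and a negative binomial regression with squared terms to capture the inverted U) can be sketched on toy data. The graph and citation counts below are synthetic stand-ins, not the patent dataset, and the library choices (networkx, statsmodels) are assumptions.

```python
import networkx as nx
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
G = nx.erdos_renyi_graph(n=300, p=0.05, seed=1)   # stand-in for the patent coupling network

deg = nx.degree_centrality(G)
btw = nx.betweenness_centrality(G)
clo = nx.closeness_centrality(G)

X = np.column_stack([[deg[v] for v in G], [btw[v] for v in G], [clo[v] for v in G]])
X = np.column_stack([X, X**2])                    # squared terms to test the inverted U
citations = rng.poisson(5, G.number_of_nodes())   # stand-in for patent citation counts

model = sm.GLM(citations, sm.add_constant(X),
               family=sm.families.NegativeBinomial()).fit()
print(model.summary())
```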
Procedia PDF Downloads 54
1434 Evaluation of National Research Motivation Evolution with Improved Social Influence Network Theory Model: A Case Study of Artificial Intelligence
Authors: Yating Yang, Xue Zhang, Chengli Zhao
Abstract:
In the increasingly interconnected global environment brought about by globalization, it is crucial for countries to grasp in a timely manner the development motivations in relevant research fields of other countries and to seize development opportunities. Motivation, as the intrinsic driving force behind actions, is abstract in nature, making it difficult to measure and evaluate directly. Drawing on the ideas of social influence network theory, the research motivation of a country can be understood as the driving force behind the development of its science and technology sector, which is simultaneously influenced by both the country itself and other countries/regions. In response to this issue, this paper improves upon Friedkin's social influence network theory and applies it to motivation description, constructing a dynamic alliance network and a hostile network centered around the United States and China, as well as a sensitivity matrix, to remotely assess the changes in national research motivations under the influence of international relations. Taking artificial intelligence as a case study, the research reveals that the motivations of most countries/regions are declining, gradually shifting from a neutral attitude to a negative one. The motivation of the United States is hardly influenced by other countries/regions and remains at a high level, while the motivation of China has been consistently increasing in recent years. Comparison of the results with real data shows that this model can reflect, to some extent, the trends in national motivations.
Keywords: influence network theory, remote assessment, relation matrix, dynamic sensitivity matrix
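Friedkin-style influence dynamics reduce to a simple fixed-point iteration, y(t+1) = A W y(t) + (I - A) y(0). The sketch below runs this classic update on a three-actor toy system; the influence weights, susceptibilities, and initial motivations are invented for illustration, not the paper's estimates.

```python
import numpy as np

W = np.array([[0.8, 0.1, 0.1],     # row-stochastic influence weights
              [0.3, 0.5, 0.2],
              [0.2, 0.2, 0.6]])
A = np.diag([0.1, 0.5, 0.4])       # susceptibility to interpersonal influence
y0 = np.array([0.9, 0.1, 0.3])     # initial motivations (e.g., one actor high)

y = y0.copy()
for t in range(100):               # iterate y(t+1) = A W y(t) + (I - A) y(0)
    y = A @ W @ y + (np.eye(3) - A) @ y0
print("equilibrium motivations:", np.round(y, 3))
```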
Procedia PDF Downloads 68
1433 Artificial Insemination of Bali Cattle with Frozen-Thawed Sexed Sperm Under District AI Station Conditions in Lombok: A Preliminary Trial
Authors: Chairussyuhur Arman, Totti Tjiptosumirat, Muhammad Gunawan, Mastur, Joko Priyono, Baiq Tri Ratna Erawati
Abstract:
The present study was undertaken to synchronize oestrus in Bali cattle and artificially inseminate them with frozen-thawed sexed semen. The experiment was carried out at a District AI Station. Four pluriparous cows and four nulliparous heifers were used in this study; they were housed in free-stall barns. The heifers were fed corn silage supplemented with UMMB, while the cows were fed green fodder. All animals were given two i.m. injections of 500 mg cloprostenolum (PGF2α), 11 days apart, to synchronize the occurrence of estrus. Estrus was detected by visual observation twice a day and confirmed when cattle accepted mounting from other females. All animals were inseminated twice with Bali sexed semen at 72 and 76 h after observed oestrus. The calving rate for both pluriparous cows and nulliparous heifers was 75 percent. One cow and one heifer did not produce calves because of embryonic loss. Regardless of the sex of the calves, the mean birth weight of calves from cows was higher than that from heifers (18.50 ± 2.60 kg vs 13.83 ± 5.20 kg). One female calf from a heifer, with a low birth weight (8.0 kg), died one day after birth. In the pluriparous group, two cows delivered male calves and the other delivered a female calf. Conversely, in the nulliparous group, two heifers delivered female calves and the other a male calf. It is concluded that, under the conditions of this preliminary trial, the sex ratio across the pluriparous and nulliparous groups was 50:50 (male:female).
Keywords: artificial insemination, bali cattle, calves, sexed sperm
Procedia PDF Downloads 311
1432 Lung HRCT Pattern Classification for Cystic Fibrosis Using a Convolutional Neural Network
Authors: Parisa Mansour
Abstract:
Cystic fibrosis (CF) is one of the most common autosomal recessive diseases among whites. It mostly affects the lungs, causing infections and inflammation that account for 90% of deaths in CF patients. Because of the high variability in clinical presentation and organ involvement, investigating treatment responses and evaluating lung changes over time are critical to preventing CF progression. High-resolution computed tomography (HRCT) greatly facilitates the assessment of lung disease progression in CF patients. Recently, artificial intelligence has been used to analyze chest CT scans of CF patients. In this paper, we propose a convolutional neural network (CNN) approach to classify CF lung patterns in HRCT images. The proposed network consists of two convolutional layers with 3 × 3 kernels, each followed by max pooling, and then two dense layers with 1024 and 10 neurons, respectively. A softmax layer produces the predicted output probability distribution over classes; it has three outputs corresponding to the categories normal (healthy), bronchitis and inflammation. To train and evaluate the network, we constructed a patch-based dataset extracted from more than 1100 lung HRCT slices obtained from 45 CF patients. Comparative evaluation showed the effectiveness of the proposed CNN compared to its close peers. Classification accuracy, average sensitivity and specificity of 93.64%, 93.47% and 96.61% were achieved, indicating the potential of CNNs in analyzing lung CF patterns and monitoring lung health. In addition, the visual features extracted by our proposed method can be useful for the automatic measurement and, ultimately, evaluation of the severity of CF patterns in lung HRCT images.
Keywords: HRCT, CF, cystic fibrosis, chest CT, artificial intelligence
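The described architecture is small enough to state as code. Here is a hypothetical Keras rendering of the abstract's description (two 3 × 3 convolutional layers with max pooling, dense layers of 1024 and 10 units, and a 3-class softmax); the filter counts and patch size are assumptions, since the abstract does not give them.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),            # HRCT patch size assumed
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(1024, activation="relu"),
    layers.Dense(10, activation="relu"),
    layers.Dense(3, activation="softmax"),      # normal / bronchitis / inflammation
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```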
Procedia PDF Downloads 65
1431 Weakly Solving Kalah Game Using Artificial Intelligence and Game Theory
Authors: Hiba El Assibi
Abstract:
This study aims to weakly solve Kalah, a two-player board game, by developing a start-to-finish winning strategy using an optimized Minimax algorithm with Alpha-Beta Pruning. In weakly solving Kalah, our focus is on creating an optimal strategy from the game's beginning rather than analyzing every possible position. The project explores additional enhancements like symmetry checking and code optimizations to speed up the decision-making process. This approach is expected to give insights into efficient strategy formulation in board games and potentially help create games with a fair distribution of outcomes. Furthermore, this research provides a unique perspective on human versus artificial intelligence decision-making in strategic games. By comparing the AI-generated optimal moves with human choices, we can explore how seemingly advantageous moves can, in the long run, be harmful, thereby offering a deeper understanding of strategic thinking and foresight in games. Moreover, this paper discusses the evaluation of our strategy against existing methods, providing insights on performance and computational efficiency. We also discuss the scalability of our approach, considering different board sizes (numbers of pits and stones) and rules (different variations) and studying how these affect performance and complexity. The findings have potential implications for the development of AI applications in strategic game planning, enhance our understanding of human cognitive processes in game settings, and offer insights into creating balanced and engaging game experiences.
Keywords: minimax, alpha beta pruning, transposition tables, weakly solving, game theory
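The core search named in the keywords (minimax with alpha-beta pruning plus a transposition table) can be sketched generically. The following is a minimal, game-agnostic version; the Kalah-specific move generator, evaluation function, and extra-turn rule are abstracted behind an assumed `game` interface.

```python
def alphabeta(state, depth, alpha, beta, maximizing, game, table):
    """Depth-limited minimax with alpha-beta pruning and a transposition table.

    `game` is an assumed interface providing is_terminal(state),
    evaluate(state), moves(state, player) and apply(state, move, player);
    `state` must be hashable (e.g., a tuple of pit counts for Kalah).
    """
    key = (state, depth, maximizing)
    if key in table:                  # transposition table: reuse repeated positions
        return table[key]
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)
    if maximizing:
        value = float("-inf")
        for move in game.moves(state, player=0):
            child = game.apply(state, move, 0)
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, game, table))
            alpha = max(alpha, value)
            if alpha >= beta:         # beta cutoff: the minimizer avoids this branch
                break
    else:
        value = float("inf")
        for move in game.moves(state, player=1):
            child = game.apply(state, move, 1)
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True, game, table))
            beta = min(beta, value)
            if beta <= alpha:         # alpha cutoff
                break
    table[key] = value
    return value
```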
Procedia PDF Downloads 55
1430 Atom Probe Study of Early Stage of Precipitation on Binary Al-Li, Al-Cu Alloys and Ternary Al-Li-Cu Alloys
Authors: Muna Khushaim
Abstract:
Aluminum-based alloys play a key role in modern engineering, especially in the aerospace industry. Introduction of solute atoms such as Li and Cu is the main approach to improving strength in age-hardenable Al alloys via the precipitation hardening phenomenon. Knowledge of the decomposition of the microstructure during the precipitation reaction is particularly important for future technical developments. The objective of this study is to investigate the nanoscale chemical composition of Al-Cu, Al-Li and Al-Li-Cu alloys during the early stage of the precipitation sequence and to describe whether compositional differences correlate with variations in the observed precipitation kinetics. A comparison between the random binomial frequency distribution and the experimental frequency distribution of concentrations in atom probe tomography data was used to investigate the early stage of decomposition in the different binary and ternary alloys, which underwent different heat treatments. The results show that an Al-1.7 at.% Cu alloy requires a long ageing time of approximately 8 h at 160 °C to allow the diffusion of Cu atoms into the Al matrix. For the Al-8.2 at.% Li alloy, a combination of natural ageing (48 h at room temperature) and short artificial ageing (5 min at 160 °C) increases the number density of Li clusters and hence the number of precipitated δ' particles. Applying this combination of natural ageing and short artificial ageing to the ternary Al-4 at.% Li-1.7 at.% Cu alloy induces the formation of a Cu-rich phase. Increasing the Li content in the ternary alloy up to 8 at.% and increasing the ageing time to 30 min resulted in precipitation processes ending with δ' particles. Thus, the results contribute to the understanding of Al-alloy design.
Keywords: aluminum alloy, atom probe tomography, early stage, decomposition
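The statistical test at the heart of this method compares the observed solute concentration frequencies per block of atoms with the random binomial expectation; a significant departure signals clustering. A small synthetic sketch (not the study's data; block size and compositions are assumptions) follows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
N = 100                      # atoms per sampling block (assumed)
c0 = 0.017                   # bulk solute fraction (e.g., 1.7 at.% Cu)

# A clustered (decomposed) solution: half the blocks enriched, half depleted.
observed = np.concatenate([rng.binomial(N, 0.5 * c0, 500),
                           rng.binomial(N, 1.5 * c0, 500)])

k = np.arange(0, 11)
obs_freq = np.array([(observed == i).sum() for i in k], dtype=float)
exp_freq = stats.binom.pmf(k, N, c0) * observed.size

# Merge the tail into the last bin, then run a chi-square goodness-of-fit test.
obs_freq[-1] += (observed > 10).sum()
exp_freq[-1] += (1 - stats.binom.cdf(10, N, c0)) * observed.size
chi2 = ((obs_freq - exp_freq) ** 2 / exp_freq).sum()
p = stats.chi2.sf(chi2, df=len(k) - 1)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}  (small p => departure from randomness)")
```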
Procedia PDF Downloads 343
1429 Virtual Metering and Prediction of Heating, Ventilation, and Air Conditioning Systems Energy Consumption by Using Artificial Intelligence
Authors: Pooria Norouzi, Nicholas Tsang, Adam van der Goes, Joseph Yu, Douglas Zheng, Sirine Maleej
Abstract:
In this study, virtual meters are designed and used for energy balance measurements of an air handling unit (AHU). The method aims to replace traditional physical sensors in heating, ventilation, and air conditioning (HVAC) systems with simulated virtual meters. Because such systems are hard to manage and monitor, many HVAC systems show a high level of inefficiency and energy wastage. Virtual meters are implemented and applied in an actual HVAC system, and the results confirm the practicality of mathematical sensors as an alternative means of energy measurement. Most residential buildings and offices are not equipped with advanced sensors, and adding, exploiting, and monitoring sensors and measurement devices in existing systems can cost thousands of dollars. The first purpose of this study is to provide an energy consumption rate based on available sensors, without any physical energy meters, demonstrating that virtual meters can serve as reliable measurement devices in HVAC systems. To demonstrate this concept, mathematical models are created for AHU-07, located in building NE01 of the British Columbia Institute of Technology (BCIT) Burnaby campus. The models are created and integrated with the system's historical data and physical spot measurements, and the actual measurements are investigated to establish the models' accuracy. Based on preliminary analysis, the resulting mathematical models successfully reproduce energy consumption patterns, and it is concluded with confidence that the results of the virtual meter will be close to those that physical meters could achieve. In the second part of this study, the use of virtual meters is further assisted by artificial intelligence (AI) in building HVAC systems to improve energy management and efficiency. Through a data mining approach, virtual meters' data are recorded as historical data, and HVAC system energy consumption prediction is implemented in order to harness energy savings and manage the demand and supply chain effectively. Energy prediction can lead to energy-saving strategies and open a window for predictive control to reach lower energy consumption. To address these challenges, energy prediction could optimize the HVAC system and automate energy consumption management to capture savings. This study also investigates the possibility of AI solutions for autonomous HVAC efficiency that will allow quick and efficient responses to energy consumption and cost spikes in the energy market.
Keywords: virtual meters, HVAC, artificial intelligence, energy consumption prediction
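The idea of a virtual meter can be made concrete with an air-side energy balance: cooling power is inferred from the airflow and temperature sensors an AHU typically already has. The sketch below is a minimal, hypothetical example; the sensor readings and the sensible-only balance are assumptions, not the study's actual model.

```python
RHO_AIR = 1.2    # air density, kg/m^3
CP_AIR = 1.006   # specific heat of air, kJ/(kg*K)

def cooling_power_kw(airflow_m3s, t_return_c, t_supply_c):
    """Sensible cooling power (kW) from an air-side energy balance."""
    m_dot = RHO_AIR * airflow_m3s
    return m_dot * CP_AIR * (t_return_c - t_supply_c)   # kJ/s == kW

# Hourly trend data as (airflow m^3/s, return temp C, supply temp C); invented values.
trend = [(8.0, 24.5, 13.0), (8.2, 25.1, 13.2), (7.9, 24.0, 12.8)]
energy_kwh = sum(cooling_power_kw(*row) for row in trend) * 1.0  # 1 h per sample
print(f"virtual meter: {energy_kwh:.0f} kWh of sensible cooling")
```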
Procedia PDF Downloads 105
1428 Voting Representation in Social Networks Using Rough Set Techniques
Authors: Yasser F. Hassan
Abstract:
Social networking involves the use of an online platform or website that enables people to communicate, usually for a social purpose, through a variety of services, most of which are web-based and offer opportunities for interaction over the internet, e.g. via e-mail and 'instant messaging'. This work analyzes the voting behavior and ratings of judges on popular comments in social networks. While most of the party literature omits the electorate, this paper presents a model where elites and parties are emergent consequences of the behavior and preferences of voters. Research in artificial intelligence and psychology has provided powerful illustrations of the way in which the emergence of intelligent behavior depends on the development of representational structure. As opposed to the classical voting system (one person, one decision, one vote), a new voting system is designed in which agents with opposing preferences are endowed with a given number of votes to distribute freely among a set of issues. The paper uses ideas from machine learning, artificial intelligence and soft computing to provide a model of the development of voting system responses in a simulated agent. The modeled development process involves (simulated) processes of evolution, learning and representation development. The main value of the model is that it illustrates how simple learning processes may lead to the formation of structure. We employ agent-based computer simulation to demonstrate the formation and interaction of coalitions that arise from individual voter preferences. We are interested in coordinating the local behavior of individual agents to produce an appropriate system-level behavior.
Keywords: voting system, rough sets, multi-agent, social networks, emergence, power indices
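Rough sets enter by bracketing a target set of voters between lower and upper approximations over indiscernibility classes. A small sketch with invented voter records (the attributes and decision are hypothetical) follows.

```python
from collections import defaultdict

records = {                      # voter -> (age_band, region)
    "v1": ("young", "north"), "v2": ("young", "north"),
    "v3": ("old", "south"),   "v4": ("old", "south"), "v5": ("young", "south"),
}
supports = {"v1", "v2", "v3"}    # target concept: voters supporting the issue

classes = defaultdict(set)       # equivalence classes of indiscernible voters
for voter, attrs in records.items():
    classes[attrs].add(voter)

lower = set().union(*(c for c in classes.values() if c <= supports))
upper = set().union(*(c for c in classes.values() if c & supports))
print("lower approximation (certainly support):", sorted(lower))
print("upper approximation (possibly support): ", sorted(upper))
print("boundary region:", sorted(upper - lower))
```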
Procedia PDF Downloads 393
1427 Tourism Area Development Optimization Based on Solar-Generated Renewable Energy Technology at Karimunjawa, Central Java Province, Indonesia
Authors: Yanuar Tri Wahyu Saputra, Ramadhani Pamapta Putra
Abstract:
Karimunjawa is one of the Indonesian islands lacking an adequate electricity supply. Despite this, Karimunjawa is an important tourism destination in Indonesia's Central Java Province. A solar power plant is a potential technology to be applied in Karimunjawa in order to meet the island's electricity needs and to improve daily life and the quality of tourism for visitors and the local population. This optimization modeling of Karimunjawa uses the HOMER software package. The data used include wind speed data for Karimunjawa from BMKG (the Indonesian Agency for Meteorology, Climatology and Geophysics), annual weather data for Karimunjawa from NASA, and electricity demand assumptions based on the number of houses and business infrastructures in Karimunjawa. The modeling compares three system categories to determine which offers the highest financial benefit, i.e., the lowest total Net Present Cost (NPC). The first category uses only PV with 8000 kW of electrical power, with an NPC of $6,830,701. The second category uses a hybrid system that combines 1000 kW of PV with a 100 kW generator, resulting in a total NPC of $6,865,590. The last category uses only a generator with 750 kW of electrical power, resulting in a total NPC of $16,368,197, the highest among the three categories. Based on this analysis, we conclude that the optimal way to fulfill the electricity needs of Karimunjawa is to use 8000 kW of PV, which also has lower maintenance costs.
Keywords: Karimunjawa, renewable energy, solar power plant, HOMER
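NPC, the figure of merit used above, discounts all lifetime costs to present value. A minimal sketch of the comparison follows; the discount rate, project lifetime, and the capital/operating splits are invented assumptions chosen only to land roughly near the reported totals, since the abstract reports NPC values alone.

```python
def npc(capital, annual_cost, rate=0.08, years=25):
    """Net Present Cost = capital + discounted annual O&M/fuel costs."""
    return capital + sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

systems = {
    "PV only (8000 kW)":       npc(capital=5_500_000, annual_cost=125_000),
    "Hybrid PV + generator":   npc(capital=4_800_000, annual_cost=195_000),
    "Generator only (750 kW)": npc(capital=900_000,   annual_cost=1_450_000),
}
for name, cost in sorted(systems.items(), key=lambda kv: kv[1]):
    print(f"{name:26s} NPC = ${cost:,.0f}")
```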
Procedia PDF Downloads 467
1426 A Deep Learning Approach to Calculate Cardiothoracic Ratio From Chest Radiographs
Authors: Pranav Ajmera, Amit Kharat, Tanveer Gupte, Richa Pant, Viraj Kulkarni, Vinay Duddalwar, Purnachandra Lamghare
Abstract:
The cardiothoracic ratio (CTR) is the ratio of the diameter of the heart to the diameter of the thorax. An abnormal CTR, that is, a value greater than 0.55, is often an indicator of an underlying pathological condition. The accurate prediction of an abnormal CTR from chest X-rays (CXRs) aids in the early diagnosis of clinical conditions. We propose a deep learning-based model for automatic CTR calculation that can assist the radiologist with the diagnosis of cardiomegaly and optimize the radiology workflow. The study population included 1012 posteroanterior (PA) CXRs from a single institution. The Attention U-Net deep learning (DL) architecture was used for the automatic calculation of CTR. A CTR of 0.55 was used as the cut-off to categorize the condition as cardiomegaly present or absent. An observer performance test was conducted to assess the radiologist's performance in diagnosing cardiomegaly with and without artificial intelligence (AI) assistance. The Attention U-Net model was highly specific in calculating the CTR. The model exhibited a sensitivity of 0.80 [95% CI: 0.75, 0.85], precision of 0.99 [95% CI: 0.98, 1], and an F1 score of 0.88 [95% CI: 0.85, 0.91]. During the analysis, we observed that 51 out of 1012 samples were misclassified by the model when compared to annotations made by the expert radiologist. We further observed that the sensitivity of the reviewing radiologist in identifying cardiomegaly increased from 40.50% to 88.4% when aided by the AI-generated CTR. Our segmentation-based AI model demonstrated high specificity and sensitivity for CTR calculation. The performance of the radiologist on the observer performance test improved significantly with AI assistance. A DL-based segmentation model for rapid quantification of CTR therefore has significant potential for use in clinical workflows.
Keywords: cardiomegaly, deep learning, chest radiograph, artificial intelligence, cardiothoracic ratio
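Once heart and thorax masks are available from the segmentation network, CTR reduces to a width ratio. A minimal sketch follows, with synthetic rectangular masks standing in for the U-Net output; the mask shapes and sizes are invented.

```python
import numpy as np

def max_width(mask):
    """Overall horizontal extent (in pixels) of a binary mask."""
    cols = np.where(mask.any(axis=0))[0]
    return 0 if cols.size == 0 else cols.max() - cols.min() + 1

H, W = 512, 512
thorax = np.zeros((H, W), bool); thorax[100:450, 60:460] = True
heart = np.zeros((H, W), bool);  heart[250:400, 180:420] = True

ctr = max_width(heart) / max_width(thorax)
print(f"CTR = {ctr:.2f} ->", "cardiomegaly" if ctr > 0.55 else "normal")
```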
Procedia PDF Downloads 98
1425 Classifying Students for E-Learning in Information Technology Course Using ANN
Authors: Sirilak Areerachakul, Nat Ployong, Supayothin Na Songkla
Abstract:
The objective of this research is to select the most accurate model, using an artificial neural network technique, to identify potential students who enroll in the IT course through e-learning at Suan Sunandha Rajabhat University. It is designed to help students select appropriate courses by themselves. The results showed that the most accurate model used 100-fold cross-validation, achieving 73.58% accuracy.
Keywords: artificial neural network, classification, students, e-learning
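The model-selection step (scoring a neural network classifier with k-fold cross-validation) can be sketched as follows; the features, labels, and network size are invented stand-ins, with only k = 100 taken from the abstract.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 6))                 # stand-in student features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 400) > 0).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
scores = cross_val_score(clf, X, y, cv=100)   # 100-fold cross-validation
print(f"mean accuracy over 100 folds: {scores.mean():.2%}")
```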
Procedia PDF Downloads 426
1424 Incentive Policies to Promote Green Infrastructure in Urban Jordan
Authors: Zayed Freah Zeadat
Abstract:
The wellbeing of urban dwellers is strongly associated with the quality and quantity of green infrastructure. Nevertheless, urban green infrastructure is still lagging in many Arab cities, and Jordan is no exception. The capital city of Jordan, Amman, is becoming more densely urbanized, with limited green spaces. Unplanned urban growth in Amman has caused several environmental problems, such as urban heat islands, air pollution, and a lack of green spaces. This study aims to investigate the most suitable drivers for leveraging the implementation of urban green infrastructure in Jordan through qualitative and quantitative analysis. The qualitative part includes an extensive literature review of the most common drivers used internationally to promote the implementation of urban green infrastructure. The quantitative part employs a questionnaire survey to rank the suitability of each driver. Consultants, contractors, and policymakers were invited to complete the questionnaire according to their judgments and opinions. The Relative Importance Index was used to calculate the weighted average of all drivers, and the Kruskal-Wallis test was used to check the degree of agreement among groups. The study finds that participants agreed that indirect financial incentives (i.e., tax reductions, reductions in stormwater utility fees, reductions of interest rates, density bonuses, etc.) are the most effective incentive policy, whilst granting sustainability certificates is the least effective driver for ensuring the widespread adoption of UGI elements in Jordan.
Keywords: urban green infrastructure, relative importance index, sustainable urban development, urban Jordan
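The Relative Importance Index used for the ranking is commonly computed as RII = ΣW / (A × N), where W are the respondents' Likert ratings, A is the highest possible rating, and N is the number of respondents. A small sketch with invented ratings:

```python
def rii(ratings, a_max=5):
    """Relative Importance Index: sum of ratings over (max rating * respondents)."""
    return sum(ratings) / (a_max * len(ratings))

drivers = {
    "indirect financial incentives": [5, 4, 5, 4, 5, 4, 5],
    "direct grants":                 [4, 3, 4, 4, 3, 4, 4],
    "sustainability certificate":    [2, 3, 2, 2, 3, 2, 3],
}
for name, scores in sorted(drivers.items(), key=lambda kv: -rii(kv[1])):
    print(f"{name:32s} RII = {rii(scores):.3f}")
```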
Procedia PDF Downloads 154
1423 Improvement of Microscopic Detection of Acid-Fast Bacilli for Tuberculosis by Artificial Intelligence-Assisted Microscopic Platform and Medical Image Recognition System
Authors: Hsiao-Chuan Huang, King-Lung Kuo, Mei-Hsin Lo, Hsiao-Yun Chou, Yusen Lin
Abstract:
The most robust and economical method for the laboratory diagnosis of TB is to identify mycobacterial bacilli (AFB) under acid-fast staining, despite its disadvantages of low sensitivity and labor intensity. Although digital pathology has become popular in medicine, an automated microscopic system for microbiology is still not available. A new AI-assisted automated microscopic system, consisting of a microscopic scanner and a recognition program powered by big data and deep learning, may significantly increase the sensitivity of TB smear microscopy. Thus, the objective was to evaluate such an automatic system for the identification of AFB. A total of 5,930 smears were enrolled for this study. An intelligent microscope system (TB-Scan, Wellgen Medical, Taiwan) was used for microscopic image scanning and AFB detection. 272 AFB smears were used for transfer learning to increase the accuracy. Referee medical technicians served as the gold standard in cases of result discrepancy. Results showed that, on a total of 1,726 AFB smears, the automated system's accuracy, sensitivity and specificity were 95.6% (1,650/1,726), 87.7% (57/65), and 95.9% (1,593/1,661), respectively. Compared to culture, the sensitivity of human technicians was only 33.8% (38/142), whereas the automated system achieved 74.6% (106/142), which is significantly higher; this is the first controlled trial of such an automated microscope system for TB smear testing. This automated system could achieve higher TB smear sensitivity and laboratory efficiency and may complement molecular methods (e.g., GeneXpert) to reduce the total cost of TB control. Furthermore, such an automated system is capable of remote access via the internet and can be deployed in areas with limited medical resources.
Keywords: TB smears, automated microscope, artificial intelligence, medical imaging
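The reported figures follow directly from the published counts; as a check on the definitions, they can be recomputed:

```python
# Counts taken from the abstract: 65 positive and 1,661 negative smears.
tp, fn = 57, 65 - 57          # positives detected / missed
tn, fp = 1593, 1661 - 1593    # negatives correct / false alarms

accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"accuracy {accuracy:.1%}, sensitivity {sensitivity:.1%}, "
      f"specificity {specificity:.1%}")   # ~95.6%, 87.7%, 95.9% as reported
```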
Procedia PDF Downloads 229
1422 A Boundary-Fitted Nested Grid Model for Modeling Tsunami Propagation of 2004 Indonesian Tsunami along Southern Thailand
Authors: Fazlul Karim, Esa Al-Islam
Abstract:
Many problems in oceanography and the environmental sciences require the solution of the shallow water equations on physical domains having curvilinear coastlines and abrupt changes of ocean depth near the shore. Finite-difference techniques for the shallow water equations that represent the boundary as stair steps may give inaccurate results near the coastline, where results are of greatest interest for various applications. This suggests the use of methods capable of incorporating the irregular boundary in coastal belts. At the same time, large velocity gradients are expected near the beach and islands, as water depth varies abruptly near the coast. A nested numerical scheme with fine resolution is the best resort for enhancing numerical accuracy with the fewest grid points: fine resolution covers the region of interest, where the velocity changes rapidly, and is unnecessary away from that region. This paper describes the development of a boundary-fitted nested grid (BFNG) model to compute the propagation of the 2004 Indonesian tsunami in southern Thailand's coastal waters. We develop a numerical model employing a nested shallow water model and an orthogonal boundary-fitted grid to investigate the impact of the 2004 Indonesian tsunami on southern Thailand. Comparisons of water surface elevation obtained from numerical simulations and field measurements are made.
Keywords: Indonesian tsunami of 2004, boundary-fitted nested grid model, Southern Thailand, finite difference method
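The building block beneath such models is a finite-difference discretization of the shallow water equations. A deliberately minimal 1-D linearized sketch on a staggered grid is shown below; the real model is 2-D, nonlinear, nested, and boundary-fitted, and all parameters here are illustrative.

```python
import numpy as np

g, H = 9.81, 1000.0             # gravity (m/s^2), resting depth (m)
nx, dx = 200, 5000.0            # grid points, spacing (m)
dt = 0.5 * dx / np.sqrt(g * H)  # CFL-limited time step

eta = np.exp(-((np.arange(nx) - 100) * dx / 5e4) ** 2)  # initial surface hump
u = np.zeros(nx + 1)            # velocities on cell faces; ends stay 0 (walls)

for _ in range(500):
    # momentum: du/dt = -g * d(eta)/dx ; continuity: d(eta)/dt = -H * du/dx
    u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])
    eta -= H * dt / dx * (u[1:] - u[:-1])

print(f"max surface elevation after {500 * dt / 60:.0f} min: {eta.max():.3f} m")
```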
Procedia PDF Downloads 441
1421 From Battles to Balance and Back: Document Analysis of EU Copyright in the Digital Era
Authors: Anette Alén
Abstract:
Intellectual property (IP) regimes have traditionally been designed to integrate various conflicting elements stemming from private entitlement and the public good. In IP laws and regulations, this design takes the form of specific uses of protected subject-matter without the right-holder's consent, or exhaustion of exclusive rights upon market release, and the like. More recently, the pursuit of 'balance' has gained ground in the conceptualization of these conflicting elements, both in terms of IP law and related policy. This can be seen, for example, in the European Union (EU) copyright regime, where 'balance' has become a key element in argumentation, backed up by fundamental rights reasoning. This development also entails an ever-expanding dialogue between the IP regime and the constitutional safeguards for property, free speech, and privacy, among others. This study analyses the concept of 'balance' in EU copyright law: the research task is to examine the contents of the concept of 'balance' and the way it is operationalized and pursued, thereby producing new knowledge on the role and manifestations of 'balance' in recent copyright case law and regulatory instruments in the EU. The study discusses two particular pieces of legislation, the EU Digital Single Market (DSM) Copyright Directive (EU) 2019/790 and the finalized EU Artificial Intelligence (AI) Act, including some of the key preparatory materials, as well as EU Court of Justice (CJEU) case law pertaining to copyright in the digital era. The material is examined by means of document analysis, mapping the ways 'balance' is approached and conceptualized in the documents. The interaction of fundamental rights as part of the balancing act is likewise analyzed. Doctrinal study of law is also employed in the analysis of legal sources. This study suggests that the pursuit of balance is, for its part, conducive to new battles, largely due to the advancement of digitalization and more recent developments in artificial intelligence. Indeed, the 'balancing act' rather presents itself as a way to bypass or even solidify some of the conflicting interests in a complex global digital economy. Such a conceptualization, especially when accompanied by non-critical or strategically driven fundamental rights argumentation, runs counter to the genuine acknowledgment of new types of conflicting interests in the copyright regime. Therefore, a more radical approach, including critical analysis of the normative basis and fundamental rights implications of the concept of 'balance', is required to readjust copyright law and regulations for the digital era. Notwithstanding the focus on the EU copyright regime, the results bear wider significance for the digital economy, especially due to the platform liability regime in the DSM Directive and the AI Act's objective of a 'level playing field', whereby compliance with EU copyright rules is expected among system providers.
Keywords: balance, copyright, fundamental rights, platform liability, artificial intelligence
Procedia PDF Downloads 31
1420 Covid Medical Imaging Trial: Utilising Artificial Intelligence to Identify Changes on Chest X-Ray of COVID
Authors: Leonard Tiong, Sonit Singh, Kevin Ho Shon, Sarah Lewis
Abstract:
Investigation into the use of artificial intelligence in radiology continues to develop at a rapid rate. During the coronavirus pandemic, the combination of an exponential increase in chest X-rays and unpredictable staff shortages placed a huge strain on the department's workload. The World Health Organisation estimates that two-thirds of the global population does not have access to diagnostic radiology. Therefore, there could be demand for a program that detects acute changes on imaging compatible with infection to assist with screening. We generated a convolutional neural network and tested its efficacy in recognizing changes compatible with coronavirus infection. Following ethics approval, a deidentified set of 77 normal and 77 abnormal chest X-rays of patients with confirmed coronavirus infection was used to generate an algorithm that could train, validate and then test itself. DICOM and PNG image formats were selected due to their lossless nature. The model was trained with 100 images (50 positive, 50 negative), validated against 28 samples (14 positive, 14 negative), and tested against 26 samples (13 positive, 13 negative). The initial training involved teaching the network what constituted a normal study and what changes on the X-rays are compatible with coronavirus infection. The weightings were then modified, and the model was executed again. The training samples were processed in batches of 8 over 25 epochs of training. The results trended towards an 85.71% true positive/true negative detection rate and an area under the curve trending towards 0.95, indicating approximately 95% accuracy in detecting changes on chest X-rays compatible with coronavirus infection. Study limitations include access to only a small dataset and no specificity in the diagnosis. Following a discussion with our programmer, there are areas where modifications to the weighting of the algorithm can be made in order to improve the detection rates. Given the high detection rate of the program and the potential ease of implementation, it would be effective in helping staff who are not trained in radiology to detect otherwise subtle changes that might not be appreciated on imaging. Limitations include the lack of a differential diagnosis and of the appropriate clinical history, although this may be less of a problem in day-to-day clinical practice. It is nonetheless our belief that implementing this program and widening its scope to detecting multiple pathologies, such as lung masses, will greatly assist both the radiology department and our colleagues by improving workflow and detection rates.
Keywords: artificial intelligence, COVID, neural network, machine learning
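A sketch of the held-out evaluation step (threshold accuracy plus area under the ROC curve) is shown below; the prediction scores are randomly generated stand-ins, since only the aggregate results are reported, and in the study they would come from the trained network on the 26 test radiographs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
y_true = np.array([1] * 13 + [0] * 13)                 # 13 positive, 13 negative
y_score = np.clip(y_true * 0.7 + rng.normal(0.15, 0.2, 26), 0, 1)  # stand-in scores

accuracy = ((y_score >= 0.5).astype(int) == y_true).mean()
auc = roc_auc_score(y_true, y_score)
print(f"accuracy {accuracy:.2%}, AUC {auc:.2f}")
```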
Procedia PDF Downloads 93
1419 A Thorough Analysis of the Dialog Application Replika
Authors: Weeam Abdulrahman, Gawaher Al-Madwary, Fatima Al-Ammari, Razan Mohammad
Abstract:
This research discusses the AI features of Replika, a dialog application with customizable characters that provides the user with different ways of interacting and communicating with AI. Distributing a survey with questions on how the AI worked was one approach to exposing the app to others to use; we also carried out an analysis that provides the conclusions of our research, so that, as a result, individuals will be able to try out the app. In the methodology, we explain each page that pops up on the screen while using Replika and specify each part and icon.
Keywords: Replika, AI, artificial intelligence, dialog app
Procedia PDF Downloads 177
1418 Artificial Intelligence in Management Simulators
Authors: Nuno Biga
Abstract:
Artificial Intelligence (AI) has the potential to transform management in several impactful ways. It allows machines to interpret information to find patterns in big data and learn from context analysis, optimize operations, make predictions sensitive to each specific situation, and support data-driven decision making. The introduction of an 'artificial brain' into an organization also enables learning from complex information and data provided by those who train it, namely its users. The "Assisted-BIGAMES" version of the Accident & Emergency (A&E) simulator introduces the concept of a context-sensitive "Virtual Assistant" (VA) that provides users with useful suggestions for operations such as: a) relocating workstations in order to shorten travelled distances and minimize the stress of those involved; b) identifying in real time existing bottleneck(s) in the operations system so that it is possible to act upon them quickly; c) identifying resources that should be polyvalent so that the system can be more efficient; d) identifying the specific processes in which it may be advantageous to establish partnerships with other teams; and e) assessing possible solutions based on the suggested KPIs, allowing action monitoring to guide the (re)definition of future strategies. This paper is built on the BIGAMES© simulator and presents the conceptual AI model developed and demonstrated through a pilot project (BIG-AI). Each Virtual Assisted BIGAME is a management simulator developed by the author that guides operational and strategic decision making, providing users with useful information in the form of management recommendations that make it possible to predict the actual outcome of alternative strategic management actions. The pilot project incorporates results from 12 editions of the BIGAME A&E that took place between 2017 and 2022 at AESE Business School, based on a compilation of data that allows causal relationships to be established between decisions taken and results obtained. The systemic analysis and interpretation of data is powered in the Assisted-BIGAMES by a computer application called the "BIGAMES Virtual Assistant" (VA) that players can use during the game. Each participant permanently asks himself which decisions he should make during the game to win the competition. To this end, the role of each team's VA consists of guiding the players to be more effective in their decision making by presenting recommendations based on AI methods. It is important to note that the VA's suggestions for action can be accepted or rejected by the managers of each team, as they gain a better understanding of the issues over time, reflect on good practice, and rely on their own experience, capability and knowledge to support their own decisions. Preliminary results show that the introduction of the VA speeds up learning of the decision-making process. The facilitator, designated the "Serious Game Controller" (SGC), is responsible for supporting the players with further analysis. The actions recommended by the SGC may differ from or be similar to those previously provided by the VA, ensuring a higher degree of robustness in decision-making. Additionally, all the information should be jointly analyzed and assessed by each player, who is expected to add "Emotional Intelligence", an essential component absent from the machine learning process.
Keywords: artificial intelligence, gamification, key performance indicators, machine learning, management simulators, serious games, virtual assistant
Procedia PDF Downloads 104
1417 Artificial Intelligence and Robotics in the Eye of Private Law with Special Regards to Intellectual Property and Liability Issues
Authors: Barna Arnold Keserű
Abstract:
In the last few years (what many scholars call the big data era), artificial intelligence (hereinafter AI) has received more and more attention from the public and from the different branches of science. What was previously mere science fiction is now becoming reality. AI and robotics often walk hand in hand, which changes not only business and industrial life but also has a serious impact on the legal system. The author's main research focuses on these impacts in the field of private law, with special regard to liability and intellectual property issues. Many questions arise in these areas in connection with AI and robotics, where the boundaries are not sufficiently clear and different needs are articulated by the different stakeholders. Recognizing the urgent need for reflection, the Committee on Legal Affairs of the European Parliament adopted a Motion for a European Parliament Resolution A8-0005/2017 (of January 27th, 2017) in order to make recommendations to the Commission on civil law rules on robotics and AI. This document identifies some crucial uses of AI and/or robotics, e.g. in the field of autonomous vehicles, the replacement of human jobs in industry, and smart applications and machines, and it aims to foster the safe and beneficial use of AI and robotics. However – as the document says – there are no legal provisions that specifically apply to robotics or AI in IP law, but existing legal regimes and doctrines can be readily applied to robotics, although some aspects appear to call for specific consideration; the document therefore calls on the Commission to support a horizontal and technologically neutral approach to intellectual property applicable to the various sectors in which robotics could be employed. AI can generate content worthy of copyright protection, but the question arises: who is the author, and who owns the copyright? The AI itself cannot be deemed the author, because that would mean it is legally equal to human persons. But there is the programmer who created the basic code of the AI, the undertaking that sells the AI as a product, and the user who gives the inputs to the AI in order to create something new. Or AI-generated content may be so far removed from humans that there is no human author at all, so that this content belongs to the public domain. The same questions can be asked in connection with patents. The research aims to answer these questions within the current legal framework and tries to illuminate future possibilities for adapting these frameworks to socio-economic needs. Here, proper license agreements in the multilevel chain from the programmer to the end-user become very important, because AI is intellectual property in itself that creates further intellectual property. This could collide with data-protection and property rules as well. The problems are similar in the field of liability. We can use different existing forms of liability when AI or AI-led robotics cause damage, but it is unclear whether the result complies with economic and developmental interests.
Keywords: artificial intelligence, intellectual property, liability, robotics
Procedia PDF Downloads 203
1416 Awareness among Medical Students and Faculty about Integration of Artificial Intelligence Literacy in Medical Curriculum
Authors: Fatima Faraz
Abstract:
BACKGROUND: Artificial intelligence (AI) provides new opportunities across a wide variety of industries, and healthcare is no exception. AI can lead to advancements in how the healthcare system functions and improve the quality of patient care. Developing countries like Pakistan are lagging in the implementation of AI-based solutions in healthcare. This demands increased knowledge and AI literacy among healthcare professionals. OBJECTIVES: To assess the level of awareness among medical students and faculty about AI in preparation for teaching AI basics and data science applications in clinical practice in an integrated medical curriculum. METHODS: An online 15-question semi-structured questionnaire, previously tested and validated, was delivered to participants through convenience sampling. The questionnaire was composed of three parts: the participant's background knowledge, AI awareness, and attitudes toward AI applications in medicine. RESULTS: A total of 182 students and 39 faculty members from Rawalpindi Medical University, Pakistan, participated in the study. Only 26% of students and 46.2% of faculty members responded that they were aware of AI topics in clinical medicine. The major source of AI knowledge was social media (35.7%) for students and professional talks and colleagues (43.6%) for faculty members. 23.5% of participants answered that they personally had a basic understanding of AI. Students and faculty (60.1%) were interested in AI in the patient care and teaching domains. These findings parallel similar published AI survey results. CONCLUSION: This survey shows interest among students and faculty in AI developments and technology applications in healthcare. Further studies are required in order to properly fit AI into the integrated modular curriculum of medical education.
Keywords: medical education, data science, artificial intelligence, curriculum
Procedia PDF Downloads 101
1415 Evaluation of Short-Term Load Forecasting Techniques Applied for Smart Micro-Grids
Authors: Xiaolei Hu, Enrico Ferrera, Riccardo Tomasi, Claudio Pastrone
Abstract:
Load forecasting plays a key role in making today's and tomorrow's smart energy grids sustainable and reliable. Accurate power consumption prediction allows utilities to organize their resources in advance and to execute demand response strategies more effectively, which enables several benefits such as higher sustainability, better quality of service, and affordable electricity tariffs. Load forecasting is easy yet effective to apply at the scale of smart micro grids, wherein the lower available grid flexibility makes accurate prediction more critical in demand response applications. This paper analyses the application of short-term load forecasting in a concrete scenario, proposed within the EU-funded GreenCom project, which collects load data from single loads and households belonging to a smart micro grid. Three short-term load forecasting techniques, i.e. linear regression, artificial neural networks, and radial basis function networks, are considered, compared, and evaluated through absolute forecast errors and training time. The influence of weather conditions on load forecasting is also evaluated. A new definition of gain is introduced in this paper, which innovatively serves as an indicator of short-term prediction capability in terms of time span consistency. Two models, 24- and 1-hour-ahead forecasting, are built to comprehensively compare these three techniques.
Keywords: short-term load forecasting, smart micro grid, linear regression, artificial neural networks, radial basis function network, gain
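A sketch comparing two of the evaluated techniques (linear regression and an artificial neural network) on 1-hour-ahead forecasting follows; the synthetic hourly load with a daily cycle, the 24-hour lag window, and the MAE metric are assumptions, not the GreenCom dataset or the paper's exact protocol.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(5)
hours = np.arange(24 * 60)                       # 60 days of hourly samples
load = 2 + np.sin(2 * np.pi * hours / 24) + 0.3 * rng.normal(size=hours.size)

# 1-hour-ahead forecasting: predict load[t] from the previous 24 hours.
X = np.array([load[t - 24:t] for t in range(24, load.size)])
y = load[24:]
split = int(0.8 * len(y))

for name, model in [("linear regression", LinearRegression()),
                    ("ANN", MLPRegressor(hidden_layer_sizes=(32,),
                                         max_iter=3000, random_state=0))]:
    model.fit(X[:split], y[:split])
    mae = mean_absolute_error(y[split:], model.predict(X[split:]))
    print(f"{name:18s} MAE = {mae:.3f} kW")
```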
Procedia PDF Downloads 468
1414 Study of the Persian Gulf's and Oman Sea's Numerical Tidal Currents
Authors: Fatemeh Sadat Sharifi
Abstract:
In this research, a barotropic model was employed to study the tides in the Persian Gulf and the Oman Sea, where the only forcing applied was the tidal force. To do so, a finite-difference, free-surface model called the Regional Ocean Modeling System (ROMS) was run on data covering the Persian Gulf and the Oman Sea. To analyze the flow patterns of the region, the results of a limited-area model, the Finite Volume Community Ocean Model (FVCOM), were used. These two water bodies were chosen because both are among the most critical in terms of economy, biology, fishery, shipping, navigation, and petroleum extraction. Tide data from the OSU Tidal Prediction Software (OTPS) and observations were used to validate the model results. Next, tidal elevation and speed, as well as tidal harmonic analysis, were interpreted. Preliminary results show good accuracy in tidal height compared with observations and OTPS data, and indicate that tidal currents are strongest in the Strait of Hormuz and in the narrow, shallow region between the Iranian coast and the islands. Furthermore, the tidal analysis clarifies that the M2 component has the most significant amplitude. Finally, the Persian Gulf tidal currents divide into two branches: the first branch runs from the south toward Qatar and, via the United Arab Emirates, turns toward the Strait of Hormuz. The second branch, in the north and west, extends up to the head of the Persian Gulf, where it turns counterclockwise.
Keywords: numerical model, barotropic tide, tidal currents, OSU tidal prediction software, OTPS
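Harmonic tidal analysis of the kind used to identify M2 dominance amounts to a least-squares fit of sinusoids at known constituent frequencies. A minimal sketch for a single M2 constituent (period 12.42 h) on a synthetic elevation record:

```python
import numpy as np

t = np.arange(0, 30 * 24, 1.0)                  # 30 days, hourly (hours)
omega = 2 * np.pi / 12.42                       # M2 angular frequency (rad/h)
rng = np.random.default_rng(6)
eta = 0.8 * np.cos(omega * t - 1.0) + 0.1 * rng.normal(size=t.size)  # synthetic

# eta(t) ~ a*cos(omega t) + b*sin(omega t); solve by linear least squares.
A = np.column_stack([np.cos(omega * t), np.sin(omega * t)])
(a, b), *_ = np.linalg.lstsq(A, eta, rcond=None)
amp, phase = np.hypot(a, b), np.arctan2(b, a)
print(f"M2 amplitude = {amp:.2f} m, phase = {phase:.2f} rad")
```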
Procedia PDF Downloads 131
1413 Aerodynamic Heating and Drag Reduction of Pegasus-XL Satellite Launch Vehicle
Authors: Syed Muhammad Awais Tahir, Syed Hossein Raza Hamdani
Abstract:
In the last two years, there has been a substantial increase in the rate of satellite launches. To keep up with the technology, it is imperative that launch costs be made affordable, especially in developing and underdeveloped countries. Launch cost is directly affected by the launch vehicle's aerodynamic performance. The Pegasus-XL SLV (Satellite Launch Vehicle) has served as a commercial SLV for the last 26 years, conducting commercial flight operations from six operational sites around the US, Europe, and the Marshall Islands. Aerodynamic heating and drag contribute largely to Pegasus's flight performance. The objective of this study is to reduce the aerodynamic heating and drag on Pegasus's body significantly for the supersonic and hypersonic flight regimes. Aerodynamic data for Pegasus's first flight have been validated through CFD (Computational Fluid Dynamics), and drag and aerodynamic heating are then reduced by using a combination of a forward-facing cylindrical spike and a conical aero-disk at the actual operational flight conditions. CFD analysis using ANSYS Fluent will be carried out for Mach numbers from 0.83 to 7.8 and AoA (Angle of Attack) from -4 to +24 degrees for both the simple and spiked configurations, and the comparison will be drawn using a variety of graphs and contours. The expected drag reduction for supersonic flight is around 15% to 25%, and for hypersonic flight around 30% to 50%, especially for AoA < 15°. A 5% to 10% reduction in aerodynamic heating is expected for the hypersonic regime. In conclusion, the aerodynamic performance of the air-launched Pegasus-XL SLV can be further enhanced, leading to optimal fuel usage and a more economical orbital flight.
Keywords: aerodynamics, pegasus-XL, drag reduction, aerodynamic heating, satellite launch vehicle, SLV, spike, aero-disk
Procedia PDF Downloads 105
1412 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution
Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino
Abstract:
This paper presents a design methodology in which stakeholders are assisted in the exploration of a so-called negotiation space, aiming at the maximization of both group social welfare and each stakeholder's perceived utility. The outcome is fewer design iterations needed for design convergence, together with higher solution effectiveness. During the early stage of a space project, not only the knowledge about the system but also the decision outcomes are often unknown. The scenario is exacerbated by the fact that decisions taken at this stage carry delayed costs. Hence, it is necessary to have a clear definition of the problem under analysis, especially in the initial definition. This can be obtained thanks to robust generation and exploration of design alternatives. This process must account for the fact that design usually involves various individuals who take decisions affecting one another. Effective coordination among these decision-makers is critical, and finding a mutually agreed solution reduces the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology that aims to speed up the advancement of a mission's concept maturity level. This push is obtained thanks to guided exploration of the negotiation space, which involves autonomous exploration and optimization of trade opportunities among stakeholders via artificial intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method infused with game theory and multi-attribute utility theory. In particular, game theory models the negotiation process to reach equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process to search efficiently and rapidly for the Pareto equilibria among stakeholders. Finally, the concept of utility constitutes the mechanism that bridges the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders' needs and guarantees the effectiveness of the selected mission concept thanks to its robustness to change. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization
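One building block of such a framework, Pareto filtering of candidate designs scored by per-stakeholder utilities, can be sketched briefly; the candidate designs, utilities, and the simple welfare rule below are invented for illustration, not the paper's algorithm.

```python
import numpy as np

def pareto_front(utilities):
    """Indices of non-dominated rows (no other row is >= everywhere and > somewhere)."""
    n = utilities.shape[0]
    keep = []
    for i in range(n):
        dominated = any(np.all(utilities[j] >= utilities[i]) and
                        np.any(utilities[j] > utilities[i]) for j in range(n))
        keep.append(not dominated)
    return np.flatnonzero(keep)

rng = np.random.default_rng(7)
U = rng.uniform(0, 1, size=(50, 3))             # 50 designs x 3 stakeholders' utilities
front = pareto_front(U)
best = front[np.argmax(U[front].sum(axis=1))]   # social-welfare pick on the front
print(f"{front.size} Pareto-optimal designs; welfare-maximizing: design {best}")
```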
Procedia PDF Downloads 1361411 Ethical Artificial Intelligence: An Exploratory Study of Guidelines
Authors: Ahmad Haidar
Abstract:
The rapid adoption of Artificial Intelligence (AI) technology holds unforeseen risks like privacy violation, unemployment, and algorithmic bias, prompting research institutions, governments, and companies to develop principles of AI ethics. The extensive and diverse literature on AI lacks an analysis of the evolution of the principles developed in recent years. This paper has two fundamental purposes. The first is to provide insights into how the principles of AI ethics have changed recently, including concepts like risk management and public participation; in doing so, a NOISE (Needs, Opportunities, Improvements, Strengths, & Exceptions) analysis is presented. The second is to offer a framework for building ethical AI linked to sustainability. This research adopts an explorative approach, more specifically an inductive approach, to address the theoretical gap. Consequently, this paper tracks the different efforts toward “trustworthy AI” and “ethical AI,” compiling a list of 12 documents released from 2017 to 2022. The analysis of this list unifies the different approaches toward trustworthy AI in two steps: first, splitting the principles into two categories, technical and net benefit, and second, testing the frequency of each principle, yielding the technical principles that may be useful for stakeholders considering the lifecycle of AI, or what is known as sustainable AI. Sustainable AI is the third wave of AI ethics and a movement to drive change throughout the entire lifecycle of AI products (i.e., idea generation, training, re-tuning, implementation, and governance) in the direction of greater ecological integrity and social fairness. In this vein, the results suggest transparency, privacy, fairness, safety, autonomy, and accountability as the recommended technical principles to include in the lifecycle of AI. Another contribution is to capture the different bases that aid the process of AI for sustainability (e.g., toward the sustainable development goals). The results indicate data governance, do no harm, human well-being, and risk management as crucial AI-for-sustainability principles. This study’s last contribution clarifies how the principles evolved. To illustrate, in 2018, the Montreal Declaration mentioned principles including well-being, autonomy, privacy, solidarity, democratic participation, equity, and diversity. In 2021, further notions emerged from the European Commission proposal, including public trust, public participation, scientific integrity, risk assessment, flexibility, benefit and cost, and interagency coordination. The study design strengthens the validity of previous studies. Moreover, we advance knowledge on trustworthy AI by considering recent documents, linking principles with sustainable AI and AI for sustainability, and shedding light on the evolution of guidelines over time.Keywords: artificial intelligence, AI for sustainability, declarations, framework, regulations, risks, sustainable AI
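The frequency test described above amounts to counting how often each principle appears across the analyzed documents. The sketch below illustrates that step; the document-to-principle mapping is a hypothetical stand-in for the 12 guidelines the study actually coded.

    from collections import Counter

    # Hypothetical mapping of guideline documents to the principles they
    # mention; the real study analyzed 12 documents from 2017 to 2022.
    documents = {
        "Montreal Declaration (2018)": ["well-being", "autonomy", "privacy", "equity"],
        "EU HLEG Guidelines (2019)": ["transparency", "privacy", "fairness", "accountability"],
        "EC Proposal (2021)": ["risk management", "public participation", "transparency"],
    }

    # Count how many documents mention each principle
    frequency = Counter(p for principles in documents.values() for p in principles)
    for principle, count in frequency.most_common():
        print(f"{principle}: {count}")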
Procedia PDF Downloads 931410 Recent Developments in the Application of Deep Learning to Stock Market Prediction
Authors: Shraddha Jain Sharma, Ratnalata Gupta
Abstract:
Predicting stock movements in the financial market is both difficult and rewarding. Analysts and academics are increasingly using advanced approaches such as machine learning techniques to anticipate stock price patterns, thanks to the expanding capacity of computing and the recent advent of graphics processing units and tensor processing units. Stock market prediction is a type of time series prediction that is incredibly difficult, since stock prices are influenced by a variety of financial, socioeconomic, and political factors. Furthermore, even minor mistakes in stock market price forecasts can result in significant losses for companies that employ the findings of stock market price prediction for financial analysis and investment. Soft computing techniques are increasingly being employed for stock market prediction due to their better accuracy than traditional statistical methodologies. The proposed research examines the need for soft computing techniques in stock market prediction, the numerous soft computing approaches that are important to the field, past work in the area and its prominent features, and the significant open problems the area involves. For constructing a predictive model, the major focus is on neural networks and fuzzy logic. The stock market is extremely unpredictable, and it is unquestionably tough to predict correctly from a given set of characteristics. This study provides a complete overview of the numerous strategies investigated for high-accuracy prediction, with a focus on the most important characteristics.Keywords: stock market prediction, artificial intelligence, artificial neural networks, fuzzy logic, accuracy, deep learning, machine learning, stock price, trading volume
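The time-series framing the survey discusses starts from turning a price history into supervised (window, next value) pairs. The sketch below builds such sliding windows and fits a plain linear autoregression as a baseline; the synthetic series, window length, and split are assumptions, and the surveyed methods replace the linear model with neural networks or fuzzy systems.

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic random-walk series standing in for real market prices
    prices = np.cumsum(rng.normal(0, 1, 500)) + 100

    def make_windows(series, window):
        """Turn a series into (lagged-window, next-value) training pairs."""
        X = np.lib.stride_tricks.sliding_window_view(series[:-1], window)
        y = series[window:]
        return X, y

    window = 10
    X, y = make_windows(prices, window)
    split = int(0.8 * len(X))

    # Linear autoregression fitted by least squares as a simple baseline
    coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
    pred = X[split:] @ coef
    rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
    print(f"baseline AR({window}) test RMSE: {rmse:.3f}")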
Procedia PDF Downloads 901409 Web and Smart Phone-based Platform Combining Artificial Intelligence and Satellite Remote Sensing Data to Geoenable Villages for Crop Health Monitoring
Authors: Siddhartha Khare, Nitish Kr Boro, Omm Animesh Mishra
Abstract:
Recent food price hikes may signal the end of an era of predictable global grain crop plenty due to climate change, population expansion, and dietary changes. Food consumption will treble in 20 years, requiring enormous production expenditures. Rainfall patterns and seasonal cycles have shifted over the past decade, and India’s tropical agriculture relies on evapotranspiration and the monsoons. In places with limited resources, global environmental change affects agricultural productivity and farmers’ capacity to adjust to changing moisture patterns. Motivated by these difficulties, satellite remote sensing can be combined with near-surface imaging data (smartphones, UAVs, and PhenoCams) to enable phenological monitoring and fast evaluations of the field-level consequences of extreme weather events on smallholder agricultural output. To accomplish this, we must digitally map all communities’ agricultural boundaries and crop types. With the improvement of satellite remote sensing technologies, a geo-referenced database can be created for rural Indian agricultural fields, and with AI we can design digital agricultural solutions for individual farms. The main objective is to geo-enable each farm, along with its seasonal crop information, by combining artificial intelligence (AI) with satellite and near-surface data, and then to carry out long-term crop monitoring through in-depth field analysis and scanning of fields with satellite-derived vegetation indices. We developed an AI-based algorithm that tracks time-lapse vegetation growth from PhenoCam or smartphone images, along with an Android application through which users collect images of their fields. These images are sent to our server, where further AI-based processing is performed. We are creating digital boundaries of individual farms and connecting these farms with our smartphone application to collect information about farmers and their crops in each season. We extract satellite-based information for each farm through the Google Earth Engine API, merge it with the crop data collected through the app according to each farm’s location, and build a database that provides crop quality information for each location.Keywords: artificial intelligence, satellite remote sensing, crop monitoring, Android and web application
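Vegetation-index scanning of the kind described above typically relies on NDVI, defined as (NIR − Red) / (NIR + Red). The sketch below computes it with plain NumPy; the band values are synthetic stand-ins for the satellite reflectances pulled through the Google Earth Engine API, and the per-farm workflow around it is assumed.

    import numpy as np

    def ndvi(nir, red, eps=1e-9):
        """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
        nir = nir.astype(float)
        red = red.astype(float)
        return (nir - red) / (nir + red + eps)

    # Synthetic reflectance patches standing in for one farm's bands
    nir_band = np.array([[0.55, 0.60], [0.48, 0.52]])
    red_band = np.array([[0.10, 0.08], [0.15, 0.12]])

    index = ndvi(nir_band, red_band)
    # Values near +1 indicate dense, healthy vegetation; near 0, bare soil
    print(f"mean NDVI for the field: {index.mean():.3f}")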
Procedia PDF Downloads 1001408 Optimal Uses of Rainwater to Maintain Water Level in Gomti Nagar, Uttar Pradesh, India
Authors: Alok Saini, Rajkumar Ghosh
Abstract:
Water is nature's most important resource for the survival of all living things, yet freshwater scarcity exists in some parts of the world. This study predicts that the Gomti Nagar area (49.2 sq. km.) will harvest about 91110 ML of rainwater by 2051 (assuming the present annual rainfall remains constant), whereas only 17.71 ML of rainwater was harvested from just 53 buildings in the Gomti Nagar area in 2021. Such groundwater recharge would raise the water level in Gomti Nagar by 13 cm. The total annual groundwater abstraction from the Gomti Nagar area was 35332 ML (in 2021). Due to hydrogeological constraints and lower annual rainfall, groundwater recharge is less than groundwater abstraction. In the current scenario, only 0.07% of rainwater recharges groundwater via RTRWHs in Gomti Nagar; if RTRWHs were installed in all buildings, 12.39% of rainwater could recharge the groundwater table in the Gomti Nagar area. Gomti Nagar is situated in 'Zone–A' (a water distribution area), and groundwater is the primary source of freshwater supply. In Gomti Nagar, the cumulative difference between groundwater abstraction and recharge will reach 735570 ML over 30 years. Statistically, all buildings in Gomti Nagar (new and renovated) could harvest 3037 ML of rainwater annually through RTRWHs, while the most recent monsoonal recharge in Gomti Nagar was 10813 ML/yr. Harvested rainwater collected from RTRWHs can be used for rooftop irrigation and residential kitchens and gardens (home-grown fruit and vegetables). According to the bylaws, RTRWH installation is required in both newly constructed and existing buildings with plot areas of 300 sq. m or above. Harvested rainwater is of higher quality than contaminated groundwater, and buildings with RTRWHs can be considered water self-sufficient. Rooftop Rainwater Harvesting Systems (RTRWHs) are the least expensive, most eco-friendly, and most sustainable alternative water resource for artificial recharge. This study also predicts a water level rise of about 3.9 m in the Gomti Nagar area by 2051, but only if all buildings install RTRWHs and harvest rainwater for groundwater recharging. As a result, the current study serves as an impact assessment of RTRWH implementation for the water scarcity problem in the Gomti Nagar area (1.36 sq.km.). This study suggests that common storage tanks (recharge wells) should be built for groups of at least ten (10) households so that an optimal amount of harvested rainwater can be stored annually. Artificial recharge from alternative water sources will be required to reverse the declining water level trend and balance the groundwater table in this area; continued over-exploitation of groundwater may lead to land subsidence and the development of vertical cracks.Keywords: aquifer, aquitard, artificial recharge, bylaws, groundwater, monsoon, rainfall, rooftop rainwater harvesting system, RTRWHs, water table, water level
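Estimates of this kind follow from two standard relations: harvested volume = roof area × rainfall depth × runoff coefficient, and water table rise = recharge volume / (recharge area × specific yield). The sketch below applies them with hypothetical inputs; the rainfall depth, runoff coefficient, specific yield, and number of participating buildings are assumptions for illustration, not the study's surveyed values.

    # Hypothetical inputs for one building and the recharge area
    roof_area_m2 = 300.0          # plot size at which bylaws mandate RTRWHs
    annual_rainfall_m = 1.0       # assumed annual rainfall depth
    runoff_coefficient = 0.85     # assumed for a hard rooftop
    recharge_area_m2 = 49.2e6     # Gomti Nagar area (49.2 sq. km.)
    specific_yield = 0.12         # assumed aquifer specific yield
    n_buildings = 1000            # hypothetical count of participating buildings

    # Harvested volume = roof area x rainfall depth x runoff coefficient
    harvest_m3 = roof_area_m2 * annual_rainfall_m * runoff_coefficient

    # Water table rise if that volume recharges the aquifer uniformly
    rise_m = (harvest_m3 * n_buildings) / (recharge_area_m2 * specific_yield)

    print(f"harvest per building: {harvest_m3:.0f} m3/yr")
    print(f"water table rise: {rise_m * 100:.1f} cm/yr")

With these assumed inputs, each building harvests about 255 m3/yr, of the same order as the study's 17.71 ML from 53 buildings, and 1000 participating buildings would raise the water table by roughly 4 cm per year.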
Procedia PDF Downloads 97