Search results for: delay tolerant networks
447 AI for Efficient Geothermal Exploration and Utilization
Authors: Velimir Monty Vesselinov, Trais Kliplhuis, Hope Jasperson
Abstract:
Artificial intelligence (AI) is a powerful tool in the geothermal energy sector, aiding in both exploration and utilization. Identifying promising geothermal sites can be challenging due to limited surface indicators and the need for expensive drilling to confirm subsurface resources. Geothermal reservoirs can be located deep underground and exhibit complex geological structures, making traditional exploration methods time-consuming and imprecise. AI algorithms can analyze vast datasets of geological, geophysical, and remote sensing data, including satellite imagery, seismic surveys, geochemistry, geology, etc. Machine learning algorithms can identify subtle patterns and relationships within this data, potentially revealing hidden geothermal potential in areas previously overlooked. To address these challenges, a SIML (Science-Informed Machine Learning) technology has been developed. SIML methods differ from traditional ML techniques. In both cases, the ML models are trained to predict the spatial distribution of an output (e.g., pressure, temperature, heat flux) based on a series of inputs (e.g., permeability, porosity, etc.). Traditional ML relies on deep and wide neural networks (NNs) based on simple algebraic mappings to represent complex processes. In contrast, the SIML neurons incorporate complex mappings (including constitutive relationships and physics/chemistry models). This results in ML models that have a physical meaning and satisfy physics laws and constraints. The prototype of the developed software, called GeoTGO, is accessible through the cloud. Our software prototype demonstrates how different data sources can be made available for processing, executes demonstrative SIML analyses, and presents the results in tabular and graphic form.
Keywords: science-informed machine learning, artificial intelligence, exploration, utilization, hidden geothermal
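The contrast drawn above between purely algebraic networks and science-informed ones can be illustrated with a minimal physics-informed training loop. The sketch below is not GeoTGO's actual model: it fits a small PyTorch network to two boundary temperature observations while penalizing violations of a toy one-dimensional steady-state heat-conduction law, and all layer sizes, values, and variable names are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Minimal physics-informed sketch (illustrative, not GeoTGO): fit a network
# T(x) to boundary temperature data while penalizing violations of a toy
# steady-state heat-conduction law d2T/dx2 = 0 at collocation points.
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x_bc = torch.tensor([[0.0], [1.0]])              # scaled depths with observations
t_bc = torch.tensor([[15.0], [45.0]])            # observed temperatures (deg C)
x_col = torch.rand(100, 1, requires_grad=True)   # collocation points in (0, 1)

for step in range(2000):
    opt.zero_grad()
    loss_data = ((net(x_bc) - t_bc) ** 2).mean()          # data misfit
    t = net(x_col)
    dT = torch.autograd.grad(t, x_col, torch.ones_like(t), create_graph=True)[0]
    d2T = torch.autograd.grad(dT, x_col, torch.ones_like(dT), create_graph=True)[0]
    loss_phys = (d2T ** 2).mean()                          # physics residual
    (loss_data + loss_phys).backward()
    opt.step()
```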
446 Developing Family-Based Eco-Citizenship with Social Media: A Mixed Methods Collective Case Study of Families Looking to Adopt Ecologically Responsible Actions Using Facebook
Authors: Michel T. Leger, Shawn Martin
Abstract:
Leading an ecologically responsible lifestyle represents a difficult challenge. Though research in environmental education does point to an increase in the intention to act more responsibly towards the environment, this intent does not seem to translate into concrete ecological action. This mixed methods collective case study explores the adoption of ecological actions in the family, a context of socio-ecological transformation rarely examined in the scientific literature. More specifically, it takes into account the popular use of social media today to explore the potential role of social media, namely Facebook, in promoting environmental action. In other words, for families who are intent on adopting an ecologically friendly lifestyle, could the use of Facebook positively affect the way family members relate to the environment and bring about real change in their daily household actions? To answer this question, twenty-one families living in an urban setting were recruited and then divided into two distinct groups. The first group of families attempted to lower their household electrical bill as part of a private Facebook group, while the other aimed to do the same, but without the directed use of social media. For both groups, we recorded the amount of kilowatt-hours used during the project as well as the amount used for the same months the previous year, adjusting for temperature variations. Exit interviews were also conducted with each family in order to try to understand the processes of eco-citizenship development in the context of the family. Results seem to suggest that both virtual social networks and one-on-one support can help to increase environmental awareness in participating families. Interestingly, families from the Facebook group seemed to demonstrate a higher degree of environmental engagement, and younger family members in this group were more active in the processes of collective behavioral change.
Keywords: environmental education, family-based eco-citizenship, social media, case study
445 Use of Transportation Networks to Optimize the Profit Dynamics of the Product Distribution
Authors: S. Jayasinghe, R. B. N. Dissanayake
Abstract:
Optimization modelling, together with network models and linear programming techniques, is a powerful tool for problem solving and decision making in real-world applications. This study developed a mathematical model to optimize net profit by minimizing transportation cost. The model covers transportation from decentralized production plants to a centralized distribution centre and then distribution to island-wide agencies, with customer satisfaction as a requirement. The company produces 9 types of food items with 82 different varieties and 4 types of non-food items with 34 different varieties. Of the 6 production plants, 4 were located near the city of Mawanella and the other 2 were located in Galewala and Anuradhapura, which are 80 km and 150 km away from Mawanella, respectively. The warehouse located in Mawanella was the main production plant and also the only distribution plant; it distributes manufactured products to 39 agencies island-wide. The average values and amounts of goods for 6 consecutive months, from May 2013 to October 2013, were collected, and average demand values were calculated. The following constraints were used as the necessary requirements to satisfy the optimum condition of the model: one source, 39 destinations, and equal total supply and demand across all agencies. Using the transport cost per kilometre, the total transport cost was calculated. The model was then formulated using the distances and flows of the distribution. Network optimization and linear programming techniques were used to build the model, while Excel Solver was used to solve it. Results showed that the company requires a total transport cost of Rs. 146,943,034.50 to fulfil its customers' requirements for a month, which is much less than the cost observed without the model. The model also showed that the company can reduce its transportation cost by 6% when distributing to island-wide customers. The company generally satisfies its customers' requirements by 85%; this satisfaction can be increased up to 97% using the model. Therefore, this model can be used by other similar companies to reduce their transportation costs.
Keywords: mathematical model, network optimization, linear programming
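A transportation problem of this kind reduces to a standard linear program. The sketch below solves a toy two-plant, three-agency instance with SciPy rather than Excel Solver; the costs, supplies, and demands are invented for illustration and are not the company's data.

```python
import numpy as np
from scipy.optimize import linprog

# Toy transportation LP: minimize total cost of shipments x[i, j] from
# plants to agencies subject to supply and demand balance.
cost = np.array([[4.0, 6.0, 9.0],      # plant 1 -> agencies 1..3 (Rs/unit)
                 [5.0, 3.0, 7.0]])     # plant 2 -> agencies 1..3
supply = np.array([60.0, 40.0])
demand = np.array([30.0, 45.0, 25.0])  # total demand equals total supply

m, n = cost.shape
A_eq, b_eq = [], []
for i in range(m):                     # each plant ships exactly its supply
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                     # each agency receives exactly its demand
    row = np.zeros(m * n); row[j::n] = 1
    A_eq.append(row); b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
print(res.x.reshape(m, n))             # optimal shipment plan
print(res.fun)                         # minimum total transport cost
```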
444 Use of Social Media in Political Communications: Example of Facebook
Authors: Havva Nur Tarakci, Bahar Urhan Torun
Abstract:
The transformation that technology, especially internet technology, has brought to every area of life also changes the structure of political communication. The Internet, foremost among the new communication technologies, affects political communication in a way that no traditional communication tool ever has: it enables interaction and a channel between receiver and sender, and it has become one of the most effective tools among political communication applications. This result of technological convergence makes the Internet an indispensable medium for political communication campaigns. Political communication, meaning every kind of communication strategy that political parties, the 'actors of political communication', use with the aim of conveying their opinions and party programmes to their present and potential voters, is today frequently conducted through social media tools. An electorate of diverse composition is informed, directed, and managed through social media. Political parties easily reach their electorate by these tools, without limitations of time or place, and can also gauge the opinions and reactions of their electorate through the interaction that social media affords. In this context, Facebook, the social media platform that political parties use most, is a communication network that has been part of daily life since 2004. As one of the most popular social networks today, it is among the most-visited websites on the global scale. Accordingly, the research is based on the question, "How do political parties use Facebook in the campaigns they conduct during election periods to inform their voters?", and it aims to clarify the Facebook usage practices of political parties. Toward this objective, the official Facebook accounts of four political parties (JDP-AKParti, PDP-BDP, RPP-CHP, NMP-MHP), which reach their voters by social media besides other communication tools, are examined, and a frame for the politics of Turkey is formed. The period of examination is restricted to two weeks in total: one week before and one week after the mayoral elections, when the political parties were assumed to be using their Facebook accounts in full swing. Content analysis is the preferred research method, and the collected texts and visual elements are interpreted on this basis.
Keywords: Facebook, political communications, social media, electorate
443 The Layout Analysis of Handwriting Characters and the Fusion of Multi-style Ancient Books’ Background
Authors: Yaolin Tian, Shanxiong Chen, Fujia Zhao, Xiaoyu Lin, Hailing Xiong
Abstract:
Ancient books are significant inheritors of culture, and their background textures convey potential historical information. However, multi-style texture recovery of ancient books has received little attention. Restricted by insufficient ancient textures and a complex handling process, the generation of ancient textures confronts new challenges. For instance, training without sufficient data usually brings about overfitting or mode collapse, so some of the outputs are prone to be fake. Recently, image generation and style transfer based on deep learning have been widely applied in computer vision. Breakthroughs within the field make it possible to conduct research on multi-style texture recovery of ancient books. Under these circumstances, we propose a system combining layout analysis and image fusion. Firstly, we trained models using Deep Convolutional Generative Adversarial Networks (DCGAN) to synthesize multi-style ancient textures; then, we analyzed layouts based on the Position Rearrangement (PR) algorithm that we propose to adjust the layout structure of the foreground content; at last, we realized our goal by fusing the rearranged foreground texts with the generated background. In experiments, diversified samples such as ancient Yi, Jurchen, and seal script were selected as our training sets. The performance of different fine-tuned models was then gradually improved by adjusting the DCGAN model in both parameters and structure. In order to evaluate the results scientifically, the cross-entropy loss function and the Fréchet Inception Distance (FID) were selected as our assessment criteria. Eventually, we obtained model M8 with the lowest FID score. Compared with the DCGAN model proposed by Radford et al., the FID score of M8 improved by 19.26%, profoundly enhancing the quality of the synthetic images.
Keywords: deep learning, image fusion, image generation, layout analysis
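For readers unfamiliar with the DCGAN architecture mentioned above, the sketch below shows a compact PyTorch generator/discriminator pair for 64x64 grayscale texture patches in the style of Radford et al.; the layer sizes are illustrative and do not reproduce the tuned model M8 reported in the abstract.

```python
import torch
import torch.nn as nn

# Compact DCGAN sketch for 64x64 grayscale texture patches (illustrative
# layer sizes; not the tuned model M8 reported in the paper).
class Generator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),    # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),      # 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),       # 32x32
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh())                                # 64x64

    def forward(self, z):                 # z: (batch, z_dim, 1, 1)
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),                           # 32x32
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),    # 16x16
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),   # 8x8
            nn.Conv2d(256, 1, 8, 1, 0))                                                   # real/fake logit

    def forward(self, x):
        return self.net(x).view(-1)

g, d = Generator(), Discriminator()
fake = g(torch.randn(16, 100, 1, 1))      # 16 synthetic 64x64 texture patches
print(fake.shape, d(fake).shape)          # torch.Size([16, 1, 64, 64]) torch.Size([16])
```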
442 Leveraging Automated and Connected Vehicles with Deep Learning for Smart Transportation Network Optimization
Authors: Taha Benarbia
Abstract:
The advent of automated and connected vehicles has revolutionized the transportation industry, presenting new opportunities for enhancing the efficiency, safety, and sustainability of our transportation networks. This paper explores the integration of automated and connected vehicles into a smart transportation framework, leveraging the power of deep learning techniques to optimize overall network performance. The first aspect addressed in this paper is the deployment of automated vehicles (AVs) within the transportation system. AVs offer numerous advantages, such as reduced congestion, improved fuel efficiency, and increased safety through advanced sensing and decision-making capabilities. The paper delves into the technical aspects of AVs, including their perception, planning, and control systems, highlighting the role of deep learning algorithms in enabling intelligent and reliable AV operations. Furthermore, the paper investigates the potential of connected vehicles (CVs) in creating a seamless communication network between vehicles, infrastructure, and traffic management systems. By harnessing real-time data exchange, CVs enable proactive traffic management, adaptive signal control, and effective route planning. Deep learning techniques play a pivotal role in extracting meaningful insights from the vast amount of data generated by CVs, empowering transportation authorities to make informed decisions for optimizing network performance. The integration of deep learning with automated and connected vehicles paves the way for advanced transportation network optimization. Deep learning algorithms can analyze complex transportation data, including traffic patterns, demand forecasting, and dynamic congestion scenarios, to optimize routing, reduce travel times, and enhance overall system efficiency. The paper presents case studies and simulations demonstrating the effectiveness of deep learning-based approaches in achieving significant improvements in network performance metrics.
Keywords: automated vehicles, connected vehicles, deep learning, smart transportation network
441 Real-Time Detection of Application Layer DDoS Attack Using Log-Based Collaborative Intrusion Detection System
Authors: Farheen Tabassum, Shoab Ahmed Khan
Abstract:
Attacks on networks and critical infrastructures have been on the rise over recent years and appear set to continue. The distributed denial of service (DDoS) attack is the most prevalent and easiest attack on the availability of a service, owing to the cheap availability of large botnets and the general lack of protection against such attacks. An application layer DDoS attack is a DDoS attack targeted at a web server, application server, or database server. These attacks are much more sophisticated and challenging because they get around most conventional network security devices: the attack traffic often impersonates normal traffic and cannot be recognized through network layer anomalies. Conventional single-host security systems are becoming gradually less effective in the face of such complicated and synchronized multi-front attacks. To protect against such attacks and intrusions, cooperation among all network devices is essential. To this end, a collaborative intrusion detection system (CIDS) is proposed in which multiple network devices share valuable information to identify attacks, as a single device might not be capable of sensing malevolent action on its own. This allows decisions to be taken after analyzing information collected from different sources. This novel attack detection technique helps to detect seemingly benign packets that target the availability of critical infrastructure, and the proposed solution methodology enables incident response teams to detect and react to DDoS attacks at the earliest stage, ensuring that service uptime remains unaffected. Experimental evaluation shows that the proposed collaborative detection approach is much more effective and efficient than previous approaches.
Keywords: Distributed Denial-of-Service (DDoS), Collaborative Intrusion Detection System (CIDS), Slowloris, OSSIM (Open Source Security Information Management tool), OSSEC HIDS
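The collaborative principle described above can be illustrated with a minimal alert-correlation routine: several detectors report suspicious sources, and a source is flagged only when multiple devices corroborate it within a time window. The report format and thresholds below are illustrative assumptions, not the paper's actual design.

```python
from collections import defaultdict

# Minimal CIDS-style correlation sketch: detectors (e.g., on the web,
# application, and database servers) share suspicion reports, and a source
# is flagged only when several distinct sensors corroborate it in a window.
def correlate(reports, min_sensors=2, window=60.0):
    """reports: list of (sensor_id, source_ip, timestamp) tuples."""
    by_source = defaultdict(list)
    for sensor, src, ts in reports:
        by_source[src].append((ts, sensor))
    flagged = []
    for src, events in by_source.items():
        events.sort()
        for i, (t0, _) in enumerate(events):
            sensors = {s for t, s in events[i:] if t - t0 <= window}
            if len(sensors) >= min_sensors:
                flagged.append(src)
                break
    return flagged

alerts = [("ids-web", "10.0.0.7", 100.0), ("ids-db", "10.0.0.7", 130.0),
          ("ids-web", "10.0.0.9", 150.0)]
print(correlate(alerts))  # ['10.0.0.7'] - corroborated by two sensors
```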
440 Dynamic Web-Based 2D Medical Image Visualization and Processing Software
Authors: Abdelhalim. N. Mohammed, Mohammed. Y. Esmail
Abstract:
For decades, medical imaging was dominated by the use of costly film media for the review and archival of medical investigations. However, developments in network technologies and the common acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard have enabled another approach based on the World Wide Web. Web technologies have been used successfully in telemedicine applications; here, web technologies are combined with DICOM to design a web-based, open-source DICOM viewer. The web server allows query and retrieval of images, and the images are viewed and manipulated inside a web browser without the need to preinstall any software. The dynamic page for medical image visualization and processing was created using JavaScript and HTML5. The XAMPP Apache server is used to create a local web server for testing and deployment of the dynamic site. The web-based viewer is connected to multiple devices through a local area network (LAN) to distribute the images inside healthcare facilities. The system offers several advantages over ordinary picture archiving and communication systems (PACS): it is easy to install and maintain, platform-independent, displays and manipulates images efficiently, and is user-friendly and easy to integrate with existing systems that already make use of web technologies. A wavelet-based image compression technique is applied, in which the 2-D discrete wavelet transform decomposes the image and the wavelet coefficients are thresholded and then transmitted with entropy encoding, decreasing transmission time, storage cost, and capacity. The performance of the compression was estimated using image quality metrics such as mean square error (MSE), peak signal-to-noise ratio (PSNR), and compression ratio (CR), which reached 83.86% when the 'coif3' wavelet filter was used.
Keywords: DICOM, discrete wavelet transform, PACS, HIS, LAN
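The compression pipeline described above can be sketched with the PyWavelets library: a 2-D DWT with the 'coif3' filter, hard thresholding of the detail coefficients, reconstruction, and the MSE/PSNR quality metrics. The threshold value and the random stand-in image are illustrative; the paper's exact settings are not reproduced.

```python
import numpy as np
import pywt

# Wavelet compression sketch: decompose with 'coif3', hard-threshold the
# detail coefficients, reconstruct, and report quality metrics.
def compress(img, wavelet="coif3", level=3, thresh=10.0):
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    kept, total = 0, 0
    new_coeffs = [coeffs[0]]
    for details in coeffs[1:]:
        band = tuple(pywt.threshold(d, thresh, mode="hard") for d in details)
        new_coeffs.append(band)
        for d in band:
            kept += np.count_nonzero(d); total += d.size
    rec = pywt.waverec2(new_coeffs, wavelet)[: img.shape[0], : img.shape[1]]
    return rec, 1 - kept / total   # reconstruction, fraction of zeroed coeffs

img = np.random.rand(256, 256) * 255          # stand-in for a DICOM slice
rec, cr = compress(img)
mse = np.mean((img - rec) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)
print(f"CR≈{cr:.2%}, MSE={mse:.2f}, PSNR={psnr:.2f} dB")
```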
439 Redox-Mediated Supramolecular Radical Gel
Authors: Sonam Chorol, Sharvan Kumar, Pritam Mukhopadhyay
Abstract:
In biology, supramolecular systems require the use of chemical fuels to stay in sustained nonequilibrium steady states, termed dissipative self-assembly, in contrast to synthetic self-assembly. Biomimicking these natural dynamic systems, some studies have demonstrated artificial self-assembly under nonequilibrium conditions utilizing various forms of energy (fuel) such as chemical, redox, and pH. Naphthalene diimides (NDIs) are well-known organic molecules in supramolecular architectures with high electron affinity, with applications in controlled electron transfer (ET) reactions, etc. Herein, we report the endergonic ET from tetraphenylborate to the highly electron-deficient phosphonium NDI²+ dication to generate the NDI•+ radical. The formation of radicals was confirmed by UV-Vis-NIR absorption spectroscopy. Electron-donor and electron-acceptor energy levels were calculated from experimental electrochemistry and theoretical DFT analysis. The HOMO of the electron donor lies below the LUMO of the electron acceptor, indicating that the electron transfer is endergonic (ΔE°ET < 0). The endergonic ET from NaBPh₄ to the NDI²+ dication is driven thermodynamically by the formation of a coupled biphenyl product, confirmed by GC-MS analysis. An NDI molecule bearing an octyl phosphonium group at the core and H-bond-forming imide moieties at the axial positions forms a gel. The rheological properties of the purified radical ion NDI•+ gels were evaluated. Atomic force microscopy studies reveal the formation of large branching-type networks with a maximum height of 70-80 nm. The endergonic ET from NaBPh₄ to the NDI²+ dication was used to design an assembly and disassembly redox reaction cycle using a reducing agent (NaBPh₄) and an oxidizing agent (Br₂) as chemical fuels. A part of the NaBPh₄ is used to drive assembly, while a fraction is dissipated by forming a useful product. The system returns to the disassembled NDI²+ dication state with the addition of Br₂. We think bioinspired dissipative self-assembly is a promising approach to developing future lifelike materials with autonomous behavior.
Keywords: ionic gel, redox cycle, self-assembly, useful product
438 Radar Fault Diagnosis Strategy Based on Deep Learning
Authors: Bin Feng, Zhulin Zong
Abstract:
Radar systems are critical in modern military, aviation, and maritime operations, and their proper functioning is essential for the success of these operations. However, due to the complexity and sensitivity of radar systems, they are susceptible to various faults that can significantly affect their performance. Traditional radar fault diagnosis strategies rely on expert knowledge and rule-based approaches, which are often limited in effectiveness and require a lot of time and resources. Deep learning has recently emerged as a promising approach for fault diagnosis due to its ability to learn features and patterns from large amounts of data automatically. In this paper, we propose a radar fault diagnosis strategy based on deep learning that can accurately identify and classify faults in radar systems. Our approach uses convolutional neural networks (CNNs) to extract features from radar signals and classify faults from those features. The proposed strategy is trained and validated on a dataset of measured radar signals with various types of faults, and the results show that it achieves high accuracy in fault diagnosis. To further evaluate the effectiveness of the proposed strategy, we compare it with traditional rule-based approaches and other machine learning-based methods, including decision trees, support vector machines (SVMs), and random forests. The results demonstrate that our deep learning-based approach outperforms the traditional approaches in terms of accuracy and efficiency. Finally, we discuss the potential applications and limitations of the proposed strategy, as well as future research directions. Our study highlights the importance and potential of deep learning for radar fault diagnosis and suggests that it can be a valuable tool for improving the performance and reliability of radar systems. In summary, this paper presents a radar fault diagnosis strategy based on deep learning that achieves high accuracy and efficiency in identifying and classifying faults in radar systems. The proposed strategy has significant potential for practical applications and can pave the way for further research.
Keywords: radar system, fault diagnosis, deep learning, radar fault
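As a rough illustration of the approach described above, the sketch below defines a small 1-D CNN that maps a window of radar signal samples to one of several fault classes; the architecture, signal length, and class count are assumptions for illustration, not the paper's trained network.

```python
import torch
import torch.nn as nn

# Illustrative 1-D CNN fault classifier: radar signal window in, fault
# class logits out. Layer sizes and class count are assumptions.
class RadarFaultCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                  # x: (batch, 1, signal_length)
        return self.classifier(self.features(x).squeeze(-1))

model = RadarFaultCNN()
signals = torch.randn(8, 1, 1024)          # stand-in for measured radar signals
logits = model(signals)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 4, (8,)))
loss.backward()                            # gradients for one training step
```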
437 [Keynote Speech]: Curiosity, Innovation and Technological Advancements Shaping the Future of Science, Technology, Engineering and Mathematics Education
Authors: Ana Hol
Abstract:
We live in a constantly changing environment where technology has become an integral component of our day-to-day life. We rely heavily on mobile devices, we search for data via the web, we utilise smart home sensors to create the most suitable ambiences, and we utilise applications to shop, research, communicate and share data. Heavy reliance on technology is therefore creating new connections between the STEM (Science, Technology, Engineering and Mathematics) fields, which in turn raises the question of what the STEM education of the future should be like. This study was based on the reviews of six Australian Information Systems students who undertook an international study tour to India, where they were given an opportunity to network, communicate and meet local students, staff and business representatives, and from them learn about local business implementations, customs and regulations. The research identifies that if we are to continue to implement and utilise electronic devices on the global scale, for example smart cars that can smoothly cross borders, we will need a workforce with knowledge about the cars themselves, their parts, roads and transport networks, road rules, road sensors, road monitoring technologies, graphical user interfaces, movement detection systems, as well as the day-to-day operations, legal rules and regulations of each region and country, insurance policies, policing and processes, so that the wide array of sensors can be controlled across countries' borders. In conclusion, it can be noted that allowing students to learn about local conditions, roads, operations, business processes, customs and values in different countries gives students a cutting-edge advantage, as such knowledge cannot be transferred via electronic sources alone. However, once an understanding of each problem or project is established, multidisciplinary innovative STEM projects can be smoothly conducted.
Keywords: STEM, curiosity, innovation, advancements
436 Spatial and Temporal Evaluations of Disinfection By-Products Formation in Coastal City Distribution Systems of Turkey
Authors: Vedat Uyak
Abstract:
Seasonal variations of trihalomethane (THM) and haloacetic acid (HAA) concentrations were investigated within three distribution systems of the coastal city of Istanbul, Turkey. Total trihalomethane and other organics concentrations were also analyzed. The investigation was based on an intensive 16-month (2009-2010) sampling program undertaken during the spring, summer, fall and winter seasons. Four THM species (chloroform, dichlorobromomethane, chlorodibromomethane, bromoform) and nine HAA species (the most commonly occurring being dichloroacetic acid (DCAA) and trichloroacetic acid (TCAA); the other compounds are monochloroacetic acid (MCAA), monobromoacetic acid (MBAA), dibromoacetic acid (DBAA), tribromoacetic acid (TBAA), bromochloroacetic acid (BCAA), bromodichloroacetic acid (BDCAA) and chlorodibromoacetic acid (CDBAA)), along with other water quality and operational parameters, were monitored at points along the distribution system between the treatment plant and the system's extremity. The effects of coastal water sources and of seasonal and spatial variation were examined. The results showed that THM and HAA concentrations vary significantly between treated waters and water in the distribution networks. When the water temperature exceeds 26°C in summer, THM and HAA levels are 0.8-1.1 and 0.4-0.9 times higher than in treated water, respectively, while when the water temperature is below 12°C in winter, the measured THM and HAA concentrations at the system's extremity were very rarely higher than 100 μg/L and 60 μg/L, respectively. The highest THM concentrations occurred in the Buyukcekmece distribution system, with an average total THM concentration of 92 μg/L, while the lowest THM levels were observed in the Omerli distribution network, with a mean concentration of 7 μg/L. For HAAs, the maximum concentrations were again observed in the Buyukcekmece distribution system, with an average total HAA concentration of 57 μg/L. The high spatial and seasonal variation of disinfection by-products in the drinking water of Istanbul was attributed to illegal wastewater discharges into the water supplies of the city.
Keywords: disinfection byproducts, drinking water, trihalomethanes, haloacetic acids, seasonal variation
435 Lineup Optimization Model of Basketball Players Based on the Prediction of Recursive Neural Networks
Authors: Wang Yichen, Haruka Yamashita
Abstract:
In recent years, in the field of sports, decision making, such as game lineups and game strategy based on the analysis of accumulated sports data, has been widely attempted. In fact, in the NBA basketball league, where the world's highest-level players gather, teams analyze data using various statistical techniques in order to win games. However, it is difficult to analyze game data for each play, such as ball tracking or the motion of players in the game, because the situation of the game changes rapidly and the structure of the data is complicated. Therefore, an analysis method for real-time game play data is needed. In this research, we propose an analytical model for determining the optimal lineup composition using real-time play data, a task considered difficult for all coaches. Because replacing the entire lineup is too complicated, the practical questions for player replacement are 'whether or not the lineup should be changed' and 'whether or not a Small Ball lineup should be adopted'. We therefore propose an analytical model for the optimal player selection problem based on Small Ball lineups. In basketball, scoring data can be accumulated for each play, indicating a player's contribution to the game, and this scoring data can be considered time series data. In order to compare the importance of players in different situations and lineups, we combine an RNN (Recurrent Neural Network) model, which can analyze time series data, with an NN (Neural Network) model, which can analyze the situation on the court, to build a prediction model of the score. This model is capable of identifying the current optimal lineup for different situations. In this research, we collected the accumulated NBA data for 2019-2020 and applied the method to actual basketball play data to verify the reliability of the proposed model.
Keywords: recurrent neural network, players lineup, basketball data, decision making model
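The combined RNN-plus-NN architecture described above might look like the following sketch: a recurrent branch encodes the scoring time series, a feed-forward branch encodes the current game situation, and the merged representation predicts the score contribution. The GRU choice, feature sizes, and input shapes are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

# Illustrative RNN + NN score predictor: a GRU over the per-play scoring
# history, merged with a feed-forward encoding of the game situation.
class LineupScoreModel(nn.Module):
    def __init__(self, n_situation_feats=10, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.situation = nn.Sequential(nn.Linear(n_situation_feats, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, score_seq, situation):
        # score_seq: (batch, time, 1) scoring history; situation: (batch, feats)
        _, h = self.rnn(score_seq)
        merged = torch.cat([h[-1], self.situation(situation)], dim=1)
        return self.head(merged)           # predicted score contribution

model = LineupScoreModel()
pred = model(torch.randn(5, 20, 1), torch.randn(5, 10))
# Comparing predictions across candidate lineups (e.g., Small Ball vs. not)
# and picking the highest predicted contribution gives the lineup decision.
```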
434 Calibration and Validation of ArcSWAT Model for Estimation of Surface Runoff and Sediment Yield from Dhangaon Watershed
Authors: M. P. Tripathi, Priti Tiwari
Abstract:
The Soil and Water Assessment Tool (SWAT) is a distributed-parameter, continuous-time model and was tested on daily and fortnightly bases for a small agricultural watershed (Dhangaon) of Chhattisgarh state in India. The SWAT model has recently been interfaced with ArcGIS and is called ArcSWAT. The watershed and sub-watershed boundaries, drainage networks, slope, and texture maps were generated in the ArcGIS environment of ArcSWAT. A supervised classification method was used for land use/cover classification from satellite imagery of the years 2009 and 2012. Manning's roughness coefficient 'n' for overland and channel flow and the Fraction of Field Capacity (FFC) were calibrated for the monsoon seasons of 2009 and 2010. The model was validated on a daily basis for the years 2011 and 2012 using the observed daily rainfall and temperature data. Calibration and validation results revealed that the model predicted daily surface runoff and sediment yield satisfactorily. Sensitivity analysis showed that the annual sediment yield was inversely proportional to the overland and channel 'n' values, whereas annual runoff and sediment yields were directly proportional to the FFC. The model was also calibrated and validated for fortnightly runoff and sediment yield for the years 2009-10 and 2011-12, respectively. Simulated values of fortnightly runoff and sediment yield for the calibration and validation years compared well with their observed counterparts. The calibration and validation results revealed that the ArcSWAT model could be used for the identification of critical sub-watersheds and for developing management scenarios for the Dhangaon watershed. Further, the model should be tested for simulating surface runoff and sediment yield using generated rainfall and temperature data before applying it to develop management scenarios for the critical or priority sub-watersheds.
Keywords: watershed, hydrologic and water quality, ArcSWAT model, remote sensing, GIS, runoff and sediment yield
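One common way to quantify the kind of calibration/validation fit reported above is the Nash-Sutcliffe efficiency (NSE) between observed and simulated runoff or sediment yield, where NSE = 1 indicates a perfect fit. The abstract does not name its fit statistic, so the metric choice and the sample values below are assumptions.

```python
import numpy as np

# Nash-Sutcliffe efficiency, a standard goodness-of-fit measure for
# hydrologic model calibration (assumed metric; values are illustrative).
def nash_sutcliffe(observed, simulated):
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

obs = [2.1, 5.4, 3.3, 0.8, 4.9]   # illustrative observed daily runoff (mm)
sim = [1.9, 5.0, 3.6, 1.0, 4.5]   # illustrative simulated daily runoff (mm)
print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")
```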
433 The Role of Social Influences and Cultural Beliefs on Perceptions of Postpartum Depression among Mexican Origin Mothers in San Diego
Authors: Mireya Mateo Gomez
Abstract:
The purpose of this study was to examine the perceptions that first-generation Mexican origin mothers living in San Diego have of postpartum depression (PPD), with a special focus on the social influences and cultural beliefs behind those perceptions. The study also aimed to examine possible PPD help-seeking behaviors that first-generation Mexican origin mothers may perform. The Health Belief Model (HBM) and Social Ecological Model (SEM) were the guiding theoretical frameworks for this study. Data were collected from three focus groups, four in-depth interviews, and the distribution of an acculturation survey (ARSMA II). There were a total of 15 participants; the participants' mean age was 45, and the mean age at migration to the United States was 22. Most participants identified as married, were born in Southern or Western Mexico, and showed a strong Mexican identity in relation to the ARSMA survey. Participants identified four salient PPD perceptions corresponding to the interpersonal level of the SEM: 1) PPD affecting the identity of motherhood; 2) PPD being a natural part of a mother's experience but mitigated by networks; 3) PPD being a U.S. phenomenon due to family and community breakdown; and 4) natural remedies as a preferred PPD treatment. Regarding themes relating to help-seeking behaviors, participants identified seven: 1) seeking help from immediate family members; 2) practicing home remedies; 3) seeking help from a medical professional; 4) obtaining help from a clinic or organization; 5) seeking help from God; 6) participating in PPD support groups; and 7) talking to a friend. It was evident in this study that postpartum depression is not a well-discussed topic within the Mexican immigrant population. In relation to the role culture and social influences have in PPD perceptions, most participants shared hearing or learning about PPD from their family members or friends. Participants also stated they would seek help from family members if diagnosed with PPD and would seek out home remedies. This study also provides suggestions to increase the awareness of PPD among the Mexican immigrant community.
Keywords: cultural beliefs, health belief model, Mexican origin mothers, perceptions, postpartum depression, social ecological model
432 A Modular Solution for Large-Scale Critical Industrial Scheduling Problems with Coupling of Other Optimization Problems
Authors: Ajit Rai, Hamza Deroui, Blandine Vacher, Khwansiri Ninpan, Arthur Aumont, Francesco Vitillo, Robert Plana
Abstract:
Large-scale critical industrial scheduling problems are based on Resource-Constrained Project Scheduling Problems (RCPSP) that necessitate integration with other optimization problems (e.g., vehicle routing, supply chain, or unique industrial ones), thus requiring practical solutions (i.e., modular and computationally efficient, with feasible solutions). To the best of our knowledge, the current industrial state of the art does not address this holistic problem. We propose an original modular solution that answers the issues exhibited by the delivery of complex projects. With three interlinked entities (project, tasks, resources), each having its own constraints, it uses a greedy heuristic with a dynamic cost function for each task and a situational assessment at each time step. It handles large-scale data and can be easily integrated with other optimization problems, already existing industrial tools, and unique constraints as required by the use case. The solution has been tested and validated by domain experts on three use cases: outage management in Nuclear Power Plants (NPPs), planning of a future NPP maintenance operation, and an application in the defense industry on supply chain and factory relocation. In the first use case, the solution, in addition to resource availability and the tasks' logical relationships, also integrates several project-specific constraints for outage management, such as handling resource incompatibility, updating task priorities, pausing tasks in specific circumstances, and adjusting dynamic units of resources. With more than 20,000 tasks and multiple constraints, the solution provides a feasible schedule within 10-15 minutes on a standard computer. This time-effective simulation corresponds with the nature of the problem and the requirement of several scenarios (30-40 simulations) before finalizing the schedules. The second use case is a factory relocation project where production lines must be moved to a new site while ensuring the continuity of their production. This generates the challenge of merging job-shop scheduling and the RCPSP with location constraints. Our solution allows the automation of the production tasks while considering the expected production rate; the simulation algorithm manages the use and movement of resources and products to respect a given relocation scenario. The last use case concerns a future maintenance operation in an NPP. The project contains complex and hard constraints, such as Finish-Start precedence relationships (i.e., successor tasks have to start immediately after their predecessors while respecting all constraints), shareable coactivity for managing workspaces, and requirements for specific states of 'cyclic' resources (which can have multiple possible states, with only one at a time) to perform tasks (which can require unique combinations of several cyclic resources). Our solution satisfies the requirement of minimizing the state changes of cyclic resources coupled with makespan minimization; it produces a solution for 80 cyclic resources with 50 incompatibilities between levels in less than a minute. In conclusion, we propose a fast and feasible modular approach to various industrial scheduling problems, validated by domain experts and compatible with existing industrial tools. This approach can be further enhanced by applying machine learning techniques to historically repeated tasks to gain further insights for delay risk mitigation measures.
Keywords: deterministic scheduling, optimization coupling, modular scheduling, RCPSP
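The greedy core of such a heuristic can be sketched in a few lines: at each time step, start the most attractive eligible task whose predecessors are complete and whose resource demand fits the remaining capacity. The static priority used below is a placeholder for the paper's dynamic, situation-aware cost function, and the task data are invented.

```python
# Greedy list-scheduling sketch for an RCPSP-style problem (illustrative;
# the real solution's dynamic cost function and constraints are richer).
def greedy_schedule(tasks, capacity):
    """tasks: {name: (duration, resource_demand, predecessors, priority)}"""
    time, running, done, start = 0, {}, set(), {}
    while len(done) < len(tasks):
        # Retire tasks whose duration has elapsed
        for name in [n for n, end in running.items() if end <= time]:
            done.add(name); running.pop(name)
        used = sum(tasks[n][1] for n in running)
        # Eligible: not yet started, predecessors done, resources available
        eligible = [n for n, (d, r, preds, p) in tasks.items()
                    if n not in done and n not in running and n not in start
                    and set(preds) <= done and used + r <= capacity]
        for n in sorted(eligible, key=lambda n: tasks[n][3]):  # dynamic cost stub
            if sum(tasks[m][1] for m in running) + tasks[n][1] <= capacity:
                running[n] = time + tasks[n][0]; start[n] = time
        time += 1
    return start

tasks = {"A": (3, 2, [], 1), "B": (2, 2, [], 2), "C": (2, 1, ["A"], 1)}
print(greedy_schedule(tasks, capacity=3))  # start times, e.g. {'A': 0, 'C': 3, 'B': 3}
```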
431 Amrita Bose-Einstein Condensate Solution Formed by Gold Nanoparticles Laser Fusion and Atmospheric Water Generation
Authors: Montree Bunruanses, Preecha Yupapin
Abstract:
In this work, the quantum material called Amrita (elixir) is made by reducing gold top-down into nanometer particles by fusing 99% pure gold with a laser and mixing it with drinking water produced by an atmospheric water generation (AWG) system, which makes water from air. The high laser power destroys the four natural force bindings, from the gravitational, weak, and electromagnetic to the strong coupling force, finally yielding purified Bose-Einstein condensate (BEC) states. With this method, gold atoms in the form of spherical single crystals with a diameter of 30-50 nanometers are obtained and used. They are modulated (activated) with a frequency generator into various matrix structures mixed with AWG water to be used in the upstream conversion (quantum reversible) process, which can be applied to humans both internally and externally, by drinking or by application to treated surfaces. Acting on both space (body) and time (mind) will go back to the origin and start again from the coupling of space-time on both sides of time at fusion (strong coupling force) and push out (Big Bang) at the equilibrium point (singularity), which occurs as strings and DNA with neutrinos as coupling energy. There is no distortion (purification), which is the point where time and space have not yet been determined and there is infinite energy; therefore, the upstream conversion is performed, reforming DNA to purify it. The use of Amrita is a method for people who cannot meditate (quantum meditation). It was applied in various cases, where the results show that Amrita can return the body and the mind to their pure origins and begin the downstream process with the Big Bang movement, quantum communication in all dimensions, DNA reformation, frequency filtering, crystal body forming, broadband quantum communication networks, black hole forming, quantum consciousness, body and mind healing, etc.
Keywords: quantum materials, quantum meditation, quantum reversible, Bose-Einstein condensate
430 Image Processing-Based Maize Disease Detection Using Mobile Application
Authors: Nathenal Thomas
Abstract:
Corn, also known as maize, which goes by the scientific name Zea mays subsp., is a widely produced agricultural product in the food chain and in many other agricultural products. Corn is highly adaptable: it comes in many different types, is employed in many different industrial processes, and adapts well to different agro-climatic situations. In Ethiopia, maize is among the most widely grown crops, and small-scale corn farming may be a household's only source of food. These facts demonstrate that the country's requirement for this crop is very high while, conversely, the crop's productivity is very low for a variety of reasons. The most damaging factor that greatly contributes to this imbalance between the crop's supply and demand is corn disease. The failure to diagnose diseases in maize plants until it is too late is one of the most important factors influencing crop output in Ethiopia. This study will aid in the early detection of such diseases and support farmers during the cultivation process, directly affecting the amount of maize produced. Diseases of maize plants, such as northern leaf blight and cercospora leaf spot, have distinct, visible symptoms. This study aims to detect the most frequent and damaging maize diseases using deep learning, an efficient subset of machine learning, applied to image processing. Deep learning uses networks that can be trained from unlabeled data without supervision (unsupervised); it simulates, in part, the processing the human brain performs when digesting data. Its applications include speech recognition, language translation, object classification, and decision making. The Convolutional Neural Network (CNN), also known as a ConvNet, is a deep learning class widely used for image classification, object detection, face recognition, and other problems; this study uses it as the state-of-the-art algorithm to detect maize diseases from photographs of maize leaves taken with a mobile phone.
Keywords: CNN, Zea mays subsp., leaf blight, cercospora leaf spot
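A common way to build such a leaf-image classifier with limited data is transfer learning; the sketch below adapts a pretrained MobileNetV2 backbone to the maize classes named above. The abstract does not specify its network, so the backbone choice, class list, and training details are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Transfer-learning sketch (assumed approach; the abstract does not name a
# backbone): pretrained MobileNetV2 with a new head for maize classes.
classes = ["healthy", "northern_leaf_blight", "cercospora_leaf_spot"]
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
for p in model.parameters():              # freeze the pretrained backbone
    p.requires_grad = False
model.classifier[1] = nn.Linear(model.last_channel, len(classes))

# Preprocessing to apply to real phone photos of leaves before inference
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])

# One training step on a dummy batch standing in for labeled leaf photos
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, len(classes), (4,))
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
```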
429 Environmental Performance Measurement for Network-Level Pavement Management
Authors: Jessica Achebe, Susan Tighe
Abstract:
The recent Canadian infrastructure report card reveals the unhealthy state of municipal infrastructure and the intensified challenge faced by municipalities to maintain adequate infrastructure performance thresholds and meet users' required service levels. For a road agency, the huge funding gap is inflated by growing concerns about the environmental repercussions of road construction, operation, and maintenance activities. Reducing material consumption and greenhouse gas emissions when maintaining and rehabilitating road networks can achieve added benefits, including improved life-cycle performance of pavements, reduced climate change impacts and human health effects due to less air pollution, improved productivity due to optimal allocation of resources, and reduced road user costs. Incorporating environmental sustainability measures into pavement management is a widely cited and studied solution. However, measuring the environmental performance of road networks is still far from standard practice in road network management, and ostensive agency-wide environmental sustainability or sustainable maintenance specifications are missing. To address this challenge, the present research focuses on the environmental sustainability performance of network-level pavement management. The ultimate goal is to develop a framework to incorporate environmental sustainability into pavement management systems for network-level maintenance programming. In order to achieve this goal, this study reviewed previous studies that employed environmental performance measures, as well as the suitability of environmental performance indicators for evaluating the sustainability of network-level pavement maintenance strategies. Through an industry practice survey, this paper provides a brief overview of pavement managers' motivations and barriers to making more sustainable decisions, and of the data needed to support network-level environmental sustainability. Trends in network-level sustainable pavement management are also presented, existing gaps are highlighted, and ideas are proposed for sustainable network-level pavement management.
Keywords: pavement management, sustainability, network-level evaluation, environment measures
428 Robustness of the Deep Chroma Extractor and Locally-Normalized Quarter Tone Filters in Automatic Chord Estimation under Reverberant Conditions
Authors: Luis Alvarado, Victor Poblete, Isaac Gonzalez, Yetzabeth Gonzalez
Abstract:
In MIREX 2016 (http://www.music-ir.org/mirex), the deep neural network (DNN)-based Deep Chroma Extractor, proposed by Korzeniowski and Widmer, reached the highest score in an audio chord recognition task. In the present paper, this tool is assessed under reverberant acoustic environments and distinct source-microphone distances. The evaluation dataset comprises The Beatles and Queen datasets. These datasets were sequentially re-recorded with a single microphone in a real reverberant chamber at four reverberation times (0 s -anechoic- and approximately 1, 2, and 3 s), as well as four source-microphone distances (32, 64, 128, and 256 cm). It is expected that the performance of the trained DNN will decrease dramatically under these acoustic conditions, with signals degraded by room reverberation and distance to the source. Recently, the effect of the bio-inspired Locally-Normalized Cepstral Coefficients (LNCC) has been assessed in a text-independent speaker verification task using speech signals degraded by additive noise at different signal-to-noise ratios with variations of recording distance, and it has also been assessed under reverberant conditions with variations of recording distance. LNCC showed performance as high as the state-of-the-art Mel Frequency Cepstral Coefficient filters. Based on these results, this paper proposes a variation of locally-normalized triangular filters called Locally-Normalized Quarter Tone (LNQT) filters. By using the LNQT spectrogram, robustness improvements of the trained Deep Chroma Extractor are expected compared with classical triangular filters, thus compensating for the music signal degradation and improving the accuracy of the chord recognition system.
Keywords: chord recognition, deep neural networks, feature extraction, music information retrieval
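A simplified reading of the proposed LNQT idea is a triangular filterbank whose center frequencies are spaced one quarter tone (a factor of 2^(1/24)) apart, followed by a local normalization of each band by the energy of its neighbors. The sketch below implements that reading in NumPy; the paper's exact filter and normalization definitions are not reproduced, and all parameter values are illustrative.

```python
import numpy as np

# Quarter-tone triangular filterbank: 24 center frequencies per octave.
def lnqt_filterbank(sr=22050, n_fft=4096, fmin=65.4, n_filters=170):
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    centers = fmin * 2.0 ** (np.arange(n_filters + 2) / 24.0)  # quarter tones
    fb = np.zeros((n_filters, len(freqs)))
    for i in range(n_filters):
        lo, c, hi = centers[i], centers[i + 1], centers[i + 2]
        up = (freqs - lo) / (c - lo)
        down = (hi - freqs) / (hi - c)
        fb[i] = np.clip(np.minimum(up, down), 0, None)  # triangular response
    return fb

# Simplified local normalization: divide each band's energy by the mean
# energy of its neighborhood of bands.
def locally_normalize(band_energies, width=5):
    kernel = np.ones(width) / width
    local = np.convolve(band_energies, kernel, mode="same")
    return band_energies / (local + 1e-10)

fb = lnqt_filterbank()
spectrum = np.abs(np.fft.rfft(np.random.randn(4096))) ** 2  # toy power spectrum
print(locally_normalize(fb @ spectrum)[:8])
```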
427 A Case Study: Social Network Analysis of Construction Design Teams
Authors: Elif D. Oguz Erkal, David Krackhardt, Erica Cochran-Hameen
Abstract:
Even though social network analysis (SNA) is an abundantly studied concept for many organizations and industries, a clear SNA approach to project teams has not yet been adopted by the construction industry. The main challenges for performing SNA in construction, and the apparent reasons for this gap, are the unique and complex structure of each construction project, the comparatively high turnover of project team members and contributing parties, and the variety of authentic problems in each project. Additionally, stakeholders from a variety of professional backgrounds collaborate in a high-stress environment fueled by time and cost constraints. Within this case study on Project RE, a design-build project performed at the Urban Design Build Studio of Carnegie Mellon University, social network analysis of the project design team is performed with the main goal of applying social network theory to construction project environments. The research objective is to determine correlations between the network of how individuals relate to each other, their perceptions of their own professional strengths and weaknesses, the communication patterns within the team, and the group dynamics. Data are collected through a survey performed over four monthly rounds, detailed follow-up interviews, and constant observation to assess the natural alteration of the network over time. The data collected are processed by means of network analytics and in light of the qualitative data collected through observations and individual interviews. This paper presents a full ethnography of this construction design team of fourteen architecture students, based on an elaborate social network data analysis over time. This study is expected to serve as an initial step toward refined, targeted, and large-scale social network data collection in construction projects, in order to deduce the impacts of social networks on project performance and suggest better collaboration structures for construction project teams henceforth.
Keywords: construction design teams, construction project management, social network analysis, team collaboration, network analytics
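The kind of network analytics used in such a study can be illustrated with the networkx library: build a directed communication network from one survey round and compute centrality measures that identify who is sought out and who brokers communication. The edge list below is invented, not Project RE's data.

```python
import networkx as nx

# Build a directed communication network from one survey round:
# an edge (u, v) means u reports communicating with v.
edges = [("Ana", "Ben"), ("Ana", "Cai"), ("Ben", "Cai"),
         ("Cai", "Dee"), ("Dee", "Ana"), ("Eli", "Cai")]
G = nx.DiGraph(edges)

in_deg = nx.in_degree_centrality(G)         # who is sought out most
betweenness = nx.betweenness_centrality(G)  # who brokers communication
print(sorted(betweenness.items(), key=lambda kv: -kv[1]))

# Repeating this per monthly survey round shows how the network and its
# key brokers shift over the four months of data collection.
```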
426 Effects of Artificial Nectar Feeders on Bird Distribution and Erica Visitation Rate in the Cape Fynbos
Authors: Monique Du Plessis, Anina Coetzee, Colleen L. Seymour, Claire N. Spottiswoode
Abstract:
Artificial nectar feeders are used to attract nectarivorous birds to gardens and are increasing in popularity. The costs and benefits of these feeders remain controversial, however. Nectar feeders may have positive effects by attracting nectarivorous birds towards suburbia, facilitating their urban adaptation, and supplementing bird diets when floral resources are scarce. However, this may come at the cost of luring them away from the plants they pollinate in neighboring indigenous vegetation. This study investigated the effect of nectar feeders on an African pollinator-plant mutualism. Given that birds are important pollinators of many fynbos plant species, the study was conducted in gardens and natural vegetation along the urban edge of the Cape Peninsula. Feeding experiments were carried out to compare relative bird abundance and local distribution patterns of nectarivorous birds (i.e., sunbirds and sugarbirds) between feeder and control treatments. Resulting changes in their visitation rates to Erica flowers in the natural vegetation were tested by inspection of the flowers' anther ring status. Nectar feeders attracted higher densities of nectarivores to gardens relative to natural vegetation and decreased their densities in the neighboring fynbos, even when floral abundance in the neighboring vegetation was high. The consequent changes to their distribution patterns and foraging behavior decreased their visitation to at least Erica plukenetii flowers (but not to Erica abietina). This study provides evidence that nectar feeders may have positive effects for the birds themselves by reducing their urban sensitivity, but it also highlights the unintended negative effects feeders may have on the surrounding fynbos ecosystem. Given that nectar feeders appear to compete with the flowers of Erica plukenetii, and perhaps those of other Erica species, artificial feeding may inadvertently threaten bird-plant pollination networks.
Keywords: avian nectarivores, bird feeders, bird pollination, indirect effects in human-wildlife interactions, sugar water feeders, supplementary feeding
425 Machine Learning in Agriculture: A Brief Review
Authors: Aishi Kundu, Elhan Raza
Abstract:
"Necessity is the mother of invention" - Rapid increase in the global human population has directed the agricultural domain toward machine learning. The basic need of human beings is considered to be food which can be satisfied through farming. Farming is one of the major revenue generators for the Indian economy. Agriculture is not only considered a source of employment but also fulfils humans’ basic needs. So, agriculture is considered to be the source of employment and a pillar of the economy in developing countries like India. This paper provides a brief review of the progress made in implementing Machine Learning in the agricultural sector. Accurate predictions are necessary at the right time to boost production and to aid the timely and systematic distribution of agricultural commodities to make their availability in the market faster and more effective. This paper includes a thorough analysis of various machine learning algorithms applied in different aspects of agriculture (crop management, soil management, water management, yield tracking, livestock management, etc.).Due to climate changes, crop production is affected. Machine learning can analyse the changing patterns and come up with a suitable approach to minimize loss and maximize yield. Machine Learning algorithms/ models (regression, support vector machines, bayesian models, artificial neural networks, decision trees, etc.) are used in smart agriculture to analyze and predict specific outcomes which can be vital in increasing the productivity of the Agricultural Food Industry. It is to demonstrate vividly agricultural works under machine learning to sensor data. Machine Learning is the ongoing technology benefitting farmers to improve gains in agriculture and minimize losses. This paper discusses how the irrigation and farming management systems evolve in real-time efficiently. Artificial Intelligence (AI) enabled programs to emerge with rich apprehension for the support of farmers with an immense examination of data.Keywords: machine Learning, artificial intelligence, crop management, precision farming, smart farming, pre-harvesting, harvesting, post-harvesting
424 3D Modeling of Flow and Sediment Transport in Tanks with the Influence of Cavity
Authors: A. Terfous, Y. Liu, A. Ghenaim, P. A. Garambois
Abstract:
With increasing urbanization worldwide, it is crucial to sustainably manage sediment flows in urban networks and especially in stormwater detention basins. One key aspect is to propose optimized designs for detention tanks in order to best reduce flood peak flows while settling particles. It is, therefore, necessary to understand the complex flow patterns and sediment deposition conditions in stormwater detention basins. The aim of this paper is to study the flow structure and particle deposition pattern for a given tank geometry with a view to controlling and maximizing sediment deposition. Both numerical simulation and experimental work were carried out to investigate the flow and sediment distribution in a storm tank with a cavity. The settling distribution of particles in a rectangular tank is mainly determined by the flow patterns and the bed shear stress. The flow patterns in a rectangular tank differ with geometry, inlet flow rate, and water depth, and as the flow patterns change, the bed shear stress changes accordingly, which in turn influences particle settling. The accumulation of particles on the bed changes the conditions at the bottom; this is usually ignored in investigations, but it deserves much more attention, as its influence on sedimentation can be important. The approach presented here is based on the resolution of the Reynolds-averaged Navier-Stokes equations to account for turbulent effects, together with a passive particle transport model. An analysis of particle deposition conditions is presented in this paper in terms of flow velocities and turbulence patterns. Sediment deposition zones are then identified through modeling with a particle tracking method. It is shown that two recirculation zones seem to significantly influence sediment deposition. Due to the possible overestimation of particle trap efficiency with standard wall functions and stick conditions, further investigation seems required into basal boundary conditions based on turbulent kinetic energy and shear stress. These observations are confirmed by experimental investigations conducted in the laboratory.
Keywords: storm sewers, sediment deposition, numerical simulation, experimental investigation
423 Two-Level Graph Causality to Detect and Predict Random Cyber-Attacks
Authors: Van Trieu, Shouhuai Xu, Yusheng Feng
Abstract:
Tracking attack trajectories can be difficult when information about the nature of the attack is limited. It is even more difficult when attack information is collected by Intrusion Detection Systems (IDSs), because current IDSs have limitations in identifying malicious and anomalous traffic. Moreover, IDSs only flag suspicious events; they do not show how the events relate to each other or which event may have caused another. It is therefore important to investigate new methods that can track attack trajectories quickly, with less attack information and less dependency on IDSs, in order to prioritize actions during incident response. This paper proposes a two-level graph causality framework for tracking attack trajectories in internet networks, leveraging observable malicious behaviors to detect the attack events most likely to cause other events in the system. Technically, given a time series of malicious events, the framework extracts events with useful features, such as attack time and port number, and applies conditional independence tests to detect relationships between attack events. Using academic datasets collected by IDSs, experimental results show that the framework can quickly detect causal pairs that offer meaningful insights into the nature of the network, given only reasonable restrictions on network size and structure. Without the framework's guidance, such insights could not be discovered by existing tools such as IDSs, and would cost expert human analysts significant time, if they could be found at all. The computational results from the proposed two-level graph model reveal clear patterns and trends: more than 85% of causal pairs, in both computed and observed data, show an average time difference between cause and effect events of under 5 minutes. This result can be used as a preventive measure against future attacks. Although the forecast window is short, from 0.24 seconds to 5 minutes, it is long enough to design a prevention protocol that blocks those attacks.
Keywords: causality, multilevel graph, cyber-attacks, prediction
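The conditional independence step can be illustrated with a simple lagged partial-correlation test on binned event counts; this is a hedged stand-in for the paper's procedure, run on a synthetic event stream in which type-A attacks trigger type-B events one minute later.

# Illustrative conditional independence test between two attack-event series;
# the data are synthetic and the test is a simple partial-correlation stand-in.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
T = 2000                                # 1-minute bins
a = rng.poisson(0.2, T).astype(float)   # type-A events per bin
b = np.array([rng.poisson(0.1 + 0.5 * (a[t - 1] if t else 0)) for t in range(T)], dtype=float)

def partial_corr(x, y, z):
    # Correlation of x and y after regressing out z from both
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return stats.pearsonr(rx, ry)

# Is B(t) dependent on A(t-1) given B(t-1)?  A small p-value suggests a causal pair.
r, p = partial_corr(a[:-1], b[1:], b[:-1])
print(f"lagged partial correlation r = {r:.3f}, p = {p:.2e}")

Repeating such a test over all event pairs and lags, and keeping only the significant ones, yields the kind of causal-pair graph the framework builds at its first level.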
Procedia PDF Downloads 157
422 On Cloud Computing: A Review of the Features
Authors: Assem Abdel Hamed Mousa
Abstract:
The Internet of Things probably already influences your life, and if it doesn't, it soon will, say computer scientists. Ubiquitous computing names the third wave in computing, just now beginning. First were mainframes, each shared by many people. Now we are in the personal computing era, person and machine staring uneasily at each other across the desktop. Next comes ubiquitous computing, or the age of calm technology, when technology recedes into the background of our lives. Alan Kay of Apple calls this "Third Paradigm" computing. Ubiquitous computing is essentially the term for human interaction with computers in virtually everything, and it is roughly the opposite of virtual reality. Where virtual reality puts people inside a computer-generated world, ubiquitous computing forces the computer to live out here in the world with people. Virtual reality is primarily a horsepower problem; ubiquitous computing is a very difficult integration of human factors, computer science, engineering, and social sciences. The approach: activate the world. Provide hundreds of wireless computing devices per person per office, of all scales (from 1" displays to wall-sized ones). This has required new work in operating systems, user interfaces, networks, wireless communication, displays, and many other areas. This work, called "ubiquitous computing", is different from PDAs, dynabooks, or information at your fingertips: it is invisible, everywhere computing that does not live on a personal device of any sort but is in the woodwork everywhere. The initial incarnation of ubiquitous computing took the form of "tabs", "pads", and "boards" built at Xerox PARC from 1988 to 1994; several papers describe this work, and there are web pages for the tabs and for the boards (which are now a commercial product). Ubiquitous computing will drastically reduce the cost of digital devices and tasks for the average consumer. With labor-intensive components such as processors and hard drives housed in the remote data centers powering the cloud, and with pooled resources giving individual consumers the benefits of economies of scale, consumers may pay monthly fees, similar to a cable bill, for services that feed into their phones.
Keywords: internet, cloud computing, ubiquitous computing, big data
Procedia PDF Downloads 384
421 Special Single Mode Fiber Tests of Polarization Mode Dispersion Changes in a Harsh Environment
Authors: Jan Bohata, Stanislav Zvanovec, Matej Komanec, Jakub Jaros, David Hruby
Abstract:
Even though new optical networks are developing rapidly, optical communication infrastructures still comprise thousands of kilometers of aging optical cables. Many of them are located in harsh environments, which contributes to increased attenuation or induced birefringence of the fibers, leading in turn to increased polarization mode dispersion (PMD). In this paper, we report experimental results from environmental optical cable tests and characterization in a climate chamber, focusing on the evaluation of optical network reliability in a harsh environment. For this purpose, a special thermal chamber was used, targeting large temperature changes between -60 °C and 160 °C at defined humidity. A 230-meter single mode optical cable, containing six tubes and a total of 72 single mode fibers, was spliced into one fiber link and then tested in the climate chamber. The main emphasis was placed on changes in PMD, which were evaluated by three different PMD measurement methods (general interferometry, scrambled state-of-polarization analysis, and polarization optical time domain reflectometry) in order to fully validate the results. Moreover, attenuation and chromatic dispersion (CD), as well as PMD, were monitored on a 17 km long single mode optical cable. The results imply a strong dependence of PMD on thermal changes: PMD exceeded 200% of its value during exposure to extreme temperatures, and the optical system experienced insertion losses of more than 20 dB. The derived statistics are provided in the paper together with an evaluation of optical system reliability, which could be a crucial tool for optical network designers. The environmental tests are further placed in the context of our previously published results from long-term monitoring of fundamental parameters of an optical cable deployed in a harsh environment in a special outdoor testbed. Finally, we provide a correlation between the short-term and long-term monitoring campaigns and the statistics necessary for optical network safety and reliability.
Keywords: optical fiber, polarization mode dispersion, harsh environment, aging
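For orientation, PMD accumulates with the square root of fiber length, so the mean differential group delay (DGD) of a link can be estimated from its PMD coefficient; the coefficient below is an assumed value for illustration, not a figure measured in these tests.

# Back-of-the-envelope PMD scaling; the PMD coefficient is an assumption.
import numpy as np

pmd_coeff = 0.1                             # assumed PMD coefficient, ps/sqrt(km)
lengths_km = np.array([0.23, 17.0])         # the 230 m spliced link and the 17 km cable
mean_dgd = pmd_coeff * np.sqrt(lengths_km)  # mean DGD grows as sqrt(L)
for L, tau in zip(lengths_km, mean_dgd):
    print(f"{L:6.2f} km -> mean DGD {tau:.3f} ps")

# Instantaneous DGD is Maxwellian-distributed around the mean, so outages
# are driven by the distribution's tail rather than the mean alone.
rng = np.random.default_rng(3)
maxwell = np.linalg.norm(rng.normal(size=(100_000, 3)), axis=1)
dgd = mean_dgd[1] * maxwell / maxwell.mean()
print(f"P(DGD > 2 x mean) ~ {(dgd > 2 * mean_dgd[1]).mean():.4f}")

A thermally induced rise to over 200% of the baseline PMD, as observed here, would translate into a proportionally heavier tail and a correspondingly higher outage probability.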
Procedia PDF Downloads 388
420 Machine Learning Approaches to Water Usage Prediction in Kocaeli: A Comparative Study
Authors: Kasim Görenekli, Ali Gülbağ
Abstract:
This study presents a comprehensive analysis of water consumption patterns in Kocaeli province, Turkey, utilizing various machine learning approaches. We analyzed data from 5,000 water subscribers across residential, commercial, and official categories over an 80-month period from January 2016 to August 2022, resulting in a total of 400,000 records. The dataset encompasses water consumption records, weather information, weekends and holidays, previous months' consumption, and the influence of the COVID-19 pandemic. We implemented and compared several machine learning models, including Linear Regression, Random Forest, Support Vector Regression (SVR), XGBoost, Artificial Neural Networks (ANN), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU). Particle Swarm Optimization (PSO) was applied to optimize the hyperparameters of all models. Our results demonstrate varying performance across subscriber types and models. For official subscribers, Random Forest achieved the highest R² of 0.699 with PSO optimization. For commercial subscribers, Linear Regression performed best, with an R² of 0.730 with PSO. Residential water usage proved more challenging to predict, with XGBoost achieving the highest R² of 0.572 with PSO. The study identified key factors influencing water consumption, with previous months' consumption, meter diameter, and weather conditions among the most significant predictors. The impact of the COVID-19 pandemic on consumption patterns was also observed, particularly in residential usage. This research provides valuable insights for effective water resource management in Kocaeli and similar regions, considering Turkey's high water loss rate and below-average per capita water supply. The comparative analysis of different machine learning approaches offers a comprehensive framework for selecting appropriate models for water consumption prediction in urban settings.
Keywords: machine learning, water consumption prediction, particle swarm optimization, COVID-19, water resource management
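The PSO-plus-model combination can be sketched in a few lines: a small particle swarm searching over two Random Forest hyperparameters against a cross-validated R² fitness. The synthetic data, search bounds, and swarm settings are illustrative assumptions, not the study's configuration.

# Minimal PSO hyperparameter search; data and settings are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 400
# Synthetic stand-ins for previous month's use, meter diameter, temperature
X = np.column_stack([rng.gamma(5, 3, n), rng.choice([15.0, 20.0, 25.0], n), rng.uniform(0, 30, n)])
y = 0.8 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 1, n)

def fitness(p):  # p = (n_estimators, max_depth); maximize cross-validated R^2
    m = RandomForestRegressor(n_estimators=int(p[0]), max_depth=int(p[1]), random_state=0)
    return cross_val_score(m, X, y, cv=3, scoring="r2").mean()

lo, hi = np.array([10.0, 2.0]), np.array([200.0, 15.0])
pos = rng.uniform(lo, hi, (8, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()]
for _ in range(10):  # inertia 0.7, cognitive/social weights 1.5
    vel = 0.7 * vel + 1.5 * rng.random((8, 2)) * (pbest - pos) \
        + 1.5 * rng.random((8, 2)) * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmax()]
print("best (n_estimators, max_depth):", gbest.astype(int), "CV R^2:", round(pbest_f.max(), 3))

The same swarm loop applies unchanged to any of the models compared in the study; only the fitness function's model and hyperparameter bounds need to be swapped.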
Procedia PDF Downloads 19
419 Innovation in "Low-Tech" Industries: Portuguese Footwear Industry
Authors: Antonio Marques, Graça Guedes
Abstract:
The Portuguese footwear industry has shown remarkable performance over the last five years in export value, trade balance, and other economic indicators. After a long period of difficulties, with a strong reduction in companies and employees from 1994 to 2009, the Portuguese footwear industry changed its strategy and is now a success case among the international players in footwear. Only the Italian industry sells footwear at a higher value than the Portuguese, and the distance between them is decreasing year by year. This paper analyses how Portuguese footwear companies innovate, according to the classification proposed by the Oslo Manual. It also analyses the strategy followed in the innovation process, as suggested by Freeman and Soete, and shows the linkage between the type of innovation and the innovation strategy. The research methodology was qualitative, with the case study as the data collection strategy, and the qualitative data were analyzed with the MAXQDA software. The economic results of the footwear companies studied show differences among them, and these differences are related to the innovation strategy adopted. Companies focused on product and marketing innovation, oriented to their target market, have higher turnover-per-worker ratios than companies focused on process innovation. Nevertheless, all the footwear companies in this "low-tech" industry create value and contributed to a positive foreign trade balance of 1.310 million euros in 2013. The growth strategies implemented involve the participation of sectoral organizations in several innovative projects, and cooperation among all of them is clearly a critical element of the performance achieved by the companies and the innovation observed. We can conclude that the Portuguese footwear sector has performed excellently in recent years (economic results, export value, trade balance, brands, and international image) and that this performance is strongly related to the innovation strategy followed, the type of innovation, and the networks in the cluster. A simplified model, called the "Ace of Diamonds", is proposed by the authors; it explains how this performance was reached by the seven companies participating in the study (two of them sector leaders) and considers whether the model can be applied to other traditional, "low-tech" industries.
Keywords: footwear, innovation, "low-tech" industry, Oslo manual
Procedia PDF Downloads 381
418 Enhancing Sell-In and Sell-Out Forecasting Using Ensemble Machine Learning Method
Authors: Vishal Das, Tianyi Mao, Zhicheng Geng, Carmen Flores, Diego Pelloso, Fang Wang
Abstract:
Accurate sell-in and sell-out forecasting is a ubiquitous problem in the retail industry and an important element of any demand planning activity. As a global food and beverage company, Nestlé has hundreds of products in each geographical location in which it operates. Each product has its own sell-in and sell-out time series, which are forecasted on weekly and monthly scales for demand and financial planning. To address this challenge, Nestlé Chile, in collaboration with the Amazon Machine Learning Solutions Lab, has developed an in-house machine-learning forecasting solution. Similar products are combined so that there is one model per product category; in this way, the models learn from a larger set of data and there are fewer models to maintain. The solution scales to all product categories and is flexible enough to include any new product, or eliminate an existing one, in a product category as requirements change. We show how the machine learning development environment on Amazon Web Services (AWS) can be used to explore a set of forecasting models and create business intelligence dashboards that work with the existing demand planning tools at Nestlé. We explored recent deep learning networks (DNNs), which show promising results for a variety of time series forecasting problems. Specifically, we used a DeepAR autoregressive model that can group similar time series together and provide robust predictions. To further enhance prediction accuracy and include domain-specific knowledge, we designed an ensemble approach combining DeepAR with an XGBoost regression model. As part of the ensemble, we interlinked the sell-out and sell-in information to ensure that a future sell-out influences the current sell-in predictions. Our approach outperforms the benchmark statistical models by more than 50%. The machine learning (ML) pipeline implemented in the cloud is currently being extended to other product categories and is being adopted by other geomarkets.
Keywords: sell-in and sell-out forecasting, demand planning, DeepAR, retail, ensemble machine learning, time-series
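The ensemble idea can be sketched as follows: a gradient-boosted regressor trained on lagged sell-out features, blended with a seasonal-naive baseline standing in for the DeepAR component (which would normally come from a probabilistic forecasting library such as GluonTS). The series, lag depth, and blend weights are assumptions for illustration.

# Toy sell-in forecast from lagged sell-out features; all data and the
# 0.7/0.3 blend weights are illustrative assumptions.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(5)
weeks = 160
sell_out = 100 + 10 * np.sin(2 * np.pi * np.arange(weeks) / 52) + rng.normal(0, 3, weeks)
sell_in = np.roll(sell_out, -2) + rng.normal(0, 2, weeks)  # sell-in anticipates sell-out by 2 weeks

lags = 4  # features for week t are sell_out[t-1] ... sell_out[t-4]
Xf = np.column_stack([sell_out[lags - k - 1 : weeks - k - 1] for k in range(lags)])
yf = sell_in[lags:]
split = len(yf) - 12  # hold out the last 12 weeks
model = XGBRegressor(n_estimators=300, max_depth=3).fit(Xf[:split], yf[:split])

seasonal_naive = yf[split - 52 : split - 52 + 12]  # same weeks one year earlier
blend = 0.7 * model.predict(Xf[split:]) + 0.3 * seasonal_naive
mape = np.mean(np.abs(blend - yf[split:]) / np.abs(yf[split:])) * 100
print(f"hold-out MAPE of the blended forecast: {mape:.1f}%")

In the production setting described above, the DeepAR component would supply the grouped, probabilistic per-category forecasts, while XGBoost contributes the cross-series features that link sell-out to sell-in.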
Procedia PDF Downloads 276