Search results for: convolutional network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4839

879 Computational Intelligence and Machine Learning for Urban Drainage Infrastructure Asset Management

Authors: Thewodros K. Geberemariam

Abstract:

The rapid physical expansion of urbanization, coupled with aging infrastructure, presents unique decision-making and management challenges for many big-city municipalities. Cities must therefore upgrade and maintain their existing aging urban drainage infrastructure systems to keep up with demand. Given the overall contribution of assets to municipal revenue and the importance of infrastructure to the success of a livable city, many municipalities are currently looking for a robust and smart urban drainage infrastructure asset management solution that combines management, financial, engineering, and technical practices. Such robust decision-making must rely on sound, complete, current, and relevant data that enables asset valuation, impairment testing, lifecycle modeling, and forecasting across multiple asset portfolios. In this paper, predictive computational intelligence (CI) and multi-class machine learning (ML), coupled with online, offline, and historical record data collected from an array of multi-parameter sensors, are used to extract operational and non-conforming patterns hidden in structured and unstructured data and to produce actionable insight into the current and future states of the network. The paper aims to improve the strategic decision-making process by identifying all possible alternatives, evaluating the risk of each, and choosing the alternative most likely to attain the required goal in a cost-effective manner, using historical and near-real-time data for urban drainage infrastructure assets that have not previously benefited from advances in computational intelligence and machine learning.

Keywords: computational intelligence, machine learning, urban drainage infrastructure, classification, prediction, asset management

Procedia PDF Downloads 154
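The multi-class classification described above can be illustrated with a minimal sketch. This is not the authors' model: the sensor features (flow rate, turbidity), the condition classes, and all numbers below are hypothetical, and a simple nearest-centroid rule stands in for whatever CI/ML method the paper actually uses.

```python
# Illustrative sketch: multi-class classification of drainage-sensor patterns.
# Features, classes, and values are hypothetical, not taken from the paper.
from math import dist

# Toy training data: (flow_rate, turbidity) readings per condition class.
TRAINING = {
    "normal":   [(1.0, 0.20), (1.2, 0.30), (0.9, 0.25)],
    "blockage": [(0.2, 0.90), (0.3, 0.80), (0.25, 0.95)],
    "overflow": [(2.5, 0.60), (2.8, 0.70), (2.6, 0.65)],
}

def centroid(points):
    # Component-wise mean of a list of equal-length tuples.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def classify(reading):
    """Assign a sensor reading to the class with the nearest centroid."""
    return min(CENTROIDS, key=lambda label: dist(reading, CENTROIDS[label]))
```

A reading near the "blockage" training points, e.g. `classify((0.25, 0.85))`, is labeled accordingly; a production system would use richer features and a trained multi-class model.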
878 Synthesis of Highly Porous Cyclowollastonite Bioactive Ceramic

Authors: Mehieddine Bouatrous

Abstract:

Bioactive ceramic materials have been applied in the biomedical field as bulk, granular, or coating materials for more than half a century. More recently, bone tissue engineering scaffolds made of highly porous bioactive ceramic, glass-ceramic, and composite materials have also been created. As a result, recent bioactive ceramic structures combine a high bioactivity rate, an open pore network, and mechanical characteristics approaching those of cortical bone. Cyclowollastonite frameworks have also been suggested for use as a graft material. In this study, various amounts of polymethyl methacrylate (PMMA) powder were used as a porogenous agent to successfully synthesize a highly interconnected, nanostructured porous cyclowollastonite with a large specific surface area, whose morphology and porosity were investigated. The porous cyclowollastonite bioactive ceramics were synthesized with a cost-effective and eco-friendly wet chemical method. To assess bioactivity in vitro, sintered dense cyclowollastonite discs were submerged in simulated body fluid (SBF) for various periods of time (1-4 weeks), resulting in the formation of a dense and consistent layer of hydroxyapatite on the surface of the ceramics. The results demonstrate that even after soaking for several days, the surface of the cyclowollastonite ceramic can generate such a layer, and that the framework exhibits good in vitro bioactivity owing to its highly interconnected porous structure and open macropores. The synthesized biomaterial is therefore a candidate for bone tissue engineering scaffolds.

Keywords: porous, bioactive, biomaterials, SBF, cyclowollastonite, biodegradability

Procedia PDF Downloads 80
877 Security Issues on Smart Grid and Blockchain-Based Secure Smart Energy Management Systems

Authors: Surah Aldakhl, Dafer Alali, Mohamed Zohdy

Abstract:

The next generation of electricity grid infrastructure, known as the "smart grid," integrates smart information and communication technology (ICT) into existing grids in order to alleviate the drawbacks of one-way grid systems. The efficiency and dependability of future power systems are anticipated to increase significantly thanks to the smart grid, especially given the demand for renewable energy sources. The security of the smart grid's cyber infrastructure is a growing concern, though, as a result of the interconnection of significant power plants through communication networks. Cyber-attacks can compromise energy data, ranging from leaks of grid members' personal information to serious incidents such as large-scale outages and the destruction of power network infrastructure. We therefore propose a secure smart energy management system based on blockchain as a remedy for this problem. The power transmission and distribution system may undergo a transformation as a result of the inclusion of optical fiber sensors and blockchain technology in smart grids. While optical fiber sensors allow real-time monitoring and management of electrical energy flow, blockchain offers a secure platform to safeguard the smart grid against cyber-attacks and unauthorized access. Additionally, this integration makes it possible to see how energy is produced, distributed, and used in real time, increasing transparency. This strategy has advantages in terms of improved security, efficiency, dependability, and flexibility in energy management. An in-depth analysis of the advantages and drawbacks of combining blockchain technology with optical fiber sensors is provided in this paper.

Keywords: smart grids, blockchain, fiber optic sensor, security

Procedia PDF Downloads 123
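The tamper-evidence property that makes blockchain attractive for smart grid data can be sketched in a few lines. This is an illustrative hash chain only, not the authors' system; the block fields and sensor readings are hypothetical.

```python
# Minimal hash-chain sketch: each block commits to the previous block's
# digest, so any change to stored readings breaks verification downstream.
import hashlib
import json

def block_hash(block):
    # Deterministic serialization so equal blocks hash equally.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(batches):
    """Build a chain of blocks, one per batch of (hypothetical) readings."""
    chain, prev = [], "0" * 64
    for i, readings in enumerate(batches):
        block = {"index": i, "prev_hash": prev, "readings": readings}
        chain.append(block)
        prev = block_hash(block)
    return chain

def verify(chain):
    """Re-derive every link; any tampered block invalidates the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True
```

Altering a single reading in an early block changes its digest, so the next block's `prev_hash` no longer matches and `verify` fails; a real deployment adds consensus and signatures on top of this chaining.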
876 Human Development and Entrepreneurship: Examining the Sources of Freedom and Unfreedom in the Realization of Entrepreneurship in Iran

Authors: Iman Shabanzadeh

Abstract:

The purpose of this research is to understand the lived experience of private-sector entrepreneurs in facing sources of freedom and unfreedom and in benefiting from opportunities and basic capabilities in the process of realizing entrepreneurial ability, in order to get closer to the macro-level narrative of human development in Iranian society. The main question of the present research is therefore which sources of freedom and social opportunity, and which sources of unfreedom, entrepreneurs in Iranian society encounter in the process of transforming their potential entrepreneurial abilities into entrepreneurial and business enterprises. Methodologically, the research uses thematic analysis in the form of semi-structured interviews with entrepreneurs active in small and medium-sized enterprises in Tehran who established and expanded their entrepreneurial activity within the last two decades. By examining the possibilities open to and refusals faced by these people in the three stages of 'idea creation and desire for entrepreneurship', 'starting and creating a business', and finally 'continuing and expanding the business', the findings of the research show the impact of five main resources on people's ability to realize their potential talents, from the stage of creating an idea to expanding their business. These sources are the family institution, the education institution, social norms and beliefs, the government and the market, and the personality components of the entrepreneur. Finally, the findings are reported at three levels, basic themes (fifteen items), organizing themes (five items), and a comprehensive theme (one item), in the form of a theme network.

Keywords: entrepreneurship, human development, capability, sources of freedom

Procedia PDF Downloads 58
875 The Impact of Blended Learning on Developing Students' Writing Skills and the Perception of Instructors and Students: Hawassa University in Focus

Authors: Mulu G. Gencha, Gebremedhin Simon, Menna Olango

Abstract:

This study was conducted at Hawassa University (HwU) in the Southern Nations, Nationalities, and Peoples' Regional State (SNNPRS) of Ethiopia. Its prime concern was to examine the writing performances of experimental and control group students and the perceptions of the experimental group students and subject instructors. The course was delivered through blended learning (BL), a hybrid of classroom and online learning. Participants were eighty students from the School of Computer Science. Forty students attended the BL delivery, which combined face-to-face (FTF) and campus-based online instruction. All fifty instructors of the School of Language and Communication Studies, along with ten FGD members, participated in the study. The experimental group went to the computer lab twice a week for four months (March-June 2012), using the local area network (LAN) and a MOODLE-based writing program. The control group of forty students took the FTF writing course five times a week for four months in the same academic calendar. Three instruments, an attitude questionnaire, tests, and FGD, were designed to capture the views of students, instructors, and FGD participants on BL. At the end of the study, students' final course scores were evaluated. Data were analyzed using independent-samples t-tests. A statistically significant difference was found between the FTF and BL groups (p<0.05), showing that the BL group was more successful than the conventional group. Besides, both instructors and students had a positive attitude towards BL. The final section of the thesis discusses the potential benefits and challenges, considers the pedagogical implications of BL, and recommends possible avenues for further work.

Keywords: blended learning, computer attitudes, computer usefulness, computer liking, computer confidence, computer phobia

Procedia PDF Downloads 412
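The independent-samples t-test used above to compare the FTF and BL groups can be sketched as follows. The scores below are hypothetical, not the study's data; the function computes the standard pooled-variance t statistic.

```python
# Pooled-variance independent-samples t statistic (equal-variance form),
# as used to compare two groups' mean scores. Sample values are made up.
from math import sqrt
from statistics import mean, variance  # variance() is the sample variance

def independent_t(sample_a, sample_b):
    """t = (mean_a - mean_b) / sqrt(s_p^2 * (1/n_a + 1/n_b))."""
    na, nb = len(sample_a), len(sample_b)
    pooled = ((na - 1) * variance(sample_a)
              + (nb - 1) * variance(sample_b)) / (na + nb - 2)
    return (mean(sample_a) - mean(sample_b)) / sqrt(pooled * (1 / na + 1 / nb))
```

The resulting t is compared against the critical value for na + nb - 2 degrees of freedom (or converted to a p-value) to decide significance at the chosen alpha.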
874 Using Printouts as Social Media Evidence and Its Authentication in the Courtroom

Authors: Chih-Ping Chang

Abstract:

Unlike traditional physical evidence, social media evidence has its own characteristics: it is easily tampered with, it is recoverable, and it cannot be read without other devices (such as a computer). A simple screenshot taken from a social networking site can be questioned as to its authenticity. When the police search and seize digital information, a common practice is to directly print out the digital data obtained and to have the parties present sign the printout, without taking the original digital data back. In addition to the question of authenticity, this way of obtaining evidence has two further consequences. First, it invites allegations that the police tampered with or falsified evidence in order to frame the suspect. Second, it makes it difficult to discover hidden information. The core evidence associated with a crime may not appear in the visible contents of files; by examining the original file, data related to it, such as the original producer, creation time, modification date, and even GPS location, can be revealed from hidden information. Therefore, how to present this kind of evidence in the courtroom is arguably the most important task in ruling on social media evidence. The first part of this article introduces forensic software such as EnCase, TCT, and FTK, and analyzes how it establishes the identity of digital data. Turning back to the court, the second part discusses the legal standard for authentication of social media evidence and the application of such forensic software in the courtroom. In conclusion, the article offers a rethinking of what kind of authenticity this rule of evidence pursues: does the legal system automatically adopt the transcription of scientific knowledge, or does it seek to better render justice, not only on scientific fact but through multivariate debate?

Keywords: federal rule of evidence, internet forensic, printouts as evidence, social media evidence, United States v. Vayner

Procedia PDF Downloads 294
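The way forensic tools establish that a produced copy is identical to seized digital data rests on cryptographic hash comparison. The following is a minimal sketch of that principle only, not the internals of EnCase, TCT, or FTK; the sample bytes are hypothetical.

```python
# Identity of digital data via hash comparison: two byte sequences are
# accepted as identical only if their SHA-256 digests match.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_identity(original: bytes, produced_copy: bytes) -> bool:
    """True only when the copy is bit-for-bit identical to the original."""
    return sha256_of(original) == sha256_of(produced_copy)
```

Even a one-byte change (a retouched screenshot, an edited timestamp) yields a different digest, which is why examiners hash seized data at acquisition time and re-hash any copy offered in court.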
873 ADP Approach to Evaluate the Blood Supply Network of Ontario

Authors: Usama Abdulwahab, Mohammed Wahab

Abstract:

This paper presents the application of uncapacitated facility location problems (UFLP) and 1-median problems to support decision making in blood supply chain networks. A plethora of factors make blood supply chain networks a complex yet vital problem for a regional blood bank: rapidly increasing demand, criticality of the product, strict storage and handling requirements, and the vastness of the theater of operations. As in the UFLP, facilities can be opened at any of m predefined locations with given fixed costs, and clients have to be allocated to the open facilities. In classical location models, the allocation cost is the distance between a client and an open facility; in this model, the costs comprise allocation, transportation, and inventory costs. To address this problem, the median algorithm is used to analyze inventory, evaluate supply chain status, monitor performance metrics at different levels of granularity, and detect potential problems and opportunities for improvement. Euclidean distance data for some Ontario cities (demand nodes) are used to test the developed algorithm. Sitation software, a Lagrangian relaxation algorithm, and branch-and-bound heuristics are used to solve the model. Computational experiments confirm the efficiency of the proposed approach: compared to existing modeling and solution methods, the median algorithm approach not only provides a more general modeling framework but also leads to efficient solution times in general.

Keywords: approximate dynamic programming, facility location, perishable product, inventory model, blood platelet, P-median problem

Procedia PDF Downloads 509
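The p-median model underlying the abstract can be sketched with a brute-force solver. This is illustrative only: the coordinates are hypothetical, and realistic instances are solved with Lagrangian relaxation or branch and bound rather than the exhaustive enumeration used here.

```python
# Brute-force p-median: open p of the candidate sites so that the total
# Euclidean distance from each demand node to its nearest open site is
# minimized. Coordinates are hypothetical demand nodes and sites.
from itertools import combinations
from math import dist

def p_median(demand_points, candidate_sites, p):
    """Return (best_sites, best_cost) over all p-subsets of candidates."""
    best_sites, best_cost = None, float("inf")
    for sites in combinations(candidate_sites, p):
        # Each demand node is allocated to its nearest open facility.
        cost = sum(min(dist(d, s) for s in sites) for d in demand_points)
        if cost < best_cost:
            best_sites, best_cost = sites, cost
    return best_sites, best_cost
```

With demand nodes clustered at two ends of a corridor, the solver opens one site near each cluster; extending the cost with fixed opening charges turns the same enumeration into a toy UFLP.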
872 Smart Technology Work Practices to Minimize Job Pressure

Authors: Babar Rasheed

Abstract:

Organizations are in a continuous effort to increase their yield and to retain their employees. Technology is considered an integral part of attaining appropriate work practices, work environment, and employee engagement. Unconsciously, advanced practices such as working from home and personalized intra-networks are disturbing employees' work-life balance, which ultimately increases psychological pressure on them. Smart work practice means developing business models and organizational practices with enhanced employee engagement and minimal waste of organizational resources, with persistent revenue and a positive contribution to global societies. The need for smart work practices arises from increasing employee turnover rates, global economic recession, unnecessary job pressure, a growing contingent workforce, and advancements in technology. Current practices are not elastic enough to tackle the globally changing work environment and organizational competition, and they mechanically cause many reciprocal problems between employees and organizations. Business sectors increasingly recognize that smart work practices are needed to deal with the challenges of the new century and to address these concerns. This paper aims to propose customized and smart work practice tools, along with a knowledge framework, to manage the growing concerns of employee engagement, use of technology, organizational concerns, and challenges for the business. This includes a smart management information system that addresses the necessary concerns of employees, combined with a framework to extract the best possible ways to allocate a company's resources and re-align only the required efforts to adopt the best possible strategy for controlling potential risks.

Keywords: employees engagement, management information system, psychological pressure, current and future HR practices

Procedia PDF Downloads 186
871 Effectual Role of Local Level Partnership Schemes in Affordable Housing Delivery

Authors: Hala S. Mekawy

Abstract:

Affordable housing delivery for low- and lower-middle-income families is a prominent problem in many developing countries; governments alone are unable to address this challenge due to diverse financial and regulatory constraints, and the private sector's contribution is rare and assists only middle-income households, even when institutional and legal reforms are conducted to persuade it to go down-market. Moreover, the market-enabling policy measures advocated by the World Bank since the early nineties have been strongly criticized and proven inappropriate to developing-country contexts, where it is highly unlikely that the formal private sector can reach the low-income population. In addition to governments and private developers, affordable housing delivery systems involve an intricate network of relationships between a diverse range of actors. Collaboration between them has proven vital, and hence an approach based on partnership schemes for affordable housing delivery has emerged. The basic premise of this paper is that addressing housing affordability challenges in Egypt demands direct public support, as markets and market actors alone would never succeed in delivering decent affordable housing to low- and lower-middle-income groups. It argues that this support would ideally come through local-level partnership schemes, with a leading role for decentralized local government and partners identified according to specific local conditions. It attempts to identify the major attributes that would ensure the fulfilment of the goals of such schemes in the Egyptian context, based upon evidence from diversified worldwide experiences as well as the main outcomes of a questionnaire administered to specialists and chief actors in the field.

Keywords: affordable housing, partnership schemes, housing, urban environments

Procedia PDF Downloads 230
870 Assessment of Genetic Diversity and Population Structure of Goldstripe Sardinella, Sardinella gibbosa in the Transboundary Area of Kenya and Tanzania Using mtDNA and msDNA Markers

Authors: Sammy Kibor, Filip Huyghe, Marc Kochzius, James Kairo

Abstract:

Goldstripe Sardinella, Sardinella gibbosa (Bleeker, 1849), is a commercially and ecologically important small pelagic fish common in the Western Indian Ocean region. The present study aimed to assess the genetic diversity and population structure of the species in the Kenya-Tanzania transboundary area using mtDNA and msDNA markers. A 630 bp sequence of the mitochondrial DNA (mtDNA) cytochrome c oxidase I (COI) gene and five polymorphic microsatellite DNA loci were analyzed. Fin clips of 309 individuals from eight locations within the transboundary area were collected between July and December 2018. S. gibbosa individuals from the different locations were distinguishable from one another based on mtDNA variation, as demonstrated with a neighbor-joining tree and a minimum spanning network analysis; none of the 22 identified haplotypes were shared between Kenya and Tanzania. Gene diversity per locus was relatively high (0.271-0.751), and the highest Fis was 0.391. The structure analysis, discriminant analysis of principal components (DAPC), and the pairwise values after Bonferroni correction (FST = 0.136, P < 0.001) using five microsatellite loci provided clear inference on genetic differentiation and thus evidence of population structure of S. gibbosa along the Kenya-Tanzania coast. This study shows a high level of genetic diversity and the presence of population structure (ΦST = 0.078, P < 0.001), indicating the existence of four populations and minimal gene flow among them. This information has application in the design of marine protected areas, an important tool for marine conservation.

Keywords: marine connectivity, microsatellites, population genetics, transboundary

Procedia PDF Downloads 126
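The gene diversity and FST statistics reported above can be illustrated with a minimal calculation. This is a textbook-style sketch, not the study's pipeline: it assumes a single locus and equal subpopulation sizes, and the allele counts are hypothetical.

```python
# Wright's FST from allele counts at one locus: FST = (HT - mean HS) / HT,
# where H = 1 - sum(p_i^2) is the expected heterozygosity (gene diversity).
# Equal subpopulation sizes are assumed (a simplification).

def expected_heterozygosity(allele_counts):
    """Gene diversity H = 1 - sum(p_i^2) from counts of each allele."""
    total = sum(allele_counts)
    return 1.0 - sum((c / total) ** 2 for c in allele_counts)

def fst(subpop_allele_counts):
    """FST over subpopulations given per-subpopulation allele counts."""
    hs = [expected_heterozygosity(c) for c in subpop_allele_counts]
    mean_hs = sum(hs) / len(hs)
    # Pool counts across subpopulations for the total heterozygosity HT.
    n_alleles = len(subpop_allele_counts[0])
    pooled = [sum(sub[i] for sub in subpop_allele_counts)
              for i in range(n_alleles)]
    ht = expected_heterozygosity(pooled)
    return (ht - mean_hs) / ht
```

Identical allele frequencies across subpopulations give FST near 0 (free gene flow), while fixed alternative alleles give FST near 1 (complete differentiation), which is the scale on which the reported FST = 0.136 is read.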
869 Non-Invasive Data Extraction from Machine Display Units Using Video Analytics

Authors: Ravneet Kaur, Joydeep Acharya, Sudhanshu Gaur

Abstract:

Artificial intelligence (AI) has the potential to transform manufacturing by improving shop floor processes such as production, maintenance, and quality. However, industrial datasets are notoriously difficult to extract in a real-time, streaming fashion, negating potential AI benefits. A prime example is specialized industrial controllers operated by custom software, which complicates the process of connecting them to an information technology (IT) based data acquisition network. Security concerns may also limit direct physical access to these controllers for data acquisition. To connect the operational technology (OT) data stored in these controllers to an AI application in a secure, reliable, and available way, we propose a novel Industrial IoT (IIoT) solution in this paper. In this solution, we demonstrate how video cameras can be installed on a factory shop floor to continuously capture images of the controller HMIs. We propose image pre-processing to segment each HMI into regions of streaming data and regions of fixed meta-data. We then evaluate the performance of multiple optical character recognition (OCR) technologies, such as Tesseract and Google Vision, in recognizing the streaming data, and test them on typical factory HMIs under realistic lighting conditions. Finally, we use the meta-data to match the OCR output with the temporal, domain-dependent context of the data to improve the accuracy of the output. Our IIoT solution enables reliable and efficient data extraction, which will improve the performance of subsequent AI applications.

Keywords: human machine interface, industrial internet of things, internet of things, optical character recognition, video analytics

Procedia PDF Downloads 112
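The final step, using field meta-data to correct and validate raw OCR output, can be sketched as follows. The field name, pattern, operating range, and substitution table are hypothetical, not taken from the paper.

```python
# Post-OCR correction using field meta-data: a hypothetical HMI field
# (e.g. spindle speed) is known to be an integer in a fixed range, so
# common digit-glyph confusions can be repaired and the value validated.
import re

FIELD_PATTERN = re.compile(r"^\d{1,5}$")   # digits only, up to 5 chars
FIELD_RANGE = (0, 24000)                   # hypothetical operating range

# Typical OCR confusions for segmented/LCD digits.
SUBSTITUTIONS = {"O": "0", "o": "0", "l": "1", "I": "1", "S": "5", "B": "8"}

def clean_ocr_value(raw: str):
    """Correct and validate a raw OCR string against field meta-data.

    Returns the value as an int, or None if it cannot be reconciled.
    """
    text = "".join(SUBSTITUTIONS.get(ch, ch) for ch in raw.strip())
    if not FIELD_PATTERN.match(text):
        return None
    value = int(text)
    lo, hi = FIELD_RANGE
    return value if lo <= value <= hi else None
```

For instance, the raw OCR string " l2O0 " is repaired to 1200, while a reading outside the field's known range is rejected rather than forwarded to the AI application.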
868 Improvement of Water Quality of Al Asfar Lake Using Constructed Wetland System

Authors: Jamal Radaideh

Abstract:

Al-Asfar Lake is located about 14 km east of Al-Ahsa and is one of the most important wetland lakes in the Al-Ahsa/Eastern Province of Saudi Arabia. Al-Ahsa is perhaps the largest oasis in the world, with an area of 20,000 hectares; in addition, it is one of the largest and oldest agricultural centers in the region. Surplus farm irrigation water, together with treated wastewater from the Al-Hofuf sewage station, is collected by a drainage network and discharged into Al-Asfar Lake. The lake has good wetlands, sand dunes, and large expanses of open and shallow water. Salt-tolerant vegetation is present in some of the shallow areas around the lake, and huge stands of Phragmites reeds occur around it. The lake presents an important habitat for wildlife and birds, something one would not expect to find in a large desert. Although high evaporation rates in the range of 3,250 mm are common, the water remaining in the lake throughout the year is used to supply cattle with drinking water and for aquifer recharge. Investigations showed high concentrations of nitrogen (N), phosphorus (P), biological oxygen demand (BOD), chemical oxygen demand (COD), and salinity in the discharge reaching Al-Asfar Lake from the D2 drain. The majority of the BOD, COD, and N is expected to originate from wastewater discharge and leachate from surplus irrigation water, which also contribute the majority of the P and salinity. The significant content of nutrients and biological oxygen demand reduces the available oxygen in the water. The present project aims to improve the water quality of the lake using constructed wetland trains to be built around the lake, planted with the Phragmites reeds that already occur there.

Keywords: Al Asfar lake, constructed wetland, water quality, water treatment

Procedia PDF Downloads 454
867 Structural Model on Organizational Climate, Leadership Behavior and Organizational Commitment: Work Engagement of Private Secondary School Teachers in Davao City

Authors: Genevaive Melendres

Abstract:

School administrators face the reality of teachers losing their engagement, or of schools losing their teachers. This study was conducted to identify the structural model that best predicts the work engagement of private secondary school teachers in Davao City. Ninety-three teachers from four sectarian schools and 56 teachers from four non-sectarian schools completed four survey instruments: the Organizational Climate Questionnaire, the Leader Behavior Descriptive Questionnaire, the Organizational Commitment Scales, and the Utrecht Work Engagement Scale. Data were analyzed using frequency distribution, mean, standard deviation, the t-test for independent samples, Pearson's r, stepwise multiple regression analysis, and structural equation modeling. Results show that the schools have a high level of organizational climate dimensions; leaders often show work-oriented and people-oriented behavior; and teachers have high normative commitment and are very often engaged in their work. Teachers from non-sectarian schools have higher organizational commitment than those from sectarian schools. Organizational climate and leadership behavior are positively related to and predict work engagement, whereas commitment showed no relationship. This study underscores the relative effects of the three variables on the work engagement of teachers. After testing a network of relationships and evaluating several models, a best-fitting model was found between leadership behavior and work engagement. These findings suggest that principals should attend to and consistently evaluate their behavior, as it best predicts the work engagement of teachers. The study provides value to administrators who take decisions and create the conditions in which teachers derive fulfillment.

Keywords: leadership behavior, organizational climate, organizational commitment, private secondary school teachers, structural model on work engagement

Procedia PDF Downloads 275
866 Cracking Performance of Bituminous Concrete Mixes Containing High Percentage of RAP Material

Authors: Bicky Agarwal, Ambika Behl, Rajiv Kumar, Ashish Dhamaniya

Abstract:

India has the second-largest road network in the world, after the United States (U.S.). According to the National Asphalt Pavement Association (NAPA), the U.S. produced about 94.6 million tons of reclaimed asphalt pavement (RAP) in 2021. Despite the benefits of RAP usage, it is not widely adopted in many countries, including India. Rising asphalt binder costs and environmental concerns have spurred interest in using RAP material in asphalt mixtures. However, increasing RAP content may adversely affect certain characteristics of asphalt mixtures, such as cracking resistance. Cracking is a common issue that affects the lifespan and durability of hot-mix asphalt (HMA) pavements, so assessing cracking resistance is crucial in pavement design, and various laboratory tests and performance indicators are used to evaluate it. This study uses the Texas Overlay Tester (TOT) to assess the impact of RAP on the cracking resistance of Bituminous Concrete (BC-II) mixes. Following the Marshall mix design method, asphalt mixes with RAP contents of 0% (control), 30%, 40%, 50%, and 60% were prepared and tested at their optimum binder content (OBC). The indirect tensile strength (ITS) results showed that the control mix had an ITS value of 1.2 MPa, with slight decreases observed in mixes containing up to 60% RAP, although these changes were not statistically significant (p = 0.538 > 0.05). The tensile strength ratio (TSR) tests indicated that all mixes exceeded the minimum requirement of 80%. The TOT was used to evaluate cracking performance and revealed that higher RAP contents had a negative impact on fatigue resistance; the 50% RAP mix exhibited the highest critical fracture energy (CFE), indicating the best resistance to crack propagation despite a lower number of cycles to failure. All mixes fell into the soft-crack-resistant quadrant, indicating an ability to resist crack propagation while being more susceptible to crack initiation.

Keywords: RAP, BC-II, HMA, TOT

Procedia PDF Downloads 36
865 Economic Policy Promoting Economically Rational Behavior of Start-Up Entrepreneurs in Georgia

Authors: Gulnaz Erkomaishvili

Abstract:

Introduction: The pandemic and the current economic crisis have created problems for entrepreneurship and, therefore, for start-up entrepreneurs. The paper presents the challenges faced by start-up entrepreneurs in Georgia during the pandemic and an analysis of the state's economic policy measures. Despite many problems, the study found that in 54.2% of the start-ups surveyed, innovation opportunities grew under the pandemic; the pandemic was thus a good opportunity to increase the innovative capacity of the enterprise. 52% of the surveyed start-up entrepreneurs managed to adapt to the situation and increase the sale of their products/services through remote channels. As for the assessment of state support measures, a large number of Georgian start-ups do not view the measures implemented by the state positively. Methodology: The research uses methods of analysis and synthesis, quantitative and qualitative methods, interviews/surveys, grouping, relative and average values, graphing, comparison, data analysis, and others. Main Findings: The main problems for start-up entrepreneurs remain inaccessible funding, gaps in workers' qualifications, inflation, the level of taxes, regulation, political instability, inadequate provision of infrastructure, and other factors. Conclusions: The state should take the following measures to support business start-ups: create an attractive environment for investment; ensure the availability of soft loans; create an insurance system; develop infrastructure; increase the effectiveness of tax policy (simplicity, clarity, and an optimal tax level); and promote export growth (develop a strategy for opening up international markets, build up a broad marketing network, etc.).

Keywords: start-up entrepreneurs, startups, start-up support programs, economic policy for start-up support

Procedia PDF Downloads 117
864 Sulfur-Doped Hierarchically Porous Boron Nitride Nanosheets as an Efficient Carbon Dioxide Adsorbent

Authors: Sreetama Ghosh, Sundara Ramaprabhu

Abstract:

Carbon dioxide gas has been a major cause of the worldwide increase in the greenhouse effect, which leads to climate change and global warming. CO₂ capture and sequestration has therefore become an effective way to reduce the concentration of CO₂ in the environment, and one way to capture CO₂ is by adsorption in porous materials. A potential material in this respect is porous hexagonal boron nitride, or 'white graphene', a well-known two-dimensional layered material with very high thermal stability. A sample with a hierarchical pore structure and high specific surface area shows excellent performance in capturing carbon dioxide gas and thereby mitigates environmental pollution to a certain extent. Besides, the presence of sulfur as well as nitrogen in the sample synergistically increases the adsorption capacity. In this work, a cost-effective, single-step synthesis of highly porous boron nitride nanosheets doped with sulfur is demonstrated, and CO₂ adsorption-desorption studies are carried out using a pressure reduction technique. The studies show that the nanosheets exhibit excellent cyclic stability in storage performance, and thermodynamic analysis suggests that the adsorption proceeds mainly through physisorption. Further, surface modification of the highly porous nanosheets by incorporating ionic liquids further enhanced the CO₂ capture capability of the nanocomposite, revealing that this material has the potential to be an excellent adsorbent of carbon dioxide gas.

Keywords: CO₂ capture, hexagonal boron nitride nanosheets, porous network, sulfur doping

Procedia PDF Downloads 245
863 Bayesian System and Copula for Event Detection and Summarization of Soccer Videos

Authors: Dhanuja S. Patil, Sanjay B. Waykar

Abstract:

Event detection is one of the most fundamental components of many application areas of video data systems, and has recently attracted considerable interest from both practitioners and academics across several fields. While video event detection has been the subject of extensive study, considerably fewer existing approaches consider multi-modal data and the related efficiency issues. During soccer matches, various doubtful situations arise that cannot easily be judged by the referees. A framework that objectively checks image sequences would prevent incorrect interpretations caused by errors or by the high speed of events. Bayesian networks provide a structure for handling this uncertainty through a simple graphical representation together with probability calculus. We propose an efficient framework for the analysis and summarization of soccer videos using object-based features. The proposed work uses the t-cherry junction tree, a recent advancement in probabilistic graphical models, to create a compact representation and a good approximation of an otherwise intractable model. There are several advantages to this approach: first, the t-cherry structure gives the best approximation within the class of junction trees; second, the construction of a t-cherry junction tree can be largely parallelized; and finally, inference can be performed using distributed computation. Experimental results demonstrate the effectiveness, adequacy, and robustness of the proposed work over a comprehensive data set comprising several soccer videos captured at different venues.

Keywords: summarization, detection, Bayesian network, t-cherry tree

Procedia PDF Downloads 327
862 Neural Network based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children

Authors: Budhvin T. Withana, Sulochana Rupasinghe

Abstract:

The educational system faces a significant concern with regard to Dyslexia and Dysgraphia, learning disabilities that impact reading and writing abilities. This is particularly challenging for children who speak the Sinhala language due to its complexity and uniqueness. Commonly used methods to detect the risk of Dyslexia and Dysgraphia rely on subjective assessments, leading to limited coverage and time-consuming processes. Consequently, delays in diagnoses and missed opportunities for early intervention can occur. To address this issue, the project developed a hybrid model that incorporates various deep learning techniques to detect the risk of Dyslexia and Dysgraphia. Specifically, Resnet50, VGG16, and YOLOv8 models were integrated to identify handwriting issues. The outputs of these models were then combined with other input data and fed into an MLP model. Hyperparameters of the MLP model were fine-tuned using Grid Search CV, enabling the identification of optimal values for the model. This approach proved to be highly effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention. The Resnet50 model exhibited a training accuracy of 0.9804 and a validation accuracy of 0.9653. The VGG16 model achieved a training accuracy of 0.9991 and a validation accuracy of 0.9891. The MLP model demonstrated impressive results with a training accuracy of 0.99918, a testing accuracy of 0.99223, and a loss of 0.01371. These outcomes showcase the high accuracy achieved by the proposed hybrid model in predicting the risk of Dyslexia and Dysgraphia.
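The Grid Search CV tuning step described above can be sketched with scikit-learn; the data and parameter grid below are synthetic placeholders for illustration, not the authors' actual inputs or search space.

```python
# Hedged sketch of hyperparameter tuning for an MLP via Grid Search CV.
# The dataset is synthetic and the parameter grid is illustrative only.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

grid = GridSearchCV(
    MLPClassifier(max_iter=2000, random_state=0),
    param_grid={
        "hidden_layer_sizes": [(32,), (64, 32)],  # assumed candidate layouts
        "alpha": [1e-4, 1e-3],                    # assumed L2 penalties
    },
    cv=3,
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```

`best_params_` then holds the optimal combination, which is the role Grid Search CV plays in the hybrid model above.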

Keywords: neural networks, risk detection system, dyslexia, dysgraphia, deep learning, learning disabilities, data science

Procedia PDF Downloads 67
861 A Mega-Analysis of the Predictive Power of Initial Contact within Minimal Social Network

Authors: Cathal Ffrench, Ryan Barrett, Mike Quayle

Abstract:

It is accepted in social psychology that categorization leads to ingroup favoritism, with little further thought given to the processes that may co-occur with or even precede categorization. These categorizations move away from the conceptualization of the self as a unique social being toward an increasingly collective identity. Subsequently, many individuals derive much of their self-evaluation from these collective identities. The seminal literature on this topic argues that it is primarily categorization that evokes instances of ingroup favoritism. In contrast to these theories, we argue that categorization acts to enhance and further intergroup processes rather than defining them. More precisely, we propose that categorization aids initial ingroup contact, and that this first contact is predictive of subsequent favoritism at individual and collective levels. This analysis focuses on studies based on the Virtual Interaction APPLication (VIAPPL), a software interface that addresses the flaws of the original minimal group studies. The VIAPPL allows the exchange of tokens in an intra- and inter-group manner, and this token exchange is how we classified first contact. The study involves binary longitudinal analysis to better understand the subsequent exchanges of individuals based on whom they first interacted with. Studies were selected on the criteria of evidence of explicit first interactions and two-group designs. Our findings paint a compelling picture in support of a motivated contact hypothesis, which suggests that an individual's first motivated contact toward another has strong predictive capability for future behavior. This contact can lead to habit formation and specific favoritism towards individuals with whom contact has been established. This has important implications for understanding how group conflict occurs and how intra-group individual bias can develop.

Keywords: categorization, group dynamics, initial contact, minimal social networks, momentary contact

Procedia PDF Downloads 150
860 LGR5 and Downstream Intracellular Signaling Proteins Play Critical Roles in the Cell Proliferation of Neuroblastoma, Meningioma and Pituitary Adenoma

Authors: Jin Hwan Cheong, Mina Hwang, Myung Hoon Han, Je Il Ryu, Young ha Oh, Seong Ho Koh, Wu Duck Won, Byung Jin Ha

Abstract:

Leucine-rich repeat-containing G-protein coupled receptor 5 (LGR5) has been reported to play critical roles in the proliferation of various cancer cells. However, the roles of LGR5 in brain tumors and the specific intracellular signaling proteins directly associated with it remain unknown. Expression of LGR5 was first measured in normal brain tissue, meningioma, and pituitary adenoma of humans. To identify the downstream signaling pathways of LGR5, siRNA-mediated knockdown of LGR5 was performed in SH-SY5Y neuroblastoma cells followed by proteomics analysis with 2-dimensional polyacrylamide gel electrophoresis (2D-PAGE). In addition, the expression of LGR5-associated proteins was evaluated in LGR5-inhibited neuroblastoma cells and in human normal brain, meningioma, and pituitary adenoma tissue. Proteomics analysis showed 12 protein spots were significantly different in expression level (more than two-fold change) and subsequently identified by peptide mass fingerprinting. A protein association network was constructed from the 12 identified proteins altered by LGR5 knockdown. Direct and indirect interactions were identified among the 12 proteins. HSP 90-beta was one of the proteins whose expression was altered by LGR5 knockdown. Likewise, we observed decreased expression of proteins in the hnRNP subfamily following LGR5 knockdown. In addition, we have for the first time identified significantly higher hnRNP family expression in meningioma and pituitary adenoma compared to normal brain tissue. Taken together, LGR5 and its downstream signaling play critical roles in neuroblastoma and brain tumors such as meningioma and pituitary adenoma.

Keywords: LGR5, neuroblastoma, meningioma, pituitary adenoma, hnRNP

Procedia PDF Downloads 61
859 TessPy – Spatial Tessellation Made Easy

Authors: Jonas Hamann, Siavash Saki, Tobias Hagen

Abstract:

Discretization of urban areas is a crucial aspect of many spatial analyses. The process of discretizing space into subspaces without overlaps and gaps is called tessellation. It helps in understanding space and provides a framework for analyzing geospatial data. Tessellation methods can be divided into two groups: regular and irregular tessellations. While regular tessellation methods, like square grids or hexagon grids, are suitable for addressing pure geometry problems, they cannot take the unique characteristics of different subareas into account. Irregular tessellation methods, however, allow the borders between subareas to be defined more realistically based on urban features like a road network or Points of Interest (POI). Even though Python is one of the most used programming languages for spatial analysis, there is currently no library that combines different tessellation methods to enable users and researchers to compare techniques. To close this gap, we propose TessPy, an open-source Python package that combines all of the above-mentioned tessellation methods and makes them easily accessible to everyone. The core functions of TessPy implement five tessellation methods: squares, hexagons, adaptive squares, Voronoi polygons, and city blocks. With the regular methods, users can set the resolution of the tessellation, which defines the fineness of the discretization and the desired number of tiles. The irregular tessellation methods allow users to define which spatial data to consider (e.g., amenity, building, office) and how fine the tessellation should be. The spatial data used is open-source, provided by OpenStreetMap, and can be easily extracted and used for further analyses. Besides the methodology of the different techniques, the state of the art, including examples and future work, will be discussed.
All dependencies can be installed using conda or pip, though conda is recommended.
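The Voronoi option among the irregular methods can be illustrated directly with SciPy; the generator points below are invented stand-ins for POIs, and this sketch does not use TessPy's actual API.

```python
# Minimal sketch of an irregular (Voronoi) tessellation: each generator
# point (standing in for a POI) receives its own polygonal cell.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(42)
pois = rng.uniform(0, 10, size=(20, 2))  # hypothetical POI coordinates

vor = Voronoi(pois)
# A region containing -1 has a vertex at infinity, i.e. an open boundary cell.
closed = [r for r in vor.regions if r and -1 not in r]
print(len(vor.points), "generators,", len(closed), "closed cells")
```

Interior generators yield closed polygons; boundary generators yield open cells, which a library such as TessPy would clip to the study area.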

Keywords: geospatial data science, geospatial data analysis, tessellations, urban studies

Procedia PDF Downloads 130
858 Human-Centred Data Analysis Method for Future Design of Residential Spaces: Coliving Case Study

Authors: Alicia Regodon Puyalto, Alfonso Garcia-Santos

Abstract:

This article presents a method to analyze the use of indoor spaces based on data analytics obtained from inbuilt digital devices. The study uses the data generated by in-place devices, such as smart locks, Wi-Fi routers, and electrical sensors, to gain additional insights into space occupancy, user behaviour, and comfort. These devices, originally installed to facilitate remote operations, report data through the internet, which the research uses to analyze information on real-time human use of spaces. Using an in-place Internet of Things (IoT) network enables a faster, more affordable, seamless, and scalable solution for analyzing building interior spaces without incorporating external data collection systems such as sensors. The methodology is applied to a real coliving case study: a residential building of 3,000 m², 7 floors, and 80 users in the centre of Madrid. The case study applies the method to classify IoT devices and to assess, clean, and analyze the collected data based on the analysis framework. The information is collected remotely through the devices' different platforms. The first step is to curate the data and understand what insights each device can provide according to the objectives of the study; this generates an analysis framework that can be scaled up for future building assessment, even beyond the residential sector. The method adjusts the parameters to be analyzed to the dataset available in the IoT of each building. The research demonstrates how human-centred data analytics can improve the future spatial design of indoor spaces.

Keywords: in-place devices, IoT, human-centred data-analytics, spatial design

Procedia PDF Downloads 200
857 Soccer, a Major Social Changing Factor: Kosovo Case

Authors: Armend Kelmendi, Adnan Ahmeti

Abstract:

The purpose of our study was to assess the impact of soccer on the overall welfare (education, health, and economic prosperity) of youth in Kosovo (age: 7-18). The research measured a number of parameters (training methodologies, conditions, community leadership impact) in a sample consisting of 6 different football clubs' academies across the country. Fifty (50) male and female football youngsters volunteered for this study. To generate more reliable results, the analysis was conducted with the help of a set of project management tools and techniques (Gantt chart, Logic Network, PERT chart, Work Breakdown Structure, and Budgeting Analysis). The interviewees were interviewed under a specific lens of categories (impact on education, health, and economic prosperity). A set of questions was asked, e.g.: What has football provided to you and the community you live in? Did football increase your confidence and shape your life for the better? What was the main reason you started training in football? The results explain how a single sport, namely football, can make a huge social change in Kosovo, improving key social factors in a society. There was a considerable difference between the youth clubs as far as training conditions are concerned. The study found that, despite financial constraints, two out of six clubs managed to produce twice as many talented players who were introduced to professional primary league teams in Kosovo and Albania, as well as other soccer teams in the region, Europe, and Asia. The study indicates that better sports policy must be formulated and coupled with substantial financial investments in soccer for it to be considered fruitful and beneficial for players over 18 years of age, namely professionals.

Keywords: youth, prosperity, conditions, investments, growth, free movement

Procedia PDF Downloads 245
856 Adjusting Electricity Demand Data to Account for the Impact of Loadshedding in Forecasting Models

Authors: Migael van Zyl, Stefanie Visser, Awelani Phaswana

Abstract:

The electricity landscape in South Africa is characterized by frequent occurrences of loadshedding, a measure implemented by Eskom to manage electricity generation shortages by curtailing demand. Loadshedding, classified into stages ranging from 1 to 8 based on severity, involves the systematic rotation of power cuts across municipalities according to predefined schedules. However, this practice introduces distortions in recorded electricity demand, posing challenges to accurate forecasting essential for budgeting, network planning, and generation scheduling. Addressing this challenge requires the development of a methodology to quantify the impact of loadshedding and integrate it back into metered electricity demand data. Fortunately, comprehensive records of loadshedding impacts are maintained in a database, enabling the alignment of loadshedding effects with hourly demand data. This adjustment ensures that forecasts accurately reflect true demand patterns, independent of loadshedding's influence, thereby enhancing the reliability of electricity supply management in South Africa. This paper presents a methodology for determining the hourly impact of loadshedding and subsequently adjusting historical demand data to account for it. Furthermore, two forecasting models are developed: one utilizing the original dataset and the other using the adjusted data. A comparative analysis is conducted to evaluate forecast accuracy improvements resulting from the adjustment process. By implementing this methodology, stakeholders can make more informed decisions regarding electricity infrastructure investments, resource allocation, and operational planning, contributing to the overall stability and efficiency of South Africa's electricity supply system.
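The adjustment step can be sketched in pandas, assuming the loadshedding database yields an estimated curtailed load per hour; the column names and figures below are invented for illustration, not the paper's actual data.

```python
# Hedged sketch: add the recorded curtailed load back onto metered hourly
# demand so the series reflects true demand. Names/values are illustrative.
import pandas as pd

metered = pd.DataFrame({
    "hour": pd.date_range("2023-07-01", periods=4, freq="h"),
    "demand_mw": [900.0, 850.0, 700.0, 950.0],
})
shed = pd.DataFrame({
    "hour": [pd.Timestamp("2023-07-01 02:00")],
    "curtailed_mw": [200.0],  # stage-dependent estimate from the database
})

adjusted = metered.merge(shed, on="hour", how="left").fillna({"curtailed_mw": 0.0})
adjusted["true_demand_mw"] = adjusted["demand_mw"] + adjusted["curtailed_mw"]
print(adjusted[["hour", "true_demand_mw"]])
```

Hours without a loadshedding record pass through unchanged, so forecasting models trained on `true_demand_mw` see demand free of curtailment distortions.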

Keywords: electricity demand forecasting, load shedding, demand side management, data science

Procedia PDF Downloads 64
855 Ground Short Circuit Contributions of a MV Distribution Line Equipped with PWMSC

Authors: Mohamed Zellagui, Heba Ahmed Hassan

Abstract:

This paper proposes a new approach for the calculation of short-circuit parameters in the presence of a Pulse Width Modulated based Series Compensator (PWMSC). PWMSC is a new Flexible Alternating Current Transmission System (FACTS) device that can modulate the impedance of a transmission line by varying the duty cycle (D) of a train of pulses with fixed frequency. This improves system performance, as it provides virtual compensation of the distribution line impedance by injecting a controllable apparent reactance in series with the line. This controllable reactance can operate in both capacitive and inductive modes, which makes PWMSC highly effective in controlling power flow and increasing system stability. The purpose of this work is to study the impact of fault resistance (RF), varied between 0 and 30 Ω, on fault current calculations in the case of a ground fault at a fixed fault location. The case study is a medium voltage (MV) Algerian distribution line compensated by PWMSC in the 30 kV Algerian distribution power network. The analysis is based on the symmetrical components method, which involves the calculation of the symmetrical components of currents and voltages, without and with PWMSC, in both the maximum and minimum duty cycle cases for the capacitive and inductive modes. The paper presents simulation results which are verified by the theoretical analysis.
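The symmetrical components method mentioned above rests on the standard Fortescue transform, sketched here for an illustrative set of unbalanced phase currents (not values from the 30 kV case study).

```python
# Symmetrical components of three phase currents via the 120-degree
# rotation operator a = e^(j*2*pi/3). Example phasors are illustrative.
import cmath

a = cmath.exp(2j * cmath.pi / 3)  # rotates a phasor by +120 degrees

def sequence_components(ia, ib, ic):
    """Return zero-, positive-, and negative-sequence currents."""
    i0 = (ia + ib + ic) / 3
    i1 = (ia + a * ib + a**2 * ic) / 3
    i2 = (ia + a**2 * ib + a * ic) / 3
    return i0, i1, i2

ia = 100 + 0j                 # made-up unbalanced phase currents (A)
ib = 80 * cmath.exp(-2.2j)
ic = 90 * cmath.exp(2.0j)
i0, i1, i2 = sequence_components(ia, ib, ic)
print(abs(i0), abs(i1), abs(i2))
```

For a perfectly balanced positive-sequence set the zero- and negative-sequence terms vanish, which is the property the fault analysis exploits: a ground fault shows up as a nonzero zero-sequence current.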

Keywords: pulse width modulated series compensator (pwmsc), duty cycle, distribution line, short-circuit calculations, ground fault, symmetrical components method

Procedia PDF Downloads 503
854 Linking Adaptation to Climate Change and Sustainable Development: The Case of ClimAdaPT.Local in Portugal

Authors: A. F. Alves, L. Schmidt, J. Ferrao

Abstract:

Portugal is one of the more vulnerable European countries to the impacts of climate change. These include: temperature increase; coastal sea level rise; desertification and drought in the countryside; and frequent and intense extreme weather events. Hence, adaptation strategies to climate change are of great importance. This is what was addressed by ClimAdaPT.Local. This policy-oriented project had the main goal of developing 26 Municipal Adaptation Strategies for Climate Change, through the identification of local specific present and future vulnerabilities, the training of municipal officials, and the engagement of local communities. It is intended to be replicated throughout the whole territory and to stimulate the creation of a national network of local adaptation in Portugal. Supported by methodologies and tools specifically developed for this project, our paper is based on the surveys, training and stakeholder engagement workshops implemented at municipal level. In an 'adaptation-as-learning' process, these tools functioned as a social-learning platform and an exercise in knowledge and policy co-production. The results allowed us to explore the nature of local vulnerabilities and the exposure of gaps in the context of reappraisal of both future climate change adaptation opportunities and possible dysfunctionalities in the governance arrangements of municipal Portugal. Development issues are highlighted when we address the sectors and social groups that are both more sensitive and more vulnerable to the impacts of climate change. We argue that a pluralistic dialogue and a common framing can be established between them, with great potential for transformational adaptation. Observed climate change, present-day climate variability and future expectations of change are great societal challenges which should be understood in the context of the sustainable development agenda.

Keywords: adaptation, ClimAdaPT.Local, climate change, Portugal, sustainable development

Procedia PDF Downloads 201
853 Comparative Analysis of Data Gathering Protocols with Multiple Mobile Elements for Wireless Sensor Network

Authors: Bhat Geetalaxmi Jairam, D. V. Ashoka

Abstract:

Wireless sensor networks are used in many applications to collect sensed data from different sources. Sensed data have to be delivered through the sensors' wireless interface using multi-hop communication towards the sink. Data collection in wireless sensor networks consumes energy, and energy consumption is the major constraint in WSNs; reducing energy consumption while increasing the amount of generated data is a great challenge. In this paper, we have implemented two data gathering protocols with multiple mobile sinks/elements to collect data from sensor nodes. The first is Energy-Efficient Data Gathering with Tour Length-Constrained Mobile Elements in Wireless Sensor Networks (EEDG), in which the mobile sinks use a vehicle routing protocol to collect data. The second is An Intelligent Agent-based Routing structure for Mobile Sinks in WSNs (IAR), in which the mobile sinks use Prim's algorithm to collect data. We have implemented the concepts common to both protocols, such as the deployment of mobile sinks, generation of the visiting schedule, and collection of data from the cluster members. We have compared the performance of both protocols using statistics based on performance parameters such as delay, packet drop, packet delivery ratio, available energy, and control overhead. We conclude that EEDG is more efficient than the IAR protocol, but with a few limitations, including unaddressed issues such as redundancy removal, idle listening, and the mobile sink's pause/wait state at the node. In future work, we plan to concentrate on these limitations to develop a new energy-efficient protocol that will help improve the lifetime of the WSN.
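Prim's algorithm, as used by the IAR protocol's mobile sinks, can be sketched as follows; the weighted graph is a made-up deployment with edge weights standing in for inter-node link costs.

```python
# Prim's algorithm: grow a minimum spanning tree from the sink, always
# adding the cheapest edge that reaches an unvisited node.
import heapq

def prim_mst(graph, start):
    """graph: {node: [(weight, neighbour), ...]} -> list of MST edges."""
    visited = {start}
    edges = []
    frontier = list(graph[start])
    heapq.heapify(frontier)
    while frontier:
        w, v = heapq.heappop(frontier)
        if v in visited:
            continue  # a cheaper edge already reached v
        visited.add(v)
        edges.append((w, v))
        for e in graph[v]:
            if e[1] not in visited:
                heapq.heappush(frontier, e)
    return edges

g = {  # hypothetical sensor deployment, weights = link costs
    "sink": [(4, "n1"), (8, "n2")],
    "n1": [(4, "sink"), (2, "n2"), (7, "n3")],
    "n2": [(8, "sink"), (2, "n1"), (3, "n3")],
    "n3": [(7, "n1"), (3, "n2")],
}
print(prim_mst(g, "sink"))  # → [(4, 'n1'), (2, 'n2'), (3, 'n3')]
```

The resulting tree gives the low-cost routing structure along which a mobile sink can collect data from the cluster members.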

Keywords: aggregation, consumption, data gathering, efficiency

Procedia PDF Downloads 500
852 Neural Network-based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children

Authors: Budhvin T. Withana, Sulochana Rupasinghe

Abstract:

The problem of Dyslexia and Dysgraphia, two learning disabilities that affect reading and writing abilities, respectively, is a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult to detect in children who speak it. Traditional risk detection methods for Dyslexia and Dysgraphia frequently rely on subjective assessments, which limits their coverage and makes them time-consuming. As a result, diagnoses may be delayed and opportunities for early intervention may be lost. The project was approached by developing a hybrid model that utilizes various deep learning techniques for detecting the risk of Dyslexia and Dysgraphia. Specifically, Resnet50, VGG16, and YOLOv8 were integrated to detect handwriting issues, and their outputs were fed into an MLP model along with several other input data. The hyperparameters of the MLP model were fine-tuned using Grid Search CV, which allowed the optimal values to be identified for the model. This approach proved to be effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention of these conditions. The Resnet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved an impressive training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieved a high level of accuracy in predicting the risk of Dyslexia and Dysgraphia.

Keywords: neural networks, risk detection system, Dyslexia, Dysgraphia, deep learning, learning disabilities, data science

Procedia PDF Downloads 121
851 Multivariate Data Analysis for Automatic Atrial Fibrillation Detection

Authors: Zouhair Haddi, Stephane Delliaux, Jean-Francois Pons, Ismail Kechaf, Jean-Claude De Haro, Mustapha Ouladsine

Abstract:

Atrial fibrillation (AF) is considered the most common cardiac arrhythmia and a major public health burden associated with significant morbidity and mortality. Nowadays, telemedical approaches targeting cardiac outpatients place AF among the most challenging medical issues. Automatic, early, and fast AF detection is still a major concern for healthcare professionals. Several algorithms based on univariate analysis have been developed to detect atrial fibrillation; however, the published results do not show satisfactory classification accuracy. This work aimed at resolving this shortcoming by proposing multivariate data analysis methods for automatic AF detection. Four publicly accessible sets of clinical data (the AF Termination Challenge Database, MIT-BIH AF, Normal Sinus Rhythm RR Interval Database, and MIT-BIH Normal Sinus Rhythm Databases) were used for assessment. All time series were segmented into 1-min RR-interval windows, and four specific features were then calculated. Two pattern recognition methods, i.e., Principal Component Analysis (PCA) and a Learning Vector Quantization (LVQ) neural network, were used to develop classification models. PCA, as a feature reduction method, was employed to find the important features that discriminate between AF and normal sinus rhythm. Despite its very simple structure, the results show that the LVQ model performs better on the analyzed databases than existing algorithms, with high sensitivity and specificity (99.19% and 99.39%, respectively). The proposed AF detection method holds several interesting properties and can be implemented with just a few arithmetical operations, which makes it a suitable choice for telecare applications.
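The PCA feature-reduction step can be sketched with scikit-learn on a toy matrix of per-window RR features; the numbers are synthetic, not data from the cited databases, and the four-feature shape simply mirrors the abstract's setup.

```python
# Hedged sketch of PCA as a feature-reduction step on per-window
# RR-interval features (synthetic data, four features per 1-min window).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 4))   # 200 windows x 4 RR features
features[:, 1] = features[:, 0] * 0.9  # inject correlation between features

pca = PCA(n_components=2)
reduced = pca.fit_transform(features)
print(reduced.shape, pca.explained_variance_ratio_.round(2))
```

The components with the largest explained variance are the candidates for discriminating AF from normal sinus rhythm before the LVQ classifier is trained.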

Keywords: atrial fibrillation, multivariate data analysis, automatic detection, telemedicine

Procedia PDF Downloads 270
850 Cognitive Science Based Scheduling in Grid Environment

Authors: N. D. Iswarya, M. A. Maluk Mohamed, N. Vijaya

Abstract:

A grid is an infrastructure that allows the deployment of large distributed data sets from multiple locations to reach a common goal. Scheduling data-intensive applications becomes challenging as the data sets are very large. Only two solutions exist to tackle this challenging issue. First, the computation that requires huge data sets can be transferred to the data site. Second, the required data sets can be transferred to the computation site. In the former scenario, the computation cannot be transferred since the servers are storage/data servers with little or no computational capability; hence, the second scenario can be considered for further exploration. During scheduling, transferring huge data sets from one site to another requires more network bandwidth. In order to mitigate this issue, this work focuses on incorporating cognitive science into scheduling. Cognitive science is the study of the human brain and its related activities, and current research is mainly focused on incorporating it into various computational modeling techniques. In this work, the problem-solving approach of the human brain is studied and incorporated into data-intensive scheduling in grid environments. A cognitive engine (CE) is designed and deployed at various grid sites. The intelligent agents present in the CE help in analyzing requests and creating the knowledge base. Depending upon the link capacity, a decision is taken on whether to transfer the data sets or to partition them. The agents predict the next request in order to serve the requesting site with data sets in advance, which reduces data availability time and data transfer time. The replica catalog and metadata catalog created by the agents assist in the decision-making process.
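A minimal sketch of the transfer-versus-partition decision attributed to the cognitive engine, assuming a simple deadline rule; the rule, function name, and numbers are invented for illustration and are not taken from the paper.

```python
# Hypothetical decision rule: ship the whole data set if the link can
# deliver it before the deadline, otherwise split it into deadline-sized
# partitions. Purely illustrative of the transfer-vs-partition choice.
def schedule_transfer(dataset_gb, link_gbps, deadline_s):
    """Return ("transfer", 1) or ("partition", n_parts)."""
    transfer_s = dataset_gb * 8 / link_gbps  # bytes -> bits over the link
    if transfer_s <= deadline_s:
        return ("transfer", 1)
    parts = -(-transfer_s // deadline_s)  # ceiling: chunks that each fit
    return ("partition", int(parts))

print(schedule_transfer(dataset_gb=100, link_gbps=1, deadline_s=900))
# → ('transfer', 1): 800 s of transfer fits the 900 s deadline
```

A real cognitive engine would add the knowledge base and request prediction on top of such a link-capacity check.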

Keywords: data grid, grid workflow scheduling, cognitive artificial intelligence

Procedia PDF Downloads 395