Search results for: embedded network
2817 A Survey of Dynamic QoS Methods in Software Defined Networking
Authors: Vikram Kalekar
Abstract:
Modern Internet Protocol (IP) networks deploy traditional and modern Quality of Service (QoS) management methods to ensure the smooth flow of network packets during regular operations. SDN (Software-defined networking) networks have also made headway into better service delivery by means of novel QoS methodologies. While many of these techniques are experimental, some of them have been tested extensively in controlled environments, and a few of them have the potential to be deployed widely in the industry. With this survey, we plan to analyze the approaches to QoS and resource allocation in SDN, and we will try to comment on possible improvements to QoS management in the context of SDN.
Keywords: QoS, policy, congestion, flow management, latency, delay, SDN
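As an illustration of the kind of mechanism such QoS surveys cover, the sketch below implements a token bucket, a classical traffic-shaping technique for rate-limiting packet flows. This is a generic textbook example under assumed parameters, not a method proposed in this survey, and all names in it are hypothetical.

```python
class TokenBucket:
    """Classical token-bucket traffic shaper (illustrative sketch only)."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size in tokens
        self.tokens = capacity    # bucket starts full
        self.last = 0.0           # timestamp of the last check

    def allow(self, now, packet_size=1):
        """Return True if a packet of packet_size tokens may pass at time now."""
        # Refill tokens for the elapsed interval, capped at the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True
        return False
```

In use, a flow that exceeds its configured rate is throttled until the bucket refills, which is one simple way controllers enforce per-flow bandwidth policies.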
Procedia PDF Downloads 193
2816 Review of Different Machine Learning Algorithms
Authors: Syed Romat Ali Shah, Bilal Shoaib, Saleem Akhtar, Munib Ahmad, Shahan Sadiqui
Abstract:
Classification is a data mining technique based on Machine Learning (ML) algorithms. It is used to classify individual items in a known collection of information into a set of predefined modules or groups. Web mining is also a part of this family of data mining methods. The main purpose of this paper is to analyze and compare the performance of the Naïve Bayes algorithm, Decision Tree, K-Nearest Neighbor (KNN), Artificial Neural Network (ANN) and Support Vector Machine (SVM). This paper covers these different ML algorithms, their advantages and disadvantages, and also defines open research issues.
Keywords: data mining, web mining, classification, ML algorithms
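Since the paper compares classifiers rather than defining them, a minimal sketch of one of the compared algorithms, K-Nearest Neighbor, may make the comparison concrete. This is a generic textbook implementation on hypothetical data, not the authors' code.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Minimal K-Nearest-Neighbor sketch: train is a list of
    (feature_vector, label) pairs; returns the majority label
    among the k points closest to query (squared Euclidean distance)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    neighbors = sorted(train, key=lambda item: dist(item[0], query))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]
```

KNN's simplicity (no training phase, one hyperparameter) is exactly the kind of trade-off such comparative surveys weigh against SVM or ANN training cost.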
Procedia PDF Downloads 303
2815 Sharing Tacit Knowledge: The Essence of Knowledge Management
Authors: Ayesha Khatun
Abstract:
In the 21st century, where markets are unstable, technologies rapidly proliferate, competitors multiply, products and services become obsolete almost overnight, and customers demand low-cost, high-value products, leveraging and harnessing knowledge is not just a potential source of competitive advantage but a necessity in technology-based and information-intensive industries. Knowledge management focuses on leveraging the available knowledge and sharing it among the individuals in the organization so that employees can make the best use of it towards achieving the organizational goals. Knowledge is not a discrete object. It is embedded in people and so difficult to transfer outside the immediate context that it becomes a major competitive advantage. However, internal transfer of knowledge among employees is essential to maximize the use of the knowledge available in the organization in an unstructured manner. But just as knowledge is a source of competitive advantage for the organization, it is also a source of competitive advantage for individuals. People think that knowledge is power and that sharing it may lead to losing their competitive position. Moreover, the very nature of tacit knowledge poses many difficulties in sharing it. Yet sharing tacit knowledge is the vital part of the knowledge management process, because it is tacit knowledge that is inimitable. Knowledge management has been made synonymous with the use of software and technology, leading to the management of explicit knowledge only, ignoring personal interaction and the forming of informal networks, which are considered the most successful means of sharing tacit knowledge. Factors responsible for effective sharing of tacit knowledge are grouped into individual, organizational and technological factors. Different factors under each category have been identified.
Creating a positive organizational culture, encouraging personal interaction, and practicing a reward system are some of the strategies that can help overcome many of the barriers to effective sharing of tacit knowledge. The methodology applied here is entirely secondary: an extensive review of the relevant literature has been undertaken for the purpose.
Keywords: knowledge, tacit knowledge, knowledge management, sustainable competitive advantage, organization, knowledge sharing
Procedia PDF Downloads 398
2814 Determining Design Parameters for Sizing of Hydronic Heating Systems in Concrete Thermally Activated Building Systems
Authors: Rahmat Ali, Inamullah Khan, Amjad Naseer, Abid A. Shah
Abstract:
Hydronic heating and cooling systems in concrete-slab-based buildings are increasingly becoming a popular substitute for conventional heating and cooling systems. In exploring the materials, the techniques employed, and their relative performance measures, a fair bit of uncertainty exists. This research has identified the simplest method of determining the thermal field of a single hydronic pipe acting as part of a concrete slab, on the basis of which the spacing and positioning of pipes for the best thermal performance and surface temperature control are determined. The pipe material chosen is the commonly used PEX pipe, which has all-around performance and thermal characteristics, with a thermal conductivity of 0.5 W/mK. Concrete test samples were constructed and their thermal fields tested under varying input conditions. Temperature-sensing devices were embedded into the wet concrete at fixed distances from the pipe, and other touch-sensing temperature devices were employed for determining the extent of the thermal field and for validation studies. In the first stage, it was found that the temperature along a specific distance was the same and that heat dissipation occurred in well-defined layers. The temperature obtained in the concrete was then related to the different control parameters, including water supply temperature. From the results, the temperature of water required for a specific temperature rise in the concrete is determined. The thermally effective area is also determined, which is then used to calculate the pipe spacing and positioning for the desired level of thermal comfort.
Keywords: thermally activated building systems, concrete slab temperature, thermal field, energy efficiency, thermal comfort, pipe spacing
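The sizing logic described above, relating supply water temperature to a desired slab surface temperature, can be sketched with a simple one-dimensional steady-state thermal-resistance model. This model, and every value in it, is an assumption for illustration only; the abstract's actual relation was determined empirically from the embedded sensors.

```python
def required_supply_temperature(t_room, t_surface, r_slab, r_surface):
    """Hedged sketch of hydronic sizing logic (assumed 1-D steady-state
    resistance model, NOT the authors' measured relation).

    t_room, t_surface : temperatures in degrees C
    r_slab, r_surface : thermal resistances in m^2*K/W
    """
    # Heat flux the slab surface must deliver to hold t_surface against the room.
    flux = (t_surface - t_room) / r_surface      # W/m^2
    # Water must be hotter than the surface by the drop across the slab layer.
    return t_surface + flux * r_slab             # degrees C at the pipe
```

For example, holding a 26 °C surface against a 20 °C room with the assumed resistances yields a supply temperature of 35 °C; real designs would replace the resistances with measured thermal-field data.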
Procedia PDF Downloads 337
2813 Songwriting in the Postdigital Age: Using TikTok and Instagram as Online Informal Learning Technologies
Authors: Matthias Haenisch, Marc Godau, Julia Barreiro, Dominik Maxelon
Abstract:
In times of ubiquitous digitalization and the increasing entanglement of humans and technologies in musical practices in the 21st century, it must be asked how popular musicians learn in the (post)digital age. Against the backdrop of the increasing interest in transferring informal learning practices into formal settings of music education, the interdisciplinary research association »MusCoDA – Musical Communities in the (Post)Digital Age« (University of Erfurt/University of Applied Sciences Clara Hoffbauer Potsdam, funded by the German Ministry of Education and Research) pursues the goal of deriving an empirical model of collective songwriting practices from the study of the informal learning of songwriters and bands that can be translated into pedagogical concepts for music education in schools. Drawing on concepts from Community of Musical Practice and Actor-Network Theory, learning is considered not only as social practice and as participation in online and offline communities, but also as an effect of heterogeneous networks composed of human and non-human actors. Learning is not seen as an individual, cognitive process, but as the formation and transformation of actor networks, i.e., as a practice of assembling and mediating humans and technologies. Based on video-stimulated recall interviews and videography of online and offline activities, songwriting practices are followed from the initial idea to different forms of performance and distribution. The data evaluation combines coding and mapping methods of Grounded Theory Methodology and Situational Analysis. This results in network maps in which both the temporality of creative practices and the material and spatial relations of human and technological actors are reconstructed. In addition, positional analyses document the power relations between the participants that structure the learning process of the field.
In the area of online informal learning, initial key research findings reveal a transformation of the learning subject through the specific technological affordances of TikTok and Instagram and the accompanying changes in the learning practices of the corresponding online communities. Learning is explicitly shaped by the material agency of online tools and features and the social practices entangled with these technologies. Thus, any human online community member can be invited to directly intervene in creative decisions that contribute to the further compositional and structural development of songs. At the same time, participants can provide each other with intimate insights into songwriting processes in progress and have the opportunity to perform together with strangers and idols. Online learning is characterized by an increase in social proximity, distribution of creative agency, and informational exchange between participants. While it seems obvious that traditional notions not only of learning but also of the learning subject cannot be maintained, the question arises how exactly the observed informal learning practices, and the subject that emerges from the use of social media as online learning technologies, can be transferred into contexts of formal learning.
Keywords: informal learning, postdigitality, songwriting, actor-network theory, community of musical practice, social media, TikTok, Instagram, apps
Procedia PDF Downloads 126
2812 Behavioral Pattern of 2G Mobile Internet Subscribers: A Study on an Operator of Bangladesh
Authors: Azfar Adib
Abstract:
Like many other countries of the world, Bangladesh has seen mobile internet play a key role in the growth of its internet subscriber base. This study has attempted to identify particular behavioral or usage patterns of 2G mobile internet subscribers who were using the service of the topmost internet service provider (as well as the top mobile operator) of Bangladesh prior to the launch of 3G services (when 2G was fully dominant). It contains some comprehensive analysis of different information regarding 2G mobile internet subscribers, obtained from the operator's own network insights. This is accompanied by the results of a survey conducted among 40 high-frequency users of this service.
Keywords: mobile internet, Symbian, Android, iPhone
Procedia PDF Downloads 438
2811 Ontology-Based Approach for Temporal Semantic Modeling of Social Networks
Authors: Souâad Boudebza, Omar Nouali, Faiçal Azouaou
Abstract:
Social networks have recently gained growing interest on the web. Traditional formalisms for representing social networks are static and suffer from a lack of semantics. In this paper, we show how semantic web technologies can be used to model social data. The SemTemp ontology aligns and extends existing ontologies such as FOAF, SIOC, SKOS and OWL-Time to provide a temporal and semantically rich description of social data. We also present a modeling scenario to illustrate how our ontology can be used to model social networks.
Keywords: ontology, semantic web, social network, temporal modeling
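The core idea of temporally qualifying social relations can be sketched in plain Python, in the spirit of aligning FOAF-style links with OWL-Time intervals. This is an illustrative data-structure sketch, not the SemTemp ontology itself; the relation names and dates are hypothetical.

```python
from datetime import date

def add_relation(graph, subj, pred, obj, start, end=None):
    """Store a temporally qualified triple; end=None means still valid."""
    graph.append({"s": subj, "p": pred, "o": obj, "start": start, "end": end})

def relations_at(graph, when):
    """Return the triples whose validity interval covers the given date."""
    return [t for t in graph
            if t["start"] <= when and (t["end"] is None or when <= t["end"])]
```

Querying the graph at different dates then yields different snapshots of the network, which is exactly what static formalisms cannot express.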
Procedia PDF Downloads 387
2810 Localized Variabilities in Traffic-related Air Pollutant Concentrations Revealed Using Compact Sensor Networks
Authors: Eric A. Morris, Xia Liu, Yee Ka Wong, Greg J. Evans, Jeff R. Brook
Abstract:
Air quality monitoring stations tend to be widely distributed and are often located far from major roadways; thus, determining where, when, and which traffic-related air pollutants (TRAPs) have the greatest impact on public health becomes a matter of extrapolation. Compact, multipollutant sensor systems are an effective solution as they enable several TRAPs to be monitored in a geospatially dense network, thus filling in the gaps between conventional monitoring stations. This work describes two applications of one such system, named AirSENCE, for gathering actionable air quality data relevant to smart city infrastructures. In the first application, four AirSENCE devices were co-located with traffic monitors around the perimeter of a city block in Oshawa, Ontario. This study, which coincided with the COVID-19 outbreak of 2020 and subsequent lockdown measures, demonstrated a direct relationship between decreased traffic volumes and TRAP concentrations. Conversely, road construction was observed to cause elevated TRAP levels while reducing traffic volumes, illustrating that conventional smart city sensors such as traffic counters provide inadequate data for inferring air quality conditions. The second application used two AirSENCE sensors on opposite sides of a major two-way commuter road in Toronto. Clear correlations of TRAP concentrations with wind direction were observed, which shows that impacted areas are not necessarily static and may exhibit high day-to-day variability in air quality conditions despite consistent traffic volumes. Both of these applications provide compelling evidence favouring the inclusion of air quality sensors in current and future smart city infrastructure planning.
Such sensors provide direct measurements that are useful for public health alerting as well as decision-making for projects involving traffic mitigation, heavy construction, and urban renewal efforts.
Keywords: distributed sensor network, continuous ambient air quality monitoring, smart city sensors, Internet of Things, traffic-related air pollutants
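The relationship between traffic volumes and TRAP concentrations reported above is a correlation analysis at heart. A minimal sketch of that step, using a plain Pearson correlation on hypothetical co-located sensor and traffic-counter readings (not the study's actual data), might look like:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series,
    e.g. hourly TRAP concentrations vs. co-located traffic counts."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A coefficient near +1 would match the lockdown finding (less traffic, lower TRAPs), while the road-construction case shows why a single counter-based proxy can mislead: the correlation can invert even as traffic falls.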
Procedia PDF Downloads 72
2809 Routing Metrics and Protocols for Wireless Mesh Networks
Authors: Samira Kalantary, Zohre Saatzade
Abstract:
Wireless Mesh Networks (WMNs) are low-cost access networks built on cooperative routing over a backbone composed of stationary wireless routers. WMNs must deal with a highly unstable wireless medium; thus, routing metrics and protocols are evolving through algorithms that consider link quality when choosing the best routes. In this work, we analyse the state of the art in WMN metrics and propose a taxonomy for WMN routing protocols. Performance measurements of a wireless mesh network deployed using various routing metrics are presented and corroborate our analysis.
Keywords: wireless mesh networks, routing protocols, routing metrics, bioinformatics
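Quality-aware route selection of the kind surveyed here typically reduces to shortest-path search over additive link metrics such as ETX (expected transmission count). The sketch below is a generic Dijkstra implementation over such a metric, not any specific WMN protocol from the paper; the topology and metric values are hypothetical.

```python
import heapq

def best_path(graph, src, dst):
    """Dijkstra over additive link-quality metrics (e.g. ETX).
    graph maps node -> {neighbor: metric}; lower totals are better.
    Returns (path, total_metric)."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]
```

With ETX-like weights, a two-hop route over reliable links can beat a single lossy hop, which is the core argument for link-quality metrics over plain hop count.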
Procedia PDF Downloads 453
2808 Digital Athena – Contemporary Commentaries and Greek Mythology Explored through 3D Printing
Authors: Rose Lastovicka, Bernard Guy, Diana Burton
Abstract:
Greek myth and art acted as tools to think with, and as a lens through which to explore complex topics, functioning as a form of social media. In particular, coins were a form of propaganda communicating the wealth and power of the city-states they originated from as they circulated from person to person. From this, how can the application of 3D printing technologies explore the infusion of ancient forms with contemporary commentaries to promote discussion? The digital reconstruction of artifacts is a topic that has been researched by various groups all over the globe. Yet the exploration of Greek myth through artifacts infused with contemporary issues has so far been untouched in this medium. Using the Stratasys J750 3D printer, a multi-material, full-colour 3D printer, a series of coins inspired by ancient Greek currency and myth was created to present commentaries on the adversities surrounding individuals in the LGBT+ community. Using the J750 as the medium for expression allows complete control and precision over the models to create complex, high-resolution iconography. The coins are printed with a hard, translucent material with coloured 3D visuals embedded into the coin, to be viewed in close contact by the audience. These coins as commentaries present an avenue for wider understanding by drawing perspectives not only from sources concerned with the contemporary LGBT+ community but also from sources exploring ancient homosexuality and its perception and regulation in antiquity. By displaying what are usually points of contention between anti- and pro-LGBT+ parties, this visual medium opens up a discussion to both parties, suggesting heritage can play a vital interpretative role in the contemporary world.
Keywords: 3D printing, design, Greek mythology, LGBT+ community
Procedia PDF Downloads 116
2807 Graphene-reinforced Metal-organic Framework Derived Cobalt Sulfide/Carbon Nanocomposites as Efficient Multifunctional Electrocatalysts
Authors: Yongde Xia, Laicong Deng, Zhuxian Yang
Abstract:
Developing cost-effective electrocatalysts for the oxygen reduction reaction (ORR), oxygen evolution reaction (OER) and hydrogen evolution reaction (HER) is vital for energy conversion and storage applications. Herein, we report a simple method for the synthesis of graphene-reinforced cobalt sulfide/carbon nanocomposites and the evaluation of their performance in typical electrocatalytic reactions. Nanocomposites of cobalt sulfide embedded in N, S co-doped porous carbon and graphene (CoS@C/Graphene) were generated via simultaneous sulfurization and carbonization of one-pot-synthesized graphite oxide-ZIF-67 precursors. The obtained CoS@C/Graphene nanocomposite was characterized by X-ray diffraction, Raman spectroscopy, thermogravimetric analysis-mass spectrometry, scanning electron microscopy, transmission electron microscopy, X-ray photoelectron spectroscopy and gas sorption. It was found that cobalt sulfide nanoparticles were homogeneously dispersed in the in-situ formed N, S co-doped porous carbon/graphene matrix. The CoS@C/10Graphene composite not only shows excellent electrocatalytic activity toward the ORR, with a high onset potential of 0.89 V, a four-electron pathway and the superior durability of maintaining 98% of its current after continuously running for around 5 hours, but also exhibits good performance for the OER and HER, owing to improved electrical conductivity, increased catalytic active sites and connectivity between the electrocatalytically active cobalt sulfide and the carbon matrix. This work offers a new approach for the development of novel multifunctional nanocomposites for the next generation of energy conversion and storage applications.
Keywords: MOF derivative, graphene, electrocatalyst, oxygen reduction reaction, oxygen evolution reaction, hydrogen evolution reaction
Procedia PDF Downloads 50
2806 Geographic Information System Cloud for Sustainable Digital Water Management: A Case Study
Authors: Mohamed H. Khalil
Abstract:
Water is one of the most crucial elements influencing human lives and development. Notably, over the last few years, GIS has played a significant role in optimizing water management systems, especially after exponential development in this sector. In this context, the Egyptian government initiated an advanced 'GIS-Web Based System'. This system is efficiently designed to tangibly assist and optimize the complementarity and integration of data between the departments of Call Center, Operation and Maintenance, and Laboratory. The core of this system is a unified 'Data Model' for all the spatial and tabular data of the corresponding departments. The system is professionally built to provide advanced functionalities such as interactive data collection, dynamic monitoring, multi-user editing capabilities, enhanced data retrieval, an integrated workflow, different access levels, and correlative information record/track. Notably, this cost-effective system contributes significantly not only to the completeness of the base map (93%) and the water network (87%) in a highly detailed GIS format and to the enhancement of customer service performance, but also to reducing operating costs/day-to-day operations (~5-10%). In addition, the proposed system facilitates data exchange between different departments (Call Center, Operation and Maintenance, and Laboratory), which allows a better understanding and analysis of complex situations. Furthermore, this system is tangibly reflected in: (i) dynamic environmental monitoring of water quality indicators (ammonia, turbidity, TDS, sulfate, iron, pH, etc.), (ii) improved effectiveness of the different water departments, (iii) efficient deep advanced analysis, (iv) advanced web-reporting tools (daily, weekly, monthly, quarterly, and annually), (v) tangible planning synthesizing spatial and tabular data; and finally, (vi) a scalable decision support system.
It is worth highlighting that the proposed future plan (second phase) of this system will extend its scalability to include integration with the departments of Billing and SCADA. This scalability will add advanced functionalities to the existing ones to allow further sustainable contributions.
Keywords: GIS web-based system, base map, water network, decision support system
Procedia PDF Downloads 96
2805 A Corpus-Based Analysis of "MeToo" Discourse in South Korea: Coverage Representation in Korean Newspapers
Authors: Sun-Hee Lee, Amanda Kraley
Abstract:
The “MeToo” movement is a social movement against sexual abuse and harassment. Though the hashtag went viral in 2017 following different cultural flashpoints in different countries, the initial response was quiet in South Korea. This changed radically in January 2018, when a high-ranking senior prosecutor, Seo Ji-hyun, gave a televised interview discussing being sexually assaulted by a colleague. Acknowledging public anger, particularly among women, over the long-existing problems of sexual harassment and abuse, the South Korean media have focused on several high-profile cases. Analyzing the media representation of these cases is a window into the evolving South Korean discourse around “MeToo.” This study presents a linguistic analysis of “MeToo” discourse in South Korea utilizing a corpus-based approach. The term corpus (pl. corpora) refers to electronic language data, that is, any collection of recorded instances of spoken or written language. A “MeToo” corpus has been collected by extracting newspaper articles containing the keyword “MeToo” from BIGKinds, a big data analysis service, and Nexis Uni, an online academic database search engine, to conduct this language analysis. The corpus analysis explores how Korean media represent accusers and the accused, victims and perpetrators. The extracted data include 5,885 articles from four broadsheet newspapers (Chosun, JoongAng, Hangyore, and Kyunghyang) and 88 articles from two Korea-based English newspapers (Korea Times and Korea Herald) between January 2017 and November 2020. The study includes basic data analysis with respect to keyword frequency and network analysis, and adds refined examinations of select corpus samples through naming strategies, semantic relations, and pragmatic properties.
Along with the exponential increase in the number of articles containing the keyword “MeToo” from 104 articles in 2017 to 3,546 articles in 2018, the network and keyword analysis highlights 'US,' 'Harvey Weinstein,' and 'Hollywood' as keywords for 2017, with articles in 2018 highlighting 'Seo Ji-hyun,' 'politics,' 'President Moon,' 'An Ui-jeong,' 'Lee Yoon-taek' (the names of perpetrators), and '(Korean) society.' This outcome demonstrates the shift of media focus from international affairs to domestic cases. Another crucial finding is that the word 'defamation' is widely distributed in the “MeToo” corpus. This relates to the South Korean legal system, in which a person who defames another by publicly alleging information detrimental to their reputation, whether factual or fabricated, is punishable by law (Article 307 of the Criminal Act of Korea). If the defamation occurs on the internet, it is subject to aggravated punishment under the Act on Promotion of Information and Communications Network Utilization and Information Protection. These laws, in particular, have been used against accusers who have publicly come forward in the wake of “MeToo” in South Korea, adding an extra dimension of risk. This corpus analysis of “MeToo” newspaper articles contributes to the analysis of the media representation of the “MeToo” movement and sheds light on the shifting landscape of gender relations in the public sphere in South Korea.
Keywords: corpus linguistics, MeToo, newspapers, South Korea
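The keyword-frequency step of a corpus analysis like the one above can be sketched in a few lines. This is a generic illustration on hypothetical article texts, not the BIGKinds/Nexis pipeline the authors used.

```python
from collections import Counter
import re

def keyword_frequencies(articles, keywords):
    """Count occurrences of each keyword across a list of article texts
    (case-insensitive, whole-token matches)."""
    counts = Counter()
    for text in articles:
        tokens = re.findall(r"[\w']+", text.lower())
        for kw in keywords:
            counts[kw] += tokens.count(kw.lower())
    return counts
```

Tracking such counts per year is exactly how the jump from 104 to 3,546 “MeToo” articles, and the prominence of terms like 'defamation', becomes visible.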
Procedia PDF Downloads 223
2804 Primary Melanocytic Tumors of the Central Nervous System: A Clinico-Pathological Study of Seven Cases
Authors: Sushila Jaiswal, Awadhesh Kumar Jaiswal
Abstract:
Background: Primary melanocytic tumors of the central nervous system (CNS) are uncommon lesions that arise from the melanocytes located within the leptomeninges. Aim and objective: The aim of the study was to evaluate the clinical details and histomorphology of primary melanocytic tumors of the CNS. Method: The study was performed through a retrospective review of the case records of the primary melanocytic tumors of the CNS diagnosed in our department. The formalin-fixed, paraffin-embedded tissue blocks and tissue sections were retrieved and reviewed. Results: Seven cases (6 males, 1 female; age range 16-40 years; mean age 27 years) of primary melanocytic tumors of the CNS were retrieved over the last seven years. The tumors were intracranial (n=5; frontal, 1 case; parietal, 1 case; cerebello-pontine angle, 1 case; occipital, 1 case; foramen magnum, 1 case) and intraspinal (n=2; cervical, 2 cases). All patients presented with neurological deficits related to the location of the tumor. Four cases were malignant melanoma, two were melanocytoma of intermediate grade, and the remaining one was melanocytoma. On histopathology, melanocytoma and melanoma both displayed sheets of well-differentiated melanocytes having round to oval nuclei with finely dispersed chromatin, occasional single eosinophilic nucleoli, and a moderate amount of cytoplasm with abundant granular melanin pigment. The absence of mitosis and macronucleoli was noted in melanocytoma, while melanoma showed frequent mitosis and macronucleoli. On immunohistochemistry, both showed diffuse strong HMB45 and S-100 immunopositivity. Conclusion: Primary melanocytic tumors of the CNS are rare and predominantly seen in males. It is important to differentiate melanoma from melanocytoma, as the prognosis of the latter is good.
Keywords: melanocytoma, melanoma, brain tumor, melanin
Procedia PDF Downloads 233
2803 Looking beyond Lynch's Image of a City
Authors: Sandhya Rao
Abstract:
Kevin Lynch’s theory of imageability lets one explore a city in terms of five elements: nodes, paths, edges, landmarks and districts. What happens when we try to record the same data in an Indian context? What happens when we apply the same theory of imageability to the complex, shifting urban patterns of Indian cities, and how can we as urban designers demonstrate our role in the image-building ordeal of these cities? The organizational patterns formed through mental images of an Indian city are often diverse and intangible. They are also multi-layered and temporary in terms of the spirit of the place. The pattern of images formed is loaded with associative meaning and intrinsically linked with the history and socio-cultural dominance of the place. The embedded memory of a place in one’s mind often plays an even more important role while formulating these images. Thus, while deriving an image of a city, one is often confused or finds the result chaotic. The images formed, due to their complexity, are further difficult to represent using a single medium. Under such a scenario, it is difficult to derive an output of an image constructed as well as to make design interventions to enhance the legibility of a place. However, there can be a combination of tools and methods that allows one to record the key elements of a place through time, space and one’s interface with the place. There has to be a clear understanding of the participant groups of a place and their time and period of engagement with the place as well. How we can translate the result obtained into a design intervention in the end is the main aim of the research. Could a multi-faceted cognitive mapping be an answer to this, or could it be a very transient mapping method which changes over time, place and person? How does the context influence the process of image building in one’s mind?
These are the key questions that this research will aim to answer.
Keywords: imageability, organizational patterns, legibility, cognitive mapping
Procedia PDF Downloads 313
2802 Social-Cognitive Aspects of Interpretation: Didactic Approaches in Language Processing and English as a Second Language Difficulties in Dyslexia
Authors: Schnell Zsuzsanna
Abstract:
Background: The interpretation of written texts and language processing in the visual domain, in other words atypical reading ability, also known as dyslexia, is an ever-growing phenomenon in today’s societies and educational communities. The much-researched problem affects cognitive abilities and, coupled with normal intelligence, typically manifests as difficulties in the differentiation of sounds and orthography and in the holistic processing of written words. The factors of susceptibility are varied: social, cognitive-psychological, and linguistic factors interact with each other. Methods: The research will explain the psycholinguistics of dyslexia on the basis of several empirical experiments and demonstrate how the domain-general abilities of inhibition, retrieval from the mental lexicon, priming, phonological processing, and visual modality transfer affect successful language processing and interpretation. Interpretation of visual stimuli is hindered, and the problem seems to be embedded in a sociocultural, psycholinguistic, and cognitive background. This makes the picture even more complex, suggesting that understanding and resolving the issues of dyslexia has to be interdisciplinary, aided by several disciplines in the field of humanities and social sciences, and should be researched from an empirical approach, where the practical, educational corollaries can be analyzed on an applied basis. Aim and applicability: The lecture sheds light on the applied, cognitive aspects of interpretation, the social-cognitive traits of language processing, and the mental underpinnings of cognitive interpretation strategies in different languages (namely, Hungarian and English), offering solutions with a few applied techniques for success in foreign language learning that can be useful advice for the developers of testing methodologies and measures across ESL teaching and testing platforms.
Keywords: dyslexia, social cognition, transparency, modalities
Procedia PDF Downloads 84
2801 poly(N-Isopropylacrylamide)-Polyvinyl Alcohol Semi-Interpenetrating Network Hydrogel for Wound Dressing
Authors: Zi-Yan Liao, Shan-Yu Zhang, Ya-Xian Lin, Ya-Lun Lee, Shih-Chuan Huang, Hong-Ru Lin
Abstract:
Traditional wound dressings, such as gauze and bandages, easily adhere to the tissue fluid exuded from the wound, causing secondary damage to the wound during removal. This study takes this as its starting point to develop a hydrogel dressing that will not cause secondary damage to the wound when it is removed, while at the same time creating an environment conducive to wound healing. First, the temperature-sensitive material N-isopropylacrylamide (NIPAAm) was used as the substrate. Because of its low mechanical properties, the hydrogel would break under pulling during human activities, so polyvinyl alcohol (PVA) was interpenetrated into it to enhance the mechanical properties, and a semi-interpenetrating network (semi-IPN) composed of poly(N-isopropylacrylamide) (PNIPAAm) and polyvinyl alcohol (PVA) was prepared by free-radical polymerization. PNIPAAm was cross-linked with N,N'-methylenebisacrylamide (NMBA) in an ice bath in the presence of linear PVA, and tetramethylethylenediamine (TEMED) was added as a promoter to speed up gel formation. The polymerization stage was carried out at 16°C for 17 hours; after gel formation, the gel was washed with distilled water for three days, with the water changed several times in between, to complete the preparation of the semi-IPN hydrogel. Finally, various tests were used to analyze the effects of different ratios of PNIPAAm and PVA on the semi-IPN hydrogels. In the swelling test, it was found that the maximum swelling ratio can reach about 50% in an environment of 21°C, and the higher the proportion of PVA, the more water can be absorbed. The saturated moisture content test results show that the more PVA is added, the higher the saturated water content. The water vapor transmission rate test results show that the value for the semi-IPN hydrogel is about 57 g/m²/24 hr, which is largely independent of the proportion of PVA.
The LCST test found that the semi-IPN hydrogel possesses the same lower critical solution temperature (30-35°C) as the PNIPAAm hydrogel. The semi-IPN hydrogel prepared in this study responds well to temperature and is therefore thermosensitive. It is expected that, after further improvement, it can be used in the treatment of surface wounds, overcoming the shortcomings of traditional dressings.
Keywords: hydrogel, N-isopropylacrylamide, polyvinyl alcohol, hydrogel wound dressing, semi-interpenetrating polymer network
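The two quantities reported above can be computed from simple mass measurements. The sketch below uses one common definition of each (the abstract does not state the exact formulas used), with hypothetical sample masses and film area chosen only to reproduce figures of the same order as those reported:

```python
def swelling_ratio(wet_mass_g, dry_mass_g):
    """Swelling ratio as percent mass gain relative to the dry gel
    (one common definition; the abstract does not state the formula used)."""
    return 100.0 * (wet_mass_g - dry_mass_g) / dry_mass_g

def water_vapor_transmission_rate(mass_loss_g, area_m2, hours=24.0):
    """WVTR in g/m^2/24hr from the water mass lost through the dressing."""
    return (mass_loss_g / area_m2) * (24.0 / hours)

# Hypothetical semi-IPN sample: 0.40 g dry, 0.60 g after equilibrium swelling
print(swelling_ratio(0.60, 0.40))                    # about 50%
# Hypothetical film: 0.057 g of vapor through 1e-3 m^2 in 24 h (~57 g/m^2/24hr)
print(water_vapor_transmission_rate(0.057, 1e-3))
```

Both numbers are illustrative stand-ins; the study's actual masses and test areas are not given in the abstract.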
Procedia PDF Downloads 80
2800 Societal Resilience Assessment in the Context of Critical Infrastructure Protection
Authors: Hannah Rosenqvist, Fanny Guay
Abstract:
Critical infrastructure protection has been an important topic for several years. Programmes such as the European Programme for Critical Infrastructure Protection (EPCIP), the Critical Infrastructure Warning Information Network (CIWIN) and the European Reference Network for Critical Infrastructure Protection (ERNCIP) have been the pillars of the work done since 2006. However, measuring critical infrastructure resilience has not been an easy task. This is because the concept of resilience has several definitions and is applied in different domains, such as engineering and the social sciences. Since June 2015, the EU project IMPROVER has focused on developing a methodology for implementing a combination of societal, organizational and technological resilience concepts, with the aim of increasing critical infrastructure resilience. For this paper, we investigated how to include societal resilience as a form of measurement in the context of critical infrastructure resilience. Because one of the main purposes of critical infrastructure (CI) is to deliver services to society, we believe that societal resilience is an important factor that should be considered when assessing overall CI resilience. We found that existing methods for CI resilience assessment focus mainly on technical aspects, and that it was therefore necessary to develop a resilience model that takes social factors into account. The model developed within the IMPROVER project aims to include the community’s expectations of infrastructure operators as well as information sharing with the public and planning processes. By considering such aspects, the IMPROVER framework not only helps operators to increase the resilience of their infrastructures on the technical or organizational side, but aims to strengthen community resilience as a whole. This will further be achieved by taking interdependencies between critical infrastructures into consideration.
The knowledge gained during this project will enrich current European policies and practices for improved disaster risk management. The framework for societal resilience analysis is based on three dimensions of societal resilience: coping capacity, adaptive capacity and transformative capacity, capacities that have been recognized throughout a widespread literature review in the field. A set of indicators has been defined that describes a community’s maturity within these resilience dimensions. Further, the indicators are categorized into six community assets that need to be accessible and utilized in such a way that they allow responding to changes and unforeseen circumstances. We conclude that the societal resilience model developed within the IMPROVER project can give critical infrastructure operators a good indication of the level of societal resilience.
Keywords: community resilience, critical infrastructure protection, critical infrastructure resilience, societal resilience
Procedia PDF Downloads 230
2799 Enhancing Traditional Saudi Designs Pattern Cutting to Integrate Them Into Current Clothing Offers
Authors: Faizah Almalki, Simeon Gill, Steve G. Hayes, Lisa Taylor
Abstract:
A core element of cultural identity is the traditional costume, which provides insight into heritage acquired over time. This heritage is apparent in the use of colour, the styles and the functions of the clothing, and it also reflects the skills of those who created the items and the time taken to produce them. Modern flat pattern drafting methods for making garment patterns are simple in comparison to the relatively laborious traditional approaches, which required personal interaction with the wearer throughout the production process. The current study reflects on the main elements of the pattern cutting system and how it has evolved in Saudi Arabia to affect the design of the Sawan garment. Analysis of the traditional methods for constructing Sawan garments was undertaken through observation of the practice and the garments and by consulting documented guidance. This provided a foundation from which to explore how modern technology can be applied to improve the process. In this research, modern methods are proposed for producing traditional Saudi garments more efficiently while retaining elements of the conventional style and design. The study has documented the vital aspects of the Sawan garment style. The results showed that the traditional method of taking body measurements and making patterns was elementary, yielding simple geometric shapes, and that the Sawan garment is composed of four pieces. Consequently, this research allows classical pattern shapes to be embedded in garments now worn in Saudi Arabia and supports the continuation of cultural heritage.
Keywords: traditional Sawan garment technique, modern pattern cutting technique, the shape of the garment and software, Lectra Modaris
Procedia PDF Downloads 132
2798 Methodology for the Multi-Objective Analysis of Data Sets in Freight Delivery
Authors: Dale Dzemydiene, Aurelija Burinskiene, Arunas Miliauskas, Kristina Ciziuniene
Abstract:
Data flows and the purposes of reporting data differ depending on business needs. Different parameters are reported and transferred regularly during freight delivery. These business practices form the dataset constructed for each time point, containing all the information required for freight moving decisions. As a significant amount of these data is used for various purposes, an integrated methodological approach must be developed to respond to the indicated problem. The proposed methodology contains several steps: (1) collecting context data sets and validating the data; (2) multi-objective analysis for optimizing freight transfer services. For data validation, the study involves Grubbs' outlier analysis, particularly for data cleaning and for identifying the statistical significance of data reporting event cases. The Grubbs test is often used because it tests one extreme value at a time against the bounds of the standard normal distribution. In the study area, the test has not been widely applied by authors, except where the Grubbs test was used to detect outliers in fuel consumption data. In this study, the authors applied the method with a confidence level of 99%. For the multi-objective analysis, the authors select those forms of genetic algorithm construction that are most likely to extract the best solution. For freight delivery management, schemas based on the structure of genetic algorithms are used as a more effective technique; accordingly, an adaptable genetic algorithm is applied to describe the process of choosing an effective transportation corridor. In this study, multi-objective genetic algorithm methods are used to optimize the data evaluation and select the appropriate transport corridor.
The authors suggest a methodology for multi-objective analysis which evaluates collected context data sets and uses this evaluation to determine a delivery corridor for freight transfer services in the multi-modal transportation network. In the multi-objective analysis, the authors include safety components, the number of accidents per year, and freight delivery time in the multi-modal transportation network. The proposed methodology has practical value in the management of multi-modal transportation processes.
Keywords: multi-objective, analysis, data flow, freight delivery, methodology
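The Grubbs test used in the data-validation step can be sketched in Python. This is a generic two-sided, single-outlier implementation at the 99% confidence level the authors cite; the readings below are made up and merely stand in for a reported freight parameter such as fuel consumption:

```python
import numpy as np
from scipy import stats

def grubbs_test(values, alpha=0.01):
    """Two-sided Grubbs test for a single outlier.

    Returns (is_outlier, suspect_value). The statistic is the largest
    absolute deviation from the mean divided by the sample standard
    deviation, compared against the Grubbs critical value."""
    x = np.asarray(values, dtype=float)
    n = len(x)
    mean, sd = x.mean(), x.std(ddof=1)
    g = np.max(np.abs(x - mean)) / sd            # test statistic
    # critical value from the t-distribution (two-sided, alpha/(2n) tail)
    t_crit = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t_crit**2 / (n - 2 + t_crit**2))
    suspect = x[np.argmax(np.abs(x - mean))]
    return g > g_crit, suspect

# Hypothetical fuel-consumption-like readings with one implausible report
readings = [31.2, 30.8, 31.5, 30.9, 31.1, 45.0, 31.0, 30.7]
flag, value = grubbs_test(readings, alpha=0.01)
print(flag, value)   # the 45.0 reading is flagged
```

Because the test removes one extreme value at a time, a cleaning pass would re-run it on the remaining data until no further outlier is flagged.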
Procedia PDF Downloads 180
2797 An Ultrasonic Approach to Investigate the Effect of Aeration on Rheological Properties of Soft Biological Materials with Bubbles Embedded
Authors: Hussein M. Elmehdi
Abstract:
In this paper, we present the results of our recent experiments examining the effect of air bubbles, introduced into bio-samples during preparation, on the rheological properties of soft biological materials. To achieve this, we prepared three samples, each mixed differently. Our soft biological systems comprised three types of flour dough made from different flour varieties with variable protein concentrations: bread flour, wheat flour and all-purpose flour. The samples were investigated using low-frequency ultrasonic waves in transmission mode. During mixing, the main ingredient of the samples (the flour) was transformed into cohesive dough comprising a continuous dough matrix and air bubbles. The rheological properties of such materials determine the quality of the end cereal product. Two ultrasonic parameters, the longitudinal velocity and the attenuation coefficient, were found to be very sensitive to properties such as the size of the occluded bubbles, and hence have great potential for providing quantitative evaluation of the properties of such materials. The results showed that the magnitudes of the ultrasonic velocity and attenuation coefficient peaked at the optimum mixing time, the latter of which is taken as an indication of the end of the mixing process. There was agreement between the results obtained by conventional rheology and by ultrasound, showing the potential of ultrasound as an on-line quality control technique for dough-based products. The results of this work are explained with respect to the molecular changes occurring in the dough system as the mixing process proceeds; particular emphasis is placed on the presence of free water and bound water.
Keywords: ultrasound, soft biological materials, velocity, attenuation
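For a through-transmission measurement like the one described, the two ultrasonic parameters follow from textbook relations: velocity from sample thickness over transit time, and attenuation from the logarithmic amplitude drop across the sample. These are standard definitions, not necessarily the exact processing used in the study, and the numbers are hypothetical:

```python
import math

def ultrasound_params(thickness_m, transit_time_s, amp_in, amp_out):
    """Longitudinal velocity (m/s) and attenuation coefficient (dB/m)
    from a through-transmission measurement (textbook relations)."""
    velocity = thickness_m / transit_time_s
    # attenuation in dB per metre from the amplitude drop across the sample
    attenuation = (20.0 / thickness_m) * math.log10(amp_in / amp_out)
    return velocity, attenuation

# Hypothetical dough sample: 10 mm thick, 12.5 microsecond transit,
# amplitude dropping from 1.0 to 0.4 across the sample
v, a = ultrasound_params(0.010, 12.5e-6, 1.0, 0.4)
print(v, a)   # velocity in m/s, attenuation in dB/m
```

In a mixing experiment, tracking these two values against mixing time would show the peaks the abstract describes at optimum dough development.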
Procedia PDF Downloads 277
2796 Survey Paper on Graph Coloring Problem and Its Application
Authors: Prateek Chharia, Biswa Bhusan Ghosh
Abstract:
Graph coloring is one of the prominent concepts in graph theory. It can be defined as an assignment of colors to the various regions of a graph such that all the constraints are fulfilled. In this paper, various graph coloring approaches, such as greedy coloring, heuristic search for a maximum independent set, and graph coloring using an edge table, are described. Graph coloring can be used in various real-world applications such as student timetable generation, Sudoku as a graph coloring problem, and GSM phone networks.
Keywords: graph coloring, greedy coloring, heuristic search, edge table, Sudoku as a graph coloring problem
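Of the approaches surveyed, greedy coloring is the simplest to sketch: visit the vertices in some order and give each the smallest color not used by an already-colored neighbour. The timetabling example below is illustrative (the course names and conflicts are made up):

```python
def greedy_coloring(adjacency):
    """Greedy graph coloring: assign each vertex the smallest color
    not already used by any of its colored neighbours. The visiting
    order determines how many colors end up being used."""
    colors = {}
    for vertex in adjacency:
        used = {colors[n] for n in adjacency[vertex] if n in colors}
        color = 0
        while color in used:
            color += 1
        colors[vertex] = color
    return colors

# Exam timetabling: vertices are courses, edges join courses sharing students,
# colors are time slots (conflicting courses must not share a slot)
conflicts = {
    "math": ["physics", "chemistry"],
    "physics": ["math", "biology"],
    "chemistry": ["math"],
    "biology": ["physics"],
}
slots = greedy_coloring(conflicts)
print(slots)
print(max(slots.values()) + 1)   # number of time slots used
```

Greedy coloring does not guarantee the chromatic number, which is why the survey also covers heuristic search and edge-table methods.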
Procedia PDF Downloads 539
2795 The Impact of an Improved Strategic Partnership Programme on Organisational Performance and Growth of Firms in the Internet Protocol Television and Hybrid Fibre-Coaxial Broadband Industry
Authors: Collen T. Masilo, Brane Semolic, Pieter Steyn
Abstract:
The Internet Protocol Television (IPTV) and Hybrid Fibre-Coaxial (HFC) broadband industry landscape is rapidly changing, and organisations within the industry need to stay competitive by exploring new business models so that they can offer new services and products to customers. The business challenge in this industrial sector is meeting or exceeding high customer expectations across multiple content delivery modes. These increasing challenges encourage service providers to form strategic partnerships with key suppliers, marketing partners, advertisers, and technology partners. The need to form enterprise collaborative networks poses a challenge for any organisation in this sector: selecting the right strategic partners who will ensure that the organisation’s services and products are marketed in new markets, that customers are efficiently supported by meeting and exceeding their expectations, and that the organisation is represented in a positive manner, thereby contributing to improved organisational performance. Companies in the IPTV and HFC broadband sector tend to form informal partnerships with suppliers, vendors, system integrators and technology partners. Generally, partnerships are formed without thorough analysis of the real reason a company is forming collaborations, without proper evaluation of prospective partners against specific selection criteria, and with ineffective performance monitoring of partners to ensure that a firm gains real long-term benefits from its partners and gains competitive advantage. Similar tendencies are illustrated in the research case study, which is based on Skyline Communications, a global leader in end-to-end, multi-vendor network management and operational support systems (OSS) solutions.
The organisation’s flagship product is the DataMiner network management platform, used by many operators across multiple industries; it can be described as a smart system that intelligently manages complex technology ecosystems for its customers in the IPTV and HFC broadband industry. The approach of the research is to develop the most efficient business model that can be deployed to improve a strategic partnership programme, in order to significantly improve the performance and growth of organisations participating in a collaborative network in the IPTV and HFC broadband industrial sector. This involves proposing and implementing a new strategic partnership model, and its main features within the industry, which should bring significant benefits for all involved companies, achieving added value and an optimal growth strategy. The proposed business model has been developed based on research into existing relationships, value chains and business requirements in this industrial sector, and validated at Skyline Communications. The outputs of the business model have been demonstrated and evaluated in the research business case study of the IPTV and HFC broadband service provider Skyline Communications.
Keywords: growth, partnership, selection criteria, value chain
Procedia PDF Downloads 133
2794 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra
Authors: Bitewulign Mekonnen
Abstract:
Context: This scientific paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine-learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine-learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep-learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models, including support vector machine regression, partial least squares regression, extra tree regression, random forest regression, extreme gradient boosting, and principal component analysis-neural network, are employed to predict glucose concentration. The NIR spectra data is randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²) > 0.985. The deep learning model achieves high macro-averaging scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy.
Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine-learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding references for glucose concentration are measured in increments of 20 mg/dl. The data is randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.
Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network
Procedia PDF Downloads 94
2793 The Effects of Orientation on Energy and Plasticity of Metallic Crystalline-Amorphous Interface
Authors: Ehsan Alishahi, Chuang Deng
Abstract:
Commercial applications of bulk metallic glasses (BMGs) have been restricted by their sudden brittle failure mode, the main drawback of this new class of materials. Crystalline-amorphous (C-A) composites were therefore introduced as a toughening strategy for BMGs. In spite of numerous studies in the area of metallic C-A composites, the fundamental structure-property relations in these composites are not exactly known yet. This study investigates the fundamental properties of the crystalline-amorphous interface in a model Cu/CuZr system using molecular dynamics simulations. Several parameters, including the interface energy and mechanical properties, were investigated by means of atomic models employing an Embedded Atom Method (EAM) potential. It is found that the crystalline-amorphous interfacial energy depends only weakly on the orientation of the crystalline layer, in stark contrast to a regular crystalline grain boundary. Additionally, the results showed that the interface controls the yielding of the crystalline-amorphous composites during uniaxial tension, either by serving as a source for dislocation nucleation in the crystalline layer or by triggering local shear transformation zones in the amorphous layer. The critical resolved shear stress required to nucleate the first dislocation is also found to depend strongly on the crystalline orientation. Furthermore, it is found that the interaction between dislocations and shear localization at crystalline-amorphous interfaces oriented in different directions can lead to a change in the deformation mode. For instance, while dislocation slip and shear banding are aligned with each other for the {0 0 1} interface plane, the misorientation angle between these failure mechanisms causes more homogeneous deformation for the {1 1 0} and {1 1 1} crystalline-amorphous interfaces.
These results should help clarify the failure mechanism of crystalline-amorphous composites under various loading conditions.
Keywords: crystalline-amorphous, composites, orientation, plasticity
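The orientation dependence of the critical resolved shear stress has a simple continuum counterpart in Schmid's law, tau = sigma * cos(phi) * cos(lambda). The sketch below is standard crystallography, not the MD methodology itself; it shows how the stress resolved onto one FCC {111}<110> slip system changes with loading direction:

```python
import numpy as np

def schmid_factor(loading_dir, slip_plane_normal, slip_dir):
    """Schmid factor m = cos(phi) * cos(lambda): the fraction of an
    applied uniaxial stress resolved as shear on a slip system, so
    that tau = sigma * m."""
    load = np.asarray(loading_dir, float)
    n = np.asarray(slip_plane_normal, float)
    d = np.asarray(slip_dir, float)
    cos_phi = abs(load @ n) / (np.linalg.norm(load) * np.linalg.norm(n))
    cos_lam = abs(load @ d) / (np.linalg.norm(load) * np.linalg.norm(d))
    return cos_phi * cos_lam

# FCC Cu: (111)[10-1] slip system, loading along [001] versus [111]
m_001 = schmid_factor([0, 0, 1], [1, 1, 1], [1, 0, -1])
m_111 = schmid_factor([1, 1, 1], [1, 1, 1], [1, 0, -1])
print(round(m_001, 3), round(m_111, 3))   # this system is fully unloaded at [111]
```

For [001] loading the factor is about 0.408, while for [111] loading this particular slip direction lies perpendicular to the load and carries no resolved shear, illustrating why the stress to nucleate the first dislocation varies with crystalline orientation.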
Procedia PDF Downloads 293
2792 Political Communication in Twitter Interactions between Government, News Media and Citizens in Mexico
Authors: Jorge Cortés, Alejandra Martínez, Carlos Pérez, Anaid Simón
Abstract:
The presence of government, news media, and the general citizenry in social media allows interactions between them to be considered a form of political communication (i.e., the public exchange of contradictory discourses about politics). Twitter’s asymmetrical following model (users can follow, mention or reply to other users that do not follow them) could foster alternative democratic practices and have an impact on Mexican political culture, which has been marked by a lack of direct communication channels between these actors. The research aim is to assess Twitter’s role in political communication practices through the analysis of interaction dynamics between government, news media, and citizens by extracting and visualizing data from Twitter’s API to observe general behavior patterns. The hypothesis is that, regardless of the fact that Twitter’s features enable direct and horizontal interactions between actors, users repeat traditional dynamics of interaction without taking full advantage of the possibilities of this medium. Through an interdisciplinary team covering communication strategies, information design, and interaction systems, the activity on Twitter generated by the controversy over the presence of Uber in Mexico City was analysed; an issue of public interest, involving aspects such as public opinion, economic interests and a legal dimension. This research includes techniques from social network analysis (SNA), a methodological approach focused on understanding the relationships between actors through the visual representation and measurement of network characteristics. The analysis of the Uber event comprised data extraction, data categorization, corpus construction, corpus visualization and analysis. In the extraction stage, TAGS, a Google Sheets template, was used to extract tweets that included the hashtags #UberSeQueda and #UberSeVa, posts containing the string Uber, and tweets directed to @uber_mx.
Using scripts written in Python, the data was filtered, discarding tweets with no interaction (replies, retweets or mentions) and locations outside of Mexico. Considerations regarding bots and the omission of anecdotal posts were also taken into account. The utility of graphs for observing interactions of political communication in general was confirmed by the analysis of visualizations generated with programs such as Gephi and NodeXL. However, some aspects require improvement to obtain more useful visual representations for this type of research. For example, link crossings complicate following the direction of an interaction, forcing users to manipulate the graph to see it clearly. It was concluded that some practices prevalent in political communication in Mexico are replicated on Twitter. Media actors tend to group together instead of interacting with others. The political system tends to tweet as an advertising strategy rather than to generate dialogue. However, some actors were identified as bridges establishing communication between the three spheres, generating a more democratic exercise and taking advantage of Twitter’s possibilities. Although interactions on Twitter could become an alternative channel of political communication, this potential depends on the intentions of the participants and the extent to which they aim for collaborative and direct communication. Further research is needed to gain a deeper understanding of the political behavior of Twitter users and the possibilities of SNA for its analysis.
Keywords: interaction, political communication, social network analysis, Twitter
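The bridge-detection idea from SNA can be sketched with networkx: betweenness centrality scores how often an account sits on the shortest interaction paths between other accounts. The accounts and interactions below are hypothetical, not the actual Uber-controversy data:

```python
import networkx as nx

# Hypothetical mention/reply interactions among government, media and
# citizen accounts (directed, matching Twitter's asymmetric model)
interactions = [
    ("@citizen_a", "@uber_mx"), ("@citizen_b", "@news_1"),
    ("@news_1", "@gov_cdmx"), ("@gov_cdmx", "@news_1"),
    ("@citizen_a", "@news_1"), ("@news_2", "@news_1"),
    ("@citizen_b", "@gov_cdmx"),
]
g = nx.DiGraph(interactions)

# Betweenness centrality highlights 'bridge' accounts connecting
# the citizen, media and government spheres
betweenness = nx.betweenness_centrality(g)
bridge = max(betweenness, key=betweenness.get)
print(bridge, round(betweenness[bridge], 3))
```

In this toy network the media account is the only bridge: citizen and institutional accounts reach each other mainly through it, mirroring the grouping behaviour the study observed.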
Procedia PDF Downloads 221
2791 Probabilistic Life Cycle Assessment of the Nano Membrane Toilet
Authors: A. Anastasopoulou, A. Kolios, T. Somorin, A. Sowale, Y. Jiang, B. Fidalgo, A. Parker, L. Williams, M. Collins, E. J. McAdam, S. Tyrrel
Abstract:
Developing countries are nowadays confronted with great challenges related to domestic sanitation services in view of imminent water scarcity. Contemporary sanitation technologies established in these countries are likely to pose health risks unless waste management standards are followed properly. This paper offers a route to sustainable sanitation through the development of an innovative toilet system, the Nano Membrane Toilet (NMT), which has been developed by Cranfield University and sponsored by the Bill & Melinda Gates Foundation. The technology converts human faeces into energy through gasification and provides treated wastewater from urine through membrane filtration. In order to evaluate the environmental profile of the NMT system, a deterministic life cycle assessment (LCA) has been conducted in SimaPro software employing the Ecoinvent v3.3 database. This study has determined the factors contributing most to the environmental footprint of the NMT system. However, as the sensitivity analysis identified certain operating parameters as critical to the robustness of the LCA results, adopting a stochastic approach to the Life Cycle Inventory (LCI) will comprehensively capture the input data uncertainty and enhance the credibility of the LCA outcome. For that purpose, Monte Carlo simulations, in combination with an artificial neural network (ANN) model, have been conducted for the input parameters of raw material, produced electricity, NOx emissions, amount of ash and transportation of fertilizer. The analysis has provided the distributions and confidence intervals of the selected impact categories and, in turn, more credible conclusions are drawn on the respective LCIA (Life Cycle Impact Assessment) profile of the NMT system.
Last but not least, the specific study will also yield essential insights into the methodological framework that can be adopted in the environmental impact assessment of other complex engineering systems subject to a high level of input data uncertainty.
Keywords: sanitation systems, nano-membrane toilet, LCA, stochastic uncertainty analysis, Monte Carlo simulations, artificial neural network
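The stochastic step can be illustrated as plain Monte Carlo propagation: sample the uncertain inventory inputs, recompute the impact score for each draw, and read confidence intervals off the resulting distribution. All parameter values and characterisation factors below are illustrative placeholders, not figures from the NMT study:

```python
import numpy as np

rng = np.random.default_rng(42)
n_runs = 10_000

# Hypothetical distributions for the inputs the sensitivity analysis
# flagged as critical (all values illustrative only)
electricity_kwh = rng.normal(1.5, 0.15, n_runs)      # produced electricity
nox_g = rng.lognormal(np.log(2.0), 0.2, n_runs)      # NOx emissions
transport_tkm = rng.triangular(5, 10, 20, n_runs)    # fertilizer transport

# Illustrative characterisation factors (kg CO2-eq per unit); produced
# electricity enters as a credit, hence the negative coefficient
gwp = -0.5 * electricity_kwh + 0.298 * nox_g + 0.12 * transport_tkm

mean = gwp.mean()
lo, hi = np.percentile(gwp, [2.5, 97.5])   # 95% confidence interval
print(f"GWP: {mean:.2f} kg CO2-eq (95% CI {lo:.2f} to {hi:.2f})")
```

A real LCI would propagate such samples through the full impact model (here a single linear score stands in for it), which is also where the study's ANN surrogate comes in to keep the repeated evaluations cheap.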
Procedia PDF Downloads 225
2790 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea
Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim
Abstract:
Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. These blooms have been accelerated by eutrophication from human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict ocean algae concentrations with bio-optical algorithms applied to satellite color images. However, accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represent the marine environment around Korea. The method employed GOCI water-leaving radiances centered at 443 nm, 490 nm and 660 nm, as well as observed weather data (i.e., humidity, temperature and atmospheric pressure), as the database to capture the optical characteristics of algae and train the deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and an artificial neural network (ANN) was then used to estimate the concentration of algae from the extracted features. The deep learning model was trained with a backpropagation learning strategy. The established method was tested and compared against the GOCI data processing system (GDPS), which is based on standard image processing and optical algorithms. The model estimated algae concentration better than the GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration despite the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing.
Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
Keywords: deep learning, algae concentration, remote sensing, satellite
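The CNN-plus-ANN pipeline is too large to reproduce here, but the backpropagation training strategy it relies on can be shown on a toy regressor: three band-like inputs plus three weather-like inputs mapped to a synthetic 'concentration' target. All data are random stand-ins, not GOCI measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the inputs: 3 band radiances + 3 weather variables
X = rng.uniform(0, 1, (500, 6))
true_w = np.array([2.0, -1.0, 0.5, 0.3, -0.2, 0.1])
y = X @ true_w + 0.05 * rng.normal(size=500)   # synthetic target

# One hidden layer trained by plain backpropagation (full-batch GD)
W1 = rng.normal(0, 0.5, (6, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                   # forward pass
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    # backward pass: chain rule through the tanh hidden layer
    grad_W2 = h.T @ err[:, None] / len(y)
    grad_h = (err[:, None] @ W2.T) * (1 - h ** 2)
    grad_W1 = X.T @ grad_h / len(y)
    W2 -= lr * grad_W2; b2 -= lr * err.mean()
    W1 -= lr * grad_W1; b1 -= lr * grad_h.mean(axis=0)

mse = float(np.mean(err ** 2))
print(mse)   # training error, far below the initial ~1
```

The study's model replaces this dense front end with convolutional feature extraction over the GOCI imagery, but the weight-update mechanics are the same.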
Procedia PDF Downloads 183
2789 Orthogonal Basis Extreme Learning Algorithm and Function Approximation
Abstract:
A new algorithm for single hidden layer feedforward neural networks (SLFNs), the Orthogonal Basis Extreme Learning (OBEL) algorithm, is proposed, and its derivation is given in the paper. The algorithm can determine both the network parameters and the number of hidden-layer neurons during training while providing extremely fast learning speed. It provides a practical way to develop neural networks. Simulation results on function approximation showed that the algorithm is effective and feasible, with good accuracy and adaptability.
Keywords: neural network, orthogonal basis extreme learning, function approximation
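The extreme-learning idea can be sketched in a few lines: the input weights are random and fixed, and only the output weights are solved, here through a QR factorisation that supplies an orthogonal basis for the hidden-layer outputs. This is a generic sketch of the ELM family, not the OBEL derivation itself (which also selects the hidden neuron count during training):

```python
import numpy as np

def elm_fit(X, y, n_hidden=30, seed=0):
    """Single-hidden-layer ELM: random input weights, output weights
    solved in one shot by least squares via a QR (orthogonal basis)
    factorisation of the hidden-layer output matrix."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)               # hidden-layer outputs
    Q, R = np.linalg.qr(H)               # H = Q R, Q has orthonormal columns
    beta = np.linalg.solve(R, Q.T @ y)   # exact least-squares output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Function approximation: learn sin(x) on [0, 2*pi], no iterative training
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
model = elm_fit(X, y)
max_err = float(np.max(np.abs(elm_predict(model, X) - y)))
print(max_err)   # small training error despite the one-shot fit
```

The speed claim in the abstract follows from this structure: training reduces to a single linear solve instead of iterative gradient descent.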
Procedia PDF Downloads 534
2788 Automatic Detection of Sugarcane Diseases: A Computer Vision-Based Approach
Authors: Himanshu Sharma, Karthik Kumar, Harish Kumar
Abstract:
The major problem in crop cultivation is the occurrence of multiple crop diseases. During the growth stage, timely identification of crop diseases is paramount to ensuring high crop yields, lower production costs, and minimal pesticide usage. In most cases, crop diseases produce observable characteristics and symptoms. Surveyors usually diagnose crop diseases as they walk through the fields. However, surveyor inspections tend to be biased and error-prone due to the monotonous nature of the task and the subjectivity of individuals. In addition, visual inspection of each leaf or plant is costly, time-consuming, and labour-intensive. Furthermore, the plant pathologists and experts who can often identify a disease from its symptoms at an early stage are not readily available in remote regions. Therefore, this study specifically addresses early detection of the leaf scald, red rot, and eyespot diseases in sugarcane plants. The study proposes a computer vision-based approach using a convolutional neural network (CNN) for automatic identification of crop diseases. To facilitate this, images of sugarcane diseases were first taken from Google, without modifying the scene or background or controlling the illumination, to build the training dataset. The testing dataset was then developed from real-time images collected in sugarcane fields in India. The image dataset was pre-processed for feature extraction and selection. Finally, the CNN-based Visual Geometry Group (VGG) model was deployed on the training and testing datasets to classify the images into diseased and healthy sugarcane plants, and the model's performance was measured using various parameters, i.e., accuracy, sensitivity, specificity, and F1-score. The promising results of the proposed model lay the groundwork for automatic early detection of sugarcane disease.
The proposed research directly supports an increase in crop yield.
Keywords: automatic classification, computer vision, convolutional neural network, image processing, sugarcane disease, visual geometry group
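The four evaluation parameters named above all derive from the counts of true/false positives and negatives. A minimal sketch, with illustrative labels rather than the study's data:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity and F1 for a binary
    diseased(1)/healthy(0) classification, computed from counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)          # recall on diseased plants
    specificity = tn / (tn + fp)          # recall on healthy plants
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

# 1 = diseased, 0 = healthy (illustrative predictions, not VGG output)
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
acc, sens, spec, f1 = binary_metrics(y_true, y_pred)
print(acc, sens, spec, round(f1, 3))
```

Reporting sensitivity and specificity separately matters here because missing a diseased plant (a false negative) is usually costlier than flagging a healthy one.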
Procedia PDF Downloads 116