Search results for: edge cloud
1011 Educational Knowledge Transfer in Indigenous Mexican Areas Using Cloud Computing
Authors: L. R. Valencia Pérez, J. M. Peña Aguilar, A. Lamadrid Álvarez, A. Pastrana Palma, H. F. Valencia Pérez, M. Vivanco Vargas
Abstract:
This work proposes a cooperative-competitive ("coopetitive") approach that allows coordinated work among the Secretary of Public Education (SEP), the Autonomous University of Querétaro (UAQ), and government funds from the National Council for Science and Technology (CONACYT) or other international organizations. The goal is an overall knowledge-transfer strategy based on e-learning over the cloud, in which experts in junior high and high school education, working in multidisciplinary teams, perform analysis, evaluation, design, production, validation, and knowledge transfer at large scale using a cloud computing platform. This allows teachers and students to access all the information required to ensure nationally standardized knowledge of topics such as mathematics, statistics, chemistry, history, ethics, and civics. The work will start with a pilot test in Spanish and, initially, in two indigenous languages, Otomí and Náhuatl. Otomí has more than 285,000 speakers in Querétaro and Mexico's central region; Náhuatl is the most widely spoken indigenous language in Mexico, with more than 1,550,000 speakers. Phase one of the project takes into account negotiations with indigenous tribes from different regions and the information and communication technologies needed to deliver the knowledge to indigenous schools in their native language. The methodology includes the following main milestones: identification of the indigenous areas where Otomí and Náhuatl are spoken; research with the SEP on the location of existing indigenous schools; analysis and inventory of current school conditions; negotiation with tribe chiefs; analysis of the technological communication requirements to reach the indigenous communities; identification and inventory of local teachers' technology knowledge; selection of a pilot topic; analysis of current student competence under the traditional education system; identification of local translators; design of the e-learning platform; design of the multimedia resources and the cloud computing storage strategy; translation of the topic into both languages; training of indigenous teachers; pilot test; course release; project follow-up; analysis of student requirements for the new technological platform; and definition of a new and improved proposal with greater reach in topics and regions. The importance of phase one is manifold: it includes the proposal of a working technological scheme and focuses on the cultural impact in Mexico, so that indigenous tribes can improve their knowledge of new forms of crop improvement, home storage technologies, proven home remedies for common diseases, and ways of preparing foods containing major nutrients; disclose the strengths and weaknesses of each region; offer regional products through cloud computing platforms; and open communication spaces for inter-indigenous cultural exchange.
Keywords: Mexican indigenous tribes, education, knowledge transfer, cloud computing, Otomí, Náhuatl, language
Procedia PDF Downloads 404
1010 Deep-Learning to Generation of Weights for Image Captioning Using Part-of-Speech Approach
Authors: Tiago do Carmo Nogueira, Cássio Dener Noronha Vinhal, Gélson da Cruz Júnior, Matheus Rudolfo Diedrich Ullmann
Abstract:
Generating automatic image descriptions in natural language is a challenging task. Image captioning describes an image consistently by combining computer vision and natural language processing techniques. To accomplish this task, cutting-edge models use encoder-decoder structures: Convolutional Neural Networks (CNN) extract the characteristics of the images, and Recurrent Neural Networks (RNN) generate the descriptive sentences. However, cutting-edge approaches still suffer from generating incorrect captions and from accumulating errors in the decoders. To solve this problem, we propose a model based on the encoder-decoder structure, introducing a module that generates weights according to the importance of each word in forming the sentence, using the part-of-speech (PoS). The results demonstrate that our model surpasses state-of-the-art models.
Keywords: gated recurrent units, caption generation, convolutional neural network, part-of-speech
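The abstract does not include implementation details, but the core idea of PoS-derived word weights can be illustrated. Below is a minimal sketch, not the authors' code: it tags caption tokens with NLTK and maps each tag to a weight that a weighted cross-entropy loss could consume; the tag-to-weight mapping is an assumption.

```python
# Minimal sketch (not the authors' code): derive per-word training weights
# from part-of-speech tags; a weighted cross-entropy could then use them.
import nltk

nltk.download("averaged_perceptron_tagger", quiet=True)
nltk.download("punkt", quiet=True)

# Assumed weighting: content words (nouns, verbs, adjectives) matter more
# for a correct caption than function words.
POS_WEIGHTS = {"NN": 1.0, "NNS": 1.0, "VB": 0.9, "VBZ": 0.9, "VBG": 0.9,
               "JJ": 0.8, "DT": 0.3, "IN": 0.3, "CC": 0.3}

def caption_weights(caption: str) -> list:
    """Tag each caption token and map its PoS tag to a loss weight."""
    tokens = nltk.word_tokenize(caption)
    tagged = nltk.pos_tag(tokens)
    return [(tok, POS_WEIGHTS.get(tag, 0.5)) for tok, tag in tagged]

print(caption_weights("a brown dog jumps over the wooden fence"))
# e.g. [('a', 0.3), ('brown', 0.8), ('dog', 1.0), ('jumps', 0.9), ...]
```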
Procedia PDF Downloads 102
1009 An Efficient Encryption Scheme Using DWT and Arnold Transforms
Authors: Ali Abdrhman M. Ukasha
Abstract:
Data security is needed in data transmission, storage, and communication. In the proposed scheme, the color image is decomposed into red, green, and blue channels. The blue and green channels are compressed using a 3-level discrete wavelet transform. The Arnold transform is used to change the locations of the red-channel pixels as an image scrambling step. All channels are then encrypted separately using a key image that has the same size as the original and is generated using private keys and modulo operations. X-OR and modulo operations are performed between the encrypted channel images to further change the pixel values. The extracted contours of the recovered color image can be obtained with an accepted level of distortion using the Canny edge detector. Experiments have demonstrated that the proposed algorithm can fully encrypt a 2D color image, which can then be completely reconstructed without any distortion. It has been shown that the color image can be protected with a higher security level. The presented method has an easy hardware implementation and is suitable for multimedia protection in real-time applications such as wireless networks and mobile phone services.
Keywords: color image, wavelet transform, edge detector, Arnold transform, lossy image encryption
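As an illustration of the scrambling step, here is a minimal sketch of the Arnold transform on one square channel; the iteration count standing in for a private key is an assumption, and this is not the authors' implementation.

```python
# Minimal sketch (assumed parameters): Arnold cat map scrambling of one
# square image channel, plus its inverse for recovery.
import numpy as np

def arnold_scramble(channel: np.ndarray, iterations: int = 10) -> np.ndarray:
    """Apply the Arnold transform (x, y) -> (x + y, x + 2y) mod N repeatedly."""
    n = channel.shape[0]
    assert channel.shape[0] == channel.shape[1], "Arnold map needs a square image"
    out = channel.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def arnold_unscramble(channel: np.ndarray, iterations: int = 10) -> np.ndarray:
    """Invert by the inverse map (x, y) -> (2x - y, y - x) mod N."""
    n = channel.shape[0]
    out = channel.copy()
    for _ in range(iterations):
        restored = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                restored[(2 * x - y) % n, (y - x) % n] = out[x, y]
        out = restored
    return out

red = (np.arange(64 * 64) % 256).astype(np.uint8).reshape(64, 64)
assert np.array_equal(arnold_unscramble(arnold_scramble(red)), red)
```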
Procedia PDF Downloads 482
1008 Cyber Security Enhancement via Software Defined Pseudo-Random Private IP Address Hopping
Authors: Andre Slonopas, Zona Kostic, Warren Thompson
Abstract:
Obfuscation is one of the most useful tools to prevent network compromise. Previous research focused on the obfuscation of network communications between external-facing edge devices. This work proposes the use of two edge devices, external- and internal-facing, which communicate via private IPv4 addresses in a software-defined pseudo-random IP hopping scheme. This methodology does not require additional IP addresses and/or resources to implement. Statistical analyses demonstrate that the hopping surface must be at least 1e3 IP addresses in size, with a broad standard deviation, to minimize the possibility of coincidence between monitored and communication IPs. Breaking the hopping algorithm requires a collection of at least 1e6 samples, which for large hopping surfaces would take years to collect. The probability of dropped packets is controlled via memory buffers and the frequency of hops and can be reduced to levels acceptable for video streaming. This methodology provides an impenetrable layer of security ideal for information systems and supervisory control and data acquisition systems.
Keywords: moving target defense, cybersecurity, network security, hopping randomization, software defined network, network security theory
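A minimal sketch of how two peers could derive a synchronized pseudo-random hop sequence is shown below; the subnet, hop interval, and HMAC construction are assumptions, not the authors' design.

```python
# Minimal sketch (assumptions throughout): both edge devices derive the same
# pseudo-random private IPv4 hop sequence from a shared secret and a
# synchronized time slot, so no extra addresses or coordination traffic are needed.
import hashlib
import hmac
import ipaddress
import time

HOP_INTERVAL_S = 2                                  # assumed hop frequency
SUBNET = ipaddress.ip_network("10.77.0.0/16")       # assumed private hopping surface

def current_hop_address(shared_secret: bytes, slot=None) -> str:
    """Map the current time slot to one address in the hopping surface."""
    if slot is None:
        slot = int(time.time()) // HOP_INTERVAL_S
    digest = hmac.new(shared_secret, slot.to_bytes(8, "big"), hashlib.sha256).digest()
    index = int.from_bytes(digest[:8], "big") % SUBNET.num_addresses
    return str(SUBNET[index])

secret = b"pre-shared-key-between-edge-devices"
print(current_hop_address(secret))  # both peers compute the identical address
```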
Procedia PDF Downloads 185
1007 Edge Enhancement Visual Methodology for Fat Amount and Distribution Assessment in Dry-Cured Ham Slices
Authors: Silvia Grassi, Stefano Schiavon, Ernestina Casiraghi, Cristina Alamprese
Abstract:
Dry-cured ham is an uncooked meat product particularly appreciated for its peculiar sensory traits, among which the lipid component plays a key role in defining quality and, consequently, consumers' acceptability. Usually, fat content and distribution are chemically determined by expensive, time-consuming, and destructive analyses. Moreover, different sensory techniques are applied to assess product conformity to desired standards. In this context, visual systems are getting a foothold in the meat market, promising more reliable and time-saving assessment of food quality traits. The present work aims at developing a simple but systematic and objective visual methodology to assess the fat amount in dry-cured ham slices, in terms of total, intermuscular, and intramuscular fractions. To this aim, 160 slices from 80 PDO dry-cured hams were evaluated by digital image analysis and Soxhlet extraction. RGB images were captured by a flatbed scanner, converted into grey-scale images, and segmented based on intensity histograms as well as on a multi-stage algorithm aimed at edge enhancement. The latter was performed by applying the Canny algorithm, which consists of image noise reduction, calculation of the intensity gradient for each image, spurious response removal, thresholding on the corrected images, and confirmation of strong edge boundaries. The approach allowed for the automatic calculation of total, intermuscular, and intramuscular fat fractions as percentages of the total slice area. Linear regression models were run to estimate the relationships between the image analysis results and the chemical data, thus allowing for the prediction of the total, intermuscular, and intramuscular fat content from the dry-cured ham images. The goodness of fit of the obtained models was confirmed in terms of the coefficient of determination (R²), hypothesis testing, and the pattern of residuals. Good regression models were found, with R² values of 0.73, 0.82, and 0.73 for the total fat, the sum of intermuscular and intramuscular fat, and the intermuscular fraction, respectively. In conclusion, the edge enhancement visual procedure led to a good fat segmentation, making this approach to the quantification of the different fat fractions in dry-cured ham slices simple, accurate, and precise. The presented image analysis approach steers towards the development of instruments that can replace destructive, tedious, and time-consuming chemical determinations. As a future perspective, the results of the proposed image analysis methodology will be compared with those of sensory tests in order to develop a fast grading method for dry-cured hams based on fat distribution. The system will thus be able not only to predict the actual fat content but also to reflect the visual appearance of samples as perceived by consumers.
Keywords: dry-cured ham, edge detection algorithm, fat content, image analysis
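For illustration, the following sketch combines Canny edge enhancement with Otsu thresholding to produce a fat-area percentage from a slice scan; the thresholds, kernel, and mask combination are illustrative assumptions rather than the authors' tuned pipeline.

```python
# Minimal sketch (assumed thresholds): grey-scale conversion, Canny edge
# enhancement, and a crude fat-area percentage from a ham-slice scan.
import cv2
import numpy as np

def fat_percentage(image_path: str) -> float:
    bgr = cv2.imread(image_path)
    grey = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    grey = cv2.GaussianBlur(grey, (5, 5), 0)          # noise reduction
    edges = cv2.Canny(grey, 50, 150)                  # intensity-gradient edges
    # Close edge gaps so region boundaries become connected (assumed kernel).
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    # Fat appears bright in the scan; combine an Otsu intensity mask with the
    # edge map (illustrative combination, not the paper's exact rule).
    _, bright = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    fat_mask = cv2.bitwise_and(bright, cv2.bitwise_not(closed))
    slice_area = np.count_nonzero(grey > 10)          # crude slice/background split
    return 100.0 * np.count_nonzero(fat_mask) / max(slice_area, 1)

print(f"total fat: {fat_percentage('ham_slice.png'):.1f} % of slice area")
```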
Procedia PDF Downloads 176
1006 On Cloud Computing: A Review of the Features
Authors: Assem Abdel Hamed Mousa
Abstract:
The Internet of Things probably already influences your life, and if it doesn't, it soon will, say computer scientists. Ubiquitous computing names the third wave in computing, just now beginning. First were mainframes, each shared by many people. Now we are in the personal computing era, person and machine staring uneasily at each other across the desktop. Next comes ubiquitous computing, or the age of calm technology, when technology recedes into the background of our lives. Alan Kay of Apple calls this "Third Paradigm" computing. Ubiquitous computing is essentially the term for human interaction with computers embedded in virtually everything. Ubiquitous computing is roughly the opposite of virtual reality: where virtual reality puts people inside a computer-generated world, ubiquitous computing forces the computer to live out here in the world with people. Virtual reality is primarily a horsepower problem; ubiquitous computing is a very difficult integration of human factors, computer science, engineering, and social sciences. The approach: activate the world. Provide hundreds of wireless computing devices per person per office, at all scales (from 1" displays to wall-sized). This has required new work in operating systems, user interfaces, networks, wireless communication, displays, and many other areas. We call our work "ubiquitous computing". This is different from PDAs, dynabooks, or information at your fingertips. It is invisible, everywhere computing that does not live on a personal device of any sort but is in the woodwork everywhere. The initial incarnation of ubiquitous computing was in the form of "tabs", "pads", and "boards" built at Xerox PARC in 1988-1994. Several papers describe this work, and there are web pages for the tabs and for the boards (which are now a commercial product). Ubiquitous computing will drastically reduce the cost of digital devices and tasks for the average consumer. With labor-intensive components such as processors and hard drives stored in the remote data centers powering the cloud, and with pooled resources giving individual consumers the benefits of economies of scale, consumers may pay monthly fees, similar to a cable bill, for services that feed into their phones.
Keywords: internet, cloud computing, ubiquitous computing, big data
Procedia PDF Downloads 382
1005 Internet of Things Based Patient Health Monitoring System
Authors: G. Yoga Sairam Teja, K. Harsha Vardhan, A. Vinay Kumar, K. Nithish Kumar, Ch. Shanthi Priyag
Abstract:
The emergence of the Internet of Things (IoT) has facilitated better device control and monitoring in the modern world. The constant monitoring of a patient would be drastically altered by the usage of IoT in healthcare. As we have seen in the case of the COVID-19 pandemic, it is important to avoid physical contact while continuously checking a patient's heart rate and temperature. Additionally, patients with paralysis should be closely watched, especially if they are elderly and in need of special care. Our "IoT Based Patient Health Monitoring System" project uses IoT to track patient health conditions in an effort to address these issues. In this project, the main board is an 8051 microcontroller that connects a number of sensors, including a heart rate sensor, a temperature sensor (LM-35), and a saline water measuring circuit. These sensors are connected via an ESP8266 (Wi-Fi) module, which enables the recorded data to be sent directly to the cloud so that the patient's health status can be regularly monitored. An LCD is used to monitor the data in offline mode, and a buzzer will sound if any deviation from the regular readings occurs. The data in the cloud may be viewed as a graph, making it simple for a user to spot any unusual conditions.
Keywords: IoT, ESP8266, 8051 microcontrollers, sensors
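A gateway-side sketch of the cloud-upload and alert logic is given below; the endpoint URL, normal ranges, and reporting interval are illustrative assumptions, not values from the paper.

```python
# Minimal gateway-side sketch (endpoint URL and thresholds are assumptions):
# forward sensor readings to the cloud and flag any deviation from normal
# ranges, mirroring the on-device buzzer logic described in the abstract.
import time
import requests

CLOUD_ENDPOINT = "https://example-health-cloud.invalid/api/readings"  # assumed
NORMAL_RANGES = {"heart_rate_bpm": (60, 100), "temperature_c": (36.1, 37.5)}

def read_sensors() -> dict:
    """Placeholder for data arriving from the 8051 board via the ESP8266."""
    return {"heart_rate_bpm": 72, "temperature_c": 36.8, "saline_level_pct": 64}

while True:
    reading = read_sensors()
    alerts = [name for name, (lo, hi) in NORMAL_RANGES.items()
              if not lo <= reading[name] <= hi]
    payload = {"timestamp": time.time(), "reading": reading, "alerts": alerts}
    requests.post(CLOUD_ENDPOINT, json=payload, timeout=5)
    if alerts:
        print("ALERT (buzzer would sound):", alerts)
    time.sleep(30)  # assumed reporting interval
```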
Procedia PDF Downloads 87
1004 Advancing in Cricket Analytics: Novel Approaches for Pitch and Ball Detection Employing OpenCV and YOLOV8
Authors: Pratham Madnur, Prathamkumar Shetty, Sneha Varur, Gouri Parashetti
Abstract:
In order to overcome conventional obstacles, this research paper investigates novel approaches for cricket pitch and ball detection that make use of cutting-edge technologies. The research integrates OpenCV for pitch inspection and modifies the YOLOv8 model for cricket ball detection in order to overcome the shortcomings of manual pitch assessment and traditional ball detection techniques. To ensure flexibility across a range of pitch environments, the pitch detection method leverages OpenCV's color space transformation, contour extraction, and accurate color-range definition features. Regarding ball detection, the YOLOv8 model emphasizes the preservation of small object details to improve accuracy and is specifically trained on the unique properties of cricket balls. The methods are more reliable because of the careful preparation of the datasets, which include novel ball and pitch information. These cutting-edge methods not only improve cricket analytics but also set the stage for adaptable methods in broader sports technology applications.
Keywords: OpenCV, YOLOv8, cricket, custom dataset, computer vision, sports
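The color-range pitch detection step could look like the following minimal OpenCV sketch; the HSV bounds are illustrative assumptions, not the paper's tuned values.

```python
# Minimal sketch (assumed HSV range): isolate the pitch by color-space
# transformation and keep the largest contour as the pitch region.
import cv2
import numpy as np

frame = cv2.imread("broadcast_frame.jpg")           # assumed input frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Dry, light-brown pitch strip against a green outfield (illustrative range).
lower, upper = np.array([10, 40, 120]), np.array([30, 160, 255])
mask = cv2.inRange(hsv, lower, upper)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((7, 7), np.uint8))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    pitch = max(contours, key=cv2.contourArea)      # pitch = largest blob
    x, y, w, h = cv2.boundingRect(pitch)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imwrite("pitch_detected.jpg", frame)
```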
Procedia PDF Downloads 79
1003 Flow Separation Control on an Aerofoil Using Grooves
Authors: Neel K. Shah
Abstract:
Wind tunnel tests have been performed at The University of Manchester to investigate the impact of surface grooves of a trapezoidal planform on flow separation on a symmetrical aerofoil. A spanwise array of the grooves has been applied around the maximum-thickness location of the upper surface of an NACA-0015 aerofoil. The aerofoil has been tested in a two-dimensional set-up in a low-speed wind tunnel at an angle of attack (AoA) of 3° and a chord-based Reynolds number (Re) of ~2.7 × 10⁵. A laminar separation bubble developed on the aerofoil at low AoA. It has been found that the grooves shorten the streamwise extent of the separation bubble by shedding a pair of counter-rotating vortices. However, the increase in leading-edge suction due to the shorter bubble is not significant, since the creation of the grooves results in a decrease in surface curvature and an increase in blockage (an increase in surface pressure). Additionally, the increased flow mixing caused by the grooves thickens the boundary layer near the trailing edge of the aerofoil, which also contributes to this limitation. As a result of these competing effects, the improvements in the pressure-lift and pressure-drag coefficients are small, i.e., ~1.30% and ~0.30%, respectively, at 3° AoA. Cross-wire anemometry shows that the grooves increase turbulence intensity and Reynolds stresses in the wake, thus indicating an increase in viscous drag.
Keywords: aerofoil flow control, flow separation, grooves, vortices
Procedia PDF Downloads 315
1002 3D Human Reconstruction over Cloud Based Image Data via AI and Machine Learning
Authors: Kaushik Sathupadi, Sandesh Achar
Abstract:
Human action recognition modeling is a critical task in machine learning. These systems require better techniques for recognizing body parts and selecting optimal features based on vision sensors in order to identify complex action patterns efficiently. Still, there are considerable gaps and challenges between images and videos, such as brightness, motion variation, and random clutter. This paper proposes a robust approach for classifying human actions over cloud-based image data. First, we apply pre-processing along with human and outer-shape detection techniques. Next, we extract valuable information in terms of cues. We extract two distinct features: fuzzy local binary patterns and sequence representation. Then, we apply a greedy randomized adaptive search procedure (GRASP) for data optimization and dimension reduction and use a random forest for classification. We tested our model on two benchmark datasets, AAMAZ and the KTH multi-view football dataset. Our HMR framework significantly outperforms the other state-of-the-art approaches and achieves better recognition rates of 91% and 89.6% over the AAMAZ and KTH multi-view football datasets, respectively.
Keywords: computer vision, human motion analysis, random forest, machine learning
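The feature-then-classifier structure described above can be sketched as follows, with local-binary-pattern histograms feeding a random forest; the data are synthetic stand-ins, not the AAMAZ or KTH datasets.

```python
# Minimal sketch (synthetic stand-in data): local-binary-pattern histograms
# as frame features, classified with a random forest.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def lbp_histogram(frame: np.ndarray, points: int = 8, radius: float = 1.0):
    """Histogram of uniform LBP codes as a texture feature vector."""
    lbp = local_binary_pattern(frame, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

# Synthetic "frames" for two action classes (stand-ins for real video frames).
frames = rng.integers(0, 256, size=(200, 64, 64)).astype(np.uint8)
labels = rng.integers(0, 2, size=200)

features = np.array([lbp_histogram(f) for f in frames])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(features[:150], labels[:150])
print("held-out accuracy:", clf.score(features[150:], labels[150:]))
```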
Procedia PDF Downloads 36
1001 Cloud-Based Multiresolution Geodata Cube for Efficient Raster Data Visualization and Analysis
Authors: Lassi Lehto, Jaakko Kahkonen, Juha Oksanen, Tapani Sarjakoski
Abstract:
The use of raster-formatted data sets in geospatial analysis is increasing rapidly. At the same time, geographic data are being introduced into disciplines outside the traditional domain of geoinformatics, like climate change, intelligent transport, and immigration studies. These developments call for better methods to deliver raster geodata in an efficient and easy-to-use manner. Data cube technologies have traditionally been used in the geospatial domain for managing Earth Observation data sets, which have strict requirements for effective handling of time series. The same approach and methodologies can also be applied in managing other types of geospatial data sets. A cloud service-based geodata cube, called GeoCubes Finland, has been developed to support online delivery and analysis of the most important geospatial data sets with national coverage. The main target group of the service is the academic research institutes of the country. The most significant aspects of the GeoCubes data repository include the use of multiple resolution levels, a cloud-optimized file structure, and a customized, flexible content access API. Input data sets are pre-processed while being ingested into the repository to bring them into a harmonized form in aspects like georeferencing, sampling resolutions, spatial subdivision, and value encoding. All resolution levels are created using an appropriate generalization method, selected depending on the nature of the source data set. Multiple pre-processed resolutions enable new kinds of online analysis approaches. Analysis processes based on interactive visual exploration can be carried out effectively, as the resolution level closest to the visual scale can always be used. In the same way, statistical analysis can be carried out on the resolution levels that best reflect the scale of the phenomenon being studied. Access times remain close to constant, independent of the scale applied in the application. The cloud service-based approach applied in the GeoCubes Finland repository enables analysis operations to be performed on the server platform, thus making high-performance computing facilities easily accessible. The developed GeoCubes API supports this kind of approach for online analysis. The use of cloud-optimized file structures in data storage enables the fast extraction of subareas. The access API allows for the use of vector-formatted administrative areas and user-defined polygons as definitions of subareas for data retrieval. Administrative areas of the country at four levels are readily available from the GeoCubes platform. In addition to direct delivery of raster data, the service also supports a so-called virtual file format, in which only a small text file is first downloaded. The text file contains links to the raster content on the service platform. The actual raster data are downloaded on demand, from the spatial area and resolution level required at each stage of the application. Through the geodata cube approach, pre-harmonized geospatial data sets are made accessible to new categories of inexperienced users in an easy-to-use manner. At the same time, the multiresolution nature of the GeoCubes repository enables expert users to introduce new kinds of interactive online analysis operations.
Keywords: cloud service, geodata cube, multiresolution, raster geodata
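The payoff of the cloud-optimized file structure can be illustrated with a windowed, overview-level read of a cloud-optimized GeoTIFF; the URL and coordinates below are assumptions, and the snippet uses rasterio rather than the GeoCubes API itself.

```python
# Minimal sketch (assumed file and coordinates): read only a sub-area of a
# cloud-optimized GeoTIFF at a coarse overview level, which is what makes
# constant-time visual exploration feasible.
import rasterio
from rasterio.windows import from_bounds

COG_URL = "https://example-geocubes.invalid/elevation_cog.tif"  # assumed

with rasterio.open(COG_URL) as src:
    # The pre-built overviews are the stored resolution levels.
    print("available decimation factors:", src.overviews(1))  # e.g. [2, 4, 8, ...]

    # Window = a user-defined polygon's bounding box, in the data set's CRS.
    window = from_bounds(380000, 6670000, 400000, 6690000, src.transform)

    # out_shape decimates the window; with a COG, only the matching overview
    # blocks are fetched via HTTP range requests.
    data = src.read(1, window=window, out_shape=(256, 256))
    print(data.shape, data.dtype)
```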
Procedia PDF Downloads 135
1000 Integrating Building Information Modeling into Facilities Management Operations
Authors: Mojtaba Valinejadshoubi, Azin Shakibabarough, Ashutosh Bagchi
Abstract:
Facilities such as residential buildings, office buildings, and hospitals house a high density of occupants. Therefore, a low-cost facility management program (FMP) should be used to provide a satisfactory built environment for these occupants. Facility management (FM) has recently been treated as a critical task in building projects and has been effective in reducing the operation and maintenance costs of these facilities. Issues of information integration and visualization capabilities are critical for reducing the complexity and cost of FM. Building information modeling (BIM) can be used as a strong visual modeling tool and database in FM. The main objective of this study is to examine the applicability of BIM in the FM process during a building's operational phase. For this purpose, a seven-storey office building was modeled in Autodesk Revit software. The authors integrated a cloud-based environment using a visual programming tool, Dynamo, for the purpose of real-time cloud-based communication between the facility managers and the participants involved in the project. An appropriate and effective integrated data source and visual model such as BIM can reduce a building's operational and maintenance costs by managing the building life cycle properly.
Keywords: building information modeling, facility management, operational phase, building life cycle
Procedia PDF Downloads 152
999 Creating Energy Sustainability in an Enterprise
Authors: John Lamb, Robert Epstein, Vasundhara L. Bhupathi, Sanjeev Kumar Marimekala
Abstract:
As we enter the new era of Artificial Intelligence (AI) and cloud computing, we rely on the machine learning and natural language processing capabilities of AI and on energy-efficient hardware and software devices in almost every industry sector. In these sectors, much emphasis is placed on developing new and innovative methods for producing and conserving energy and slowing the depletion of natural resources. The core pillars of sustainability are economic, environmental, and social, informally referred to as the 3 P's (People, Planet, and Profits). The 3 P's play a vital role in creating a core sustainability model in the enterprise. Natural resources are continually being depleted, so there is a growing focus on, and demand for, renewable energy. With this growing demand, there is also growing concern in many industries about how to reduce carbon emissions and conserve natural resources while adopting sustainability in corporate business models and policies. In our paper, we discuss the driving forces, such as climate change, natural disasters, pandemics, disruptive technologies, corporate policies, scaled business models, and emerging social media and AI platforms, that influence the three main pillars of sustainability (the 3 P's). Through this paper, we bring an overall perspective on enterprise strategies, with a primary focus on the cultural shifts involved in adopting energy-efficient operational models. Overall, many industries across the globe are incorporating core sustainability principles such as reducing energy costs, reducing greenhouse gas (GHG) emissions, reducing waste and increasing recycling, adopting advanced monitoring and metering infrastructure, and reducing server footprint and compute resources (shared IT services, cloud computing, and application modernization), with the vision of a sustainable environment.
Keywords: climate change, pandemic, disruptive technology, government policies, business model, machine learning and natural language processing, AI, social media platform, cloud computing, advanced monitoring, metering infrastructure
Procedia PDF Downloads 111
998 Modeling Bessel Beams and Their Discrete Superpositions from the Generalized Lorenz-Mie Theory to Calculate Optical Forces over Spherical Dielectric Particles
Authors: Leonardo A. Ambrosio, Carlos. H. Silva Santos, Ivan E. L. Rodrigues, Ayumi K. de Campos, Leandro A. Machado
Abstract:
In this work, we propose an algorithm developed in the Python language for the modeling of ordinary scalar Bessel beams and their discrete superpositions, with the subsequent calculation of the optical forces exerted over dielectric spherical particles. The mathematical formalism, based on the generalized Lorenz-Mie theory, is implemented in Python because of its large number of free mathematical (such as SciPy and NumPy), data visualization (Matplotlib and PyJamas), and multiprocessing libraries. We also propose an approach, provided by synchronized Software as a Service (SaaS) in cloud computing, to develop a user interface embedded in a mobile application, thus providing users with the necessary means to easily introduce the desired unknowns and parameters and to see the graphical outcomes of the simulations right on their mobile devices. Initially proposed as a free Android-based application, such an app enables data post-processing in cloud-based architectures and the visualization of results, figures, and numerical tables.
Keywords: Bessel beams and frozen waves, generalized Lorenz-Mie theory, numerical methods, optical forces
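As a first building block of this kind of modeling, the sketch below computes the transverse intensity profile of an ordinary zero-order scalar Bessel beam with SciPy; the wavelength and axicon angle are assumed values, not parameters from the paper.

```python
# Minimal sketch (scalar, zero-order case only): the transverse intensity of
# an ordinary Bessel beam, |J0(k_rho * rho)|^2, computed with SciPy.
import numpy as np
from scipy.special import jv

wavelength = 632.8e-9                 # assumed He-Ne wavelength, metres
axicon_angle = np.deg2rad(0.5)        # assumed axicon (cone) angle
k = 2 * np.pi / wavelength
k_rho = k * np.sin(axicon_angle)      # transverse wavenumber

rho = np.linspace(0, 200e-6, 2000)    # radial distance from the beam axis
intensity = jv(0, k_rho * rho) ** 2   # J0^2 radial intensity profile

central_spot = 2.405 / k_rho          # first zero of J0 -> core radius
print(f"non-diffracting core radius ≈ {central_spot * 1e6:.1f} µm")
```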
Procedia PDF Downloads 380
997 Synthesis of TiO₂/Graphene Nanocomposites with Excellent Visible-Light Photocatalytic Activity Based on Chemical Exfoliation Method
Authors: Nhan N. T. Ton, Anh T. N. Dao, Kouichirou Katou, Toshiaki Taniike
Abstract:
Facile electron-hole recombination and a broad band gap are two major drawbacks of titanium dioxide (TiO₂) when applied in visible-light photocatalysis. Hybridization of TiO₂ with graphene is a promising strategy to lessen these pitfalls. Recently, there have been many reports on the synthesis of TiO₂/graphene nanocomposites, in most of which graphene oxide (GO) was used as a starting material. However, the reduction of GO introduces a large number of defects into the graphene framework. In addition, the sensitivity of titanium alkoxide to water (which GO usually contains) significantly obstructs the uniform and controlled growth of TiO₂ on graphene. Here, we demonstrate a novel technique to synthesize TiO₂/graphene nanocomposites without the use of GO. A graphene dispersion was obtained through the chemical exfoliation of graphite in titanium tetra-n-butoxide with the aid of ultrasonication. The dispersion was directly used for the sol-gel reaction in the presence of different catalysts. A TiO₂/reduced graphene oxide (TiO₂/rGO) nanocomposite, prepared by a solvothermal method from GO, and the commercial TiO₂-P25 were used as references. It was found that titanium alkoxide afforded a graphene dispersion of high quality in terms of a trace amount of defects and few-layer dispersed graphene. Moreover, the sol-gel reaction from this dispersion led to TiO₂/graphene nanocomposites featuring promising characteristics for visible-light photocatalysts, including: (I) the formation of a TiO₂ nano-layer (thickness ranging from 1 nm to 5 nm) that uniformly and thinly covered the graphene sheets, (II) a trace amount of defects on the graphene framework (low ID/IG ratio: 0.21), (III) a significant extension of the absorption edge into the visible-light region (a remarkable extension of the absorption edge to 578 nm besides the usual edge at 360 nm), and (IV) a dramatic suppression of electron-hole recombination (the lowest photoluminescence intensity compared to the reference samples). These advantages were successfully demonstrated in the photocatalytic decomposition of methylene blue under visible-light irradiation, where the TiO₂/graphene nanocomposites exhibited 15 and 5 times higher activity than TiO₂-P25 and the TiO₂/rGO nanocomposite, respectively.
Keywords: chemical exfoliation, photocatalyst, TiO₂/graphene, sol-gel reaction
Procedia PDF Downloads 160
996 SiO2-Ag+Chlorex vs SilverSulfaDiazine: An 'in vitro' and 'in vivo' Silver Challenge
Authors: Roberto Cassino, Valeria Dissette, Carlo Alberto Bignozzi, Daniele Pazzi
Abstract:
Background and aims: The aim of this work was to investigate, both 'in vitro' and 'in vivo', whether the new SCX technology (SiO2-Ag+Chlorex) can easily defeat infections and whether it is really more effective than SSD (SilverSulfaDiazine). 'In vitro' methods: we tested 'in vitro' the effectiveness of both silver materials using a pool of 5 strains: Pseudomonas aeruginosa, Staphylococcus aureus, Escherichia coli, Enterococcus hirae, and Candida albicans. 100 µl of this pool were seeded on Petri dishes and kept for 24 hours in incubation at 37 °C. 'In vivo' methods: we enrolled patients with multiple infected chronic wounds (according to the Cutting & Harding criteria for infection); after a qualitative evaluation of the wounds' bacterial population, taking a sample by plug, we included in the study 6 patients with a total of 10 wounds, infected by one or more of the microorganisms used for the 'in vitro' test. The protocol consisted of treatment with an SSD spray powder every 48 hours for 14 days; in case of worsening, a new treatment was to be started with a spray powder containing silicon dioxide, ionic silver, and chlorhexidine (SiO2-Ag+Chlorex) every 48 hours for 14 days. We evaluated the number of clinical signs of infection and the disappearance or persistence of wound-edge erythema. 'In vitro' results: SSD demonstrated a wide zone of inhibition within 24 hours, but after 5 days there were no more signs of inhibition; on the contrary, SCX had a good inhibition ring that lasted more than 5 days. 'In vivo' results: all wounds treated with SSD got worse; the signs of infection increased, and the wound-edge erythema did not disappear. In accordance with the protocol, we then treated all wounds with SCX, and they all improved within the period of observation, with complete disappearance of the clinical signs of infection and no more wound-edge erythema. Conclusions: the study demonstrated the effectiveness of SiO2-Ag+Chlorex, especially in terms of long-lasting antimicrobial action. We had the same results 'in vitro', showing a perfect correspondence between the laboratory outcomes and the clinical ones.
Keywords: chronic wounds, infections, ionic silver, SSD
Procedia PDF Downloads 334
995 Free Vibration Characteristics of Nanoplates with Various Edge Supports Incorporating Surface Free Energy Effects
Authors: Saeid Sahmani
Abstract:
Due to the size-dependent behavior of nanostructures, classical continuum models are not applicable for analyses at the submicron size. The surface stress effect is one of the most important factors that make nanoscale structures behave differently from conventional structures, owing to their high surface-to-volume ratio. In the present study, the free vibration characteristics of nanoplates are investigated, including surface stress effects. To this end, a non-classical plate model based on the Gurtin-Murdoch elasticity theory is proposed to evaluate the surface stress effects on the vibrational behavior of nanoplates subjected to different boundary conditions. The generalized differential quadrature (GDQ) method is employed to discretize the governing non-classical differential equations along with various edge supports. Selected numerical results are given to demonstrate the distinction between the behavior of nanoplates predicted by the classical and the present non-classical plate models, which illustrates the great influence of the surface stress effect. It is observed that this influence depends strongly on the magnitude of the surface elastic constants, which are specific to the selected material.
Keywords: nanomechanics, surface stress, free vibration, GDQ method, small scale effect
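The GDQ discretization underlying such a solution can be sketched in one dimension: the weighting coefficients below follow the standard explicit formula on a Chebyshev-Gauss-Lobatto grid. This is a generic illustration, not the paper's plate solver.

```python
# Minimal sketch (1-D case): generalized differential quadrature (GDQ)
# weighting coefficients for the first derivative on a
# Chebyshev-Gauss-Lobatto grid, a standard building block for plate problems.
import numpy as np

def cgl_points(n: int) -> np.ndarray:
    """Chebyshev-Gauss-Lobatto points mapped to [0, 1]."""
    return 0.5 * (1 - np.cos(np.pi * np.arange(n) / (n - 1)))

def gdq_first_derivative(x: np.ndarray) -> np.ndarray:
    """Weighting matrix A with f'(x_i) ≈ sum_j A[i, j] f(x_j)."""
    n = len(x)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    m_prime = np.prod(diff, axis=1)      # M'(x_i) = prod_{k != i} (x_i - x_k)
    a = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                a[i, j] = m_prime[i] / ((x[i] - x[j]) * m_prime[j])
        a[i, i] = -np.sum(a[i, :])       # each row of A sums to zero
    return a

x = cgl_points(11)
a = gdq_first_derivative(x)
print(np.max(np.abs(a @ np.sin(x) - np.cos(x))))  # tiny residual: derivative check
```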
Procedia PDF Downloads 356
994 Performance Evaluation of Distributed Deep Learning Frameworks in Cloud Environment
Authors: Shuen-Tai Wang, Fang-An Kuo, Chau-Yi Chou, Yu-Bin Fang
Abstract:
2016 became the year of the Artificial Intelligence explosion. AI technologies are maturing to the point that most of the world's well-known tech giants are making large investments to increase their AI capabilities. Machine learning is the science of getting computers to act without being explicitly programmed, and deep learning is a subset of machine learning that uses deep neural networks to train a machine to learn features directly from data. Deep learning realizes many machine learning applications that expand the field of AI. At the present time, deep learning frameworks are widely deployed on servers for deep learning applications in both academia and industry. In training deep neural networks, there are many standard processes and algorithms, but the performance of different frameworks may differ. In this paper, we evaluate the running performance of two state-of-the-art distributed deep learning frameworks that run training calculations in parallel over multiple GPUs and multiple nodes in our cloud environment. We evaluate the training performance of the frameworks with the ResNet-50 convolutional neural network, and we analyze the factors that account for the performance differences between the two distributed frameworks. Through the experimental analysis, we identify overheads that could be further optimized. The main contribution is that the evaluation results provide optimization directions for both performance tuning and algorithmic design.
Keywords: artificial intelligence, machine learning, deep learning, convolutional neural networks
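A minimal throughput benchmark of the kind described, ResNet-50 under PyTorch DistributedDataParallel on synthetic data, is sketched below; PyTorch is used as a stand-in, since the abstract does not name its two frameworks, and the batch size and step count are assumptions.

```python
# Minimal sketch (synthetic data; launch with `torchrun --nproc_per_node=N`):
# measure per-process ResNet-50 training throughput under DistributedDataParallel.
import os
import time

import torch
import torch.distributed as dist
import torchvision
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")             # env vars set by torchrun
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = DDP(torchvision.models.resnet50().cuda(), device_ids=[local_rank])
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()

batch = torch.randn(64, 3, 224, 224, device="cuda")   # synthetic ImageNet batch
labels = torch.randint(0, 1000, (64,), device="cuda")

torch.cuda.synchronize()
start = time.time()
for _ in range(50):
    optimizer.zero_grad()
    loss_fn(model(batch), labels).backward()           # gradients all-reduced here
    optimizer.step()
torch.cuda.synchronize()

imgs_per_s = 50 * 64 / (time.time() - start)
print(f"rank {dist.get_rank()}: {imgs_per_s:.0f} images/s")
dist.destroy_process_group()
```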
Procedia PDF Downloads 211
993 Green Innovation and Artificial Intelligence in Service
Authors: Fatemeh Khalili Varnamkhasti
Abstract:
Numerous nations have recognized the critical need to address environmental issues, such as air pollution, waste disposal, global warming, and natural resource depletion, through the application of green technology. The rise of intelligent technologies has driven structural industrial changes that will help achieve carbon reduction. Artificial intelligence (AI) technology is an important part of digitalization, providing new technical tools and directions for the low-carbon development of enterprises. Accelerating the intelligent transformation of the manufacturing industry is an important strategic choice for achieving the green development transition. The reason manufacturing intelligence can improve green technology innovation (GTI) performance is that it is conducive to producing both a "technology innovation effect" and a "cost reduction effect", thereby promoting green innovation, effectively increasing desirable outputs, and significantly reducing undesirable outputs. AI development will boost GTI only when the intensity of environmental regulation and the institutional environment are above a certain threshold value. However, AI development as represented by industrial robot applications still has no evident effect on GTI, even when R&D investment exceeds a certain threshold.
Keywords: greenhouse gas emissions, green infrastructure, artificial intelligence, environmental protection
Procedia PDF Downloads 70
992 Assessment of Heavy Metal Contamination in Roadside Soils along Shenyang-Dalian Highway in Liaoning Province, China
Authors: Zhang Hui, Wu Caiqiu, Yuan Xuyin, Qiu Jie, Zhang Hanpei
Abstract:
Heavy metal contamination was determined through a detailed survey of roadside soils along the Shenyang-Dalian Highway in Liaoning Province (China), and Pb, Cu, Cd, Ni, and Zn were analyzed using atomic absorption spectrophotometry. The average concentrations of Pb, Cu, Cd, Ni, and Zn in the roadside soils were determined to be 43.8, 26.5, 0.119, 32.1, and 71.3 mg/kg, respectively, and all of the heavy metal contents were higher than the background values. Different heavy metal distribution patterns were found for different land-use types of roadside soil: there was an obvious concentration peak at 25 m from the road edge in farmland, while in forest and orchard soils all heavy metals gradually decreased with increasing distance from the road edge, conforming to an exponential model. Furthermore, the contents of all heavy metals except Cd had markedly increased compared with those measured in 1999 and 2007, and the heavy metal concentrations along the Shenyang-Dalian Highway were considered medium or low in comparison with those in other cities around the world. The assessment of heavy metal contamination of roadside soils indicated generally low pollution for all heavy metals, but it is recommended that more attention be paid to Pb contamination in roadside soils along the Shenyang-Dalian Highway.
Keywords: heavy metal contamination, roadside, highway, Nemerow Pollution Index
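The keywords cite the Nemerow Pollution Index, so a worked sketch using the reported mean concentrations is shown below. The background values are illustrative placeholders, not the study's reference values.

```python
# Worked sketch of the Nemerow Pollution Index cited in the keywords.
# P_i = C_i / S_i (single-factor index); P_N = sqrt((P_mean^2 + P_max^2) / 2).
# Mean concentrations are from the abstract; the background values S_i below
# are illustrative placeholders, NOT the study's actual reference values.
import math

mean_conc = {"Pb": 43.8, "Cu": 26.5, "Cd": 0.119, "Ni": 32.1, "Zn": 71.3}  # mg/kg
background = {"Pb": 21.4, "Cu": 19.8, "Cd": 0.108, "Ni": 25.6, "Zn": 63.5}  # assumed

p = {m: mean_conc[m] / background[m] for m in mean_conc}
p_mean = sum(p.values()) / len(p)
p_max = max(p.values())
p_nemerow = math.sqrt((p_mean**2 + p_max**2) / 2)

print({m: round(v, 2) for m, v in p.items()})
print(f"Nemerow index P_N = {p_nemerow:.2f}")  # <0.7 clean ... >3 heavily polluted
```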
Procedia PDF Downloads 266
991 The Role of Knowledge Management in Global Software Engineering
Authors: Samina Khalid, Tehmina Khalil, Smeea Arshad
Abstract:
Knowledge management is an essential ingredient of successful coordination in globally distributed software engineering. Various frameworks, knowledge management systems (KMSs), and tools have been proposed to foster coordination and communication between virtual teams, but practical implementations of these solutions are rarely found. Organizations face challenges in implementing a knowledge management system. For this purpose, a literature review was first conducted to investigate the challenges that prevent organizations from implementing a KMS; taking these challenges into account, the need was traced for an integrated solution in the form of a standardized KMS that can easily store tacit and explicit knowledge and thus facilitate coordination and collaboration among virtual teams. The literature has already shown that knowledge is a complex concept with profound meanings and one of the most important resources contributing to the competitive advantage of an organization. In order to meet the various challenges caused by improperly managed project knowledge among virtual teams in global software engineering (GSE), we suggest making use of the cloud computing model. In this research, a distributed architecture to support KM storage is proposed, called a conceptual framework of KM as a service in the cloud. The presented framework is enhanced, and the conceptual framework of KM is embedded into it to store project-related knowledge for future use.
Keywords: management, global software development, global software engineering
Procedia PDF Downloads 526
990 Preparation and Characterization of Recycled Polyethylene Terephthalate/Polypropylene Blends from Automotive Textile Waste for Use in the Furniture Edge Banding Sector
Authors: Merve Ozer, Tolga Gokkurt, Yasemen Gokkurt, Ezgi Bozbey
Abstract:
In this study, we investigated the recovery of polyethylene terephthalate/polypropylene (PET/PP)-containing automotive textile waste from the post-production and post-consumer phases of the automotive sector using the upcycling technique, together with the formulation and production methods that would allow these wastes to be substituted, as PP/PET alloys, for the virgin PP raw materials used in plastic edge band production. The laminated structure of these wastes makes it impossible to separate the incompatible PP and PET phases and thus to produce a quality raw material or product by recycling. Within a two-stage production process, block copolymers and maleic-grafted copolymers with different features were examined comprehensively to compatibilize these two incompatible phases. The mechanical, thermal, and morphological properties of the resulting plastic raw materials, referred to as PP/PET blends, were examined in detail and discussed in terms of their substitutability for the virgin raw materials.
Keywords: mechanical recycling, melt blending, plastic blends, polyethylene, polypropylene, recycling of plastics, terephthalate, twin screw extruders
Procedia PDF Downloads 72
989 CRLH and SRR Based Microwave Filter Design Useful for Communication Applications
Authors: Subal Kar, Amitesh Kumar, A. Majumder, S. K. Ghosh, S. Saha, S. S. Sikdar, T. K. Saha
Abstract:
CRLH (composite right/left-handed)-based and SRR (split-ring resonator)-based filters have been designed at microwave frequencies that can provide better performance than a conventional edge-coupled band-pass filter designed around the same frequency, 2.45 GHz. Both the CRLH and the SRR are unit cells used in metamaterial design. The primary aim of designing filters with such structures is to realize size reduction and novel filter performance. The CRLH-based filter has been designed in a microstrip transmission line, while the SRR-based filter is designed with SRR loading in a waveguide. The CRLH-based filter designed at 2.45 GHz provides an insertion loss of 1.6 dB with harmonic suppression up to 10 GHz and a 67% size reduction when compared with a conventional edge-coupled band-pass filter designed around the same frequency. A one-dimensional (1-D) SRR matrix loaded in a waveguide shows the possibility of realizing a stop-band with sharp skirts in the pass-band of a normal rectangular waveguide by tailoring the dimensions of the SRR unit cells. Such filters are expected to be very useful for communication systems at microwave frequencies.
Keywords: BPF, CRLH, harmonic, metamaterial, SRR and waveguide
Procedia PDF Downloads 427
988 Experimental Study of the Fiber Dispersion of Pulp Liquid Flow in Channels with Application to Papermaking
Authors: Masaru Sumida
Abstract:
This study explored the feasibility of improving the hydraulic headbox of papermaking machines by studying the flow of wood-pulp suspensions behind a flat plate inserted in parallel and convergent channels. The pulp fiber concentrations in the wake downstream of the plate were investigated by flow visualization and optical measurements. Changes in the time-averaged value and fluctuations of the fiber concentration along the flow direction were examined. In addition, the control of the flow characteristics in the two channels was investigated. The behaviors of the pulp fibers and the wake flow were found to be strongly related to the flow states in the upstream passages partitioned by the plate. The distribution of the fiber concentration was complex because of the formation of a thin water layer on the plate and the generation of Kármán vortices at the trailing edge of the plate. Compared with the flow in the parallel channel, fluctuations in the fiber concentration decreased in the convergent channel. However, at low flow velocities, the convergent channel has a weak effect on equilibrating the time-averaged fiber concentration. This shows that a rectangular trailing edge cannot adequately disperse pulp suspensions; thus, at low flow velocities, a convergent channel is ineffective in ensuring a uniform fiber concentration.
Keywords: fiber dispersion, headbox, pulp liquid, wake flow
Procedia PDF Downloads 385
987 Scalable CI/CD and Scalable Automation: Assisting in Optimizing Productivity and Fostering Delivery Expansion
Authors: Solanki Ravirajsinh, Kudo Kuniaki, Sharma Ankit, Devi Sherine, Kuboshima Misaki, Tachi Shuntaro
Abstract:
In software development life cycles, the absence of scalable CI/CD significantly impacts organizations, leading to increased overall maintenance costs, prolonged release delivery times, heightened manual effort, and difficulties in meeting tight deadlines. Implementing CI/CD with standard serverless technologies using cloud services overcomes all of the above-mentioned issues and helps organizations improve efficiency and deliver faster without the need to manage server maintenance and capacity. By integrating scalable CI/CD with scalable automation testing, productivity, quality, and agility are enhanced while reducing the need for repetitive work and manual effort. Scalable CI/CD for development can be implemented using cloud services such as ECS (Elastic Container Service), AWS Fargate, ECR (to store Docker images with all dependencies), serverless computing (serverless virtual machines), cloud logging (for monitoring errors and logs), security groups (for inside/outside access to the application), Docker containerization (Docker-based images and container techniques), Jenkins (a CI/CD build management tool), and code management tools (GitHub, Bitbucket, AWS CodeCommit); such a setup can efficiently handle the demands of diverse development environments and accommodate dynamic workloads, increasing efficiency for faster delivery with good quality. CI/CD pipelines encourage collaboration among development, operations, and quality assurance teams by providing a centralized platform for automated testing, deployment, and monitoring. Scalable CI/CD streamlines the development process by automatically fetching the latest code from the repository every time the process starts, building the application based on the branches, testing the application using a scalable automation testing framework, and deploying the builds. Developers can focus more on writing code and less on managing infrastructure, as it scales based on need. Serverless CI/CD eliminates the need to manage and maintain traditional CI/CD infrastructure, such as servers and build agents, reducing operational overhead and allowing teams to allocate resources more efficiently. Scalable CI/CD adjusts the application's scale according to usage, thereby alleviating concerns about scalability, maintenance costs, and resource needs. Creating scalable automation testing using cloud services (ECR, ECS Fargate, Docker, EFS, serverless computing) helps organizations run more than 500 test cases in parallel, aiding in the detection of race conditions and performance issues while reducing execution time. Scalable CI/CD offers flexibility, dynamically adjusting to varying workloads and demands and allowing teams to scale resources up or down as needed. It optimizes costs, since organizations pay only for the resources they use, and it increases reliability. Scalable CI/CD pipelines employ automated testing and validation processes to detect and prevent errors early in the development cycle.
Keywords: achieve parallel execution, cloud services, scalable automation testing, scalable continuous integration and deployment
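As one concrete illustration of the deploy stage, the sketch below builds and pushes a Docker image to ECR and forces a new deployment of an ECS Fargate service with boto3; all resource names are placeholders, and this is not the authors' pipeline code.

```python
# Minimal deploy-stage sketch (placeholder names): build and push an image to
# ECR, then roll the ECS Fargate service so tasks pull the fresh image.
import base64
import subprocess

import boto3

REGION, CLUSTER, SERVICE = "us-east-1", "app-cluster", "app-service"  # assumed
IMAGE = "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest"     # assumed

# Authenticate the local Docker client against ECR.
ecr = boto3.client("ecr", region_name=REGION)
auth = ecr.get_authorization_token()["authorizationData"][0]
user, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")
subprocess.run(["docker", "login", "-u", user, "-p", password,
                auth["proxyEndpoint"]], check=True)

# Build and push the image with all dependencies baked in.
subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)
subprocess.run(["docker", "push", IMAGE], check=True)

# Force a new deployment of the ECS service.
ecs = boto3.client("ecs", region_name=REGION)
ecs.update_service(cluster=CLUSTER, service=SERVICE, forceNewDeployment=True)
print("deployment triggered")
```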
Procedia PDF Downloads 43
986 Enhancing Early Detection of Coronary Heart Disease Through Cloud-Based AI and Novel Simulation Techniques
Authors: Md. Abu Sufian, Robiqul Islam, Imam Hossain Shajid, Mahesh Hanumanthu, Jarasree Varadarajan, Md. Sipon Miah, Mingbo Niu
Abstract:
Coronary Heart Disease (CHD) remains a principal cause of global morbidity and mortality, characterized by atherosclerosis, the build-up of fatty deposits inside the arteries. This study introduces an innovative methodology that leverages cloud-based platforms like AWS Live Streaming and Artificial Intelligence (AI) to detect and prevent CHD symptoms early through web applications. By employing novel simulation processes and AI algorithms, this research aims to significantly mitigate the health and societal impacts of CHD. Methodology: This study introduces a novel simulation process alongside a multi-phased model development strategy. Initially, health-related data, including heart rate variability, blood pressure, lipid profiles, and ECG readings, were collected through user interactions with web-based applications as well as API integration. The novel simulation process involved creating synthetic datasets that mimic early-stage CHD symptoms, allowing for the refinement and training of AI algorithms under controlled conditions without compromising patient privacy. AWS Live Streaming was utilized to capture real-time health data, which was then processed and analyzed using advanced AI techniques. The novel aspect of our methodology lies in the simulation of CHD symptom progression, which provides a dynamic training environment for our AI models, enhancing their predictive accuracy and robustness. Model development: We developed a machine learning model trained on both real and simulated datasets, incorporating a variety of algorithms, including neural networks and ensemble learning models, to identify early signs of CHD. The model's continuous learning mechanism allows it to evolve, adapting to new data inputs and improving its predictive performance over time. Results and findings: The deployment of our model yielded promising results. In the validation phase, it achieved an accuracy of 92% in predicting early CHD symptoms, surpassing existing models. The precision and recall metrics stood at 89% and 91%, respectively, indicating a high level of reliability in identifying at-risk individuals. These results underscore the effectiveness of combining live data streaming with AI in the early detection of CHD. Societal implications: The implementation of cloud-based AI for CHD symptom detection represents a significant step forward in preventive healthcare. By facilitating early intervention, this approach has the potential to reduce the incidence of CHD-related complications, decrease healthcare costs, and improve patient outcomes. Moreover, the accessibility and scalability of cloud-based solutions democratize advanced health monitoring, making it available to a broader population. This study illustrates the transformative potential of integrating technology and healthcare, setting a new standard for the early detection and management of chronic diseases.
Keywords: coronary heart disease, cloud-based AI, machine learning, novel simulation techniques, early detection, preventive healthcare
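The simulate-train-evaluate loop can be illustrated with fully synthetic data; the sketch below trains an ensemble classifier and reports the same accuracy, precision, and recall metrics the study uses. It is a stand-in, not the study's model or data.

```python
# Minimal sketch (fully synthetic data): an ensemble classifier trained on
# simulated early-CHD-style features, evaluated with accuracy/precision/recall.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Simulated cohort: columns standing in for heart-rate variability, blood
# pressure, lipid-profile, and ECG-derived features (an assumption).
X, y = make_classification(n_samples=5000, n_features=12, n_informative=6,
                           weights=[0.8, 0.2], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

model = GradientBoostingClassifier(random_state=42).fit(X_tr, y_tr)
pred = model.predict(X_te)

print(f"accuracy:  {accuracy_score(y_te, pred):.2%}")
print(f"precision: {precision_score(y_te, pred):.2%}")
print(f"recall:    {recall_score(y_te, pred):.2%}")
```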
Procedia PDF Downloads 64
985 Development and Power Characterization of an IoT Network for Agricultural Imaging Applications
Authors: Jacob Wahl, Jane Zhang
Abstract:
This paper describes the development and characterization of a prototype IoT network for use with agricultural imaging and monitoring applications. The sensor and gateway nodes are designed using the ESP32 SoC with integrated Bluetooth Low Energy 4.2 and Wi-Fi. A development board, the Arducam IoTai ESP32, is used for prototyping, testing, and power measurements. Google's Firebase is used as the cloud storage site for image data collected by the sensor. The sensor node captures images using the OV2640 2 MP camera module and transmits the image data to the gateway via Bluetooth Low Energy. The gateway then uploads the collected images to Firebase via a known nearby Wi-Fi network connection. This image data can then be processed and analyzed by computer vision and machine learning pipelines to assess crop growth or other needs. The sensor node achieves a wireless transmission data throughput of 220 kbps while consuming 150 mA of current; the sensor sleeps at 162 µA. The sensor node device lifetime is estimated to be 682 days on a 6600 mAh LiPo battery while acquiring five images per day, based on the development board power measurements. This network can be utilized by any application that requires high data rates, low power consumption, short-range communication, and large amounts of data to be transmitted at low-frequency intervals.
Keywords: Bluetooth Low Energy, ESP32, Firebase cloud, IoT, smart farming
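The lifetime estimate follows from a duty-cycle average of the reported currents. The worked check below assumes roughly 28 seconds of active time per image, a figure not stated in the abstract, which reproduces a lifetime close to the reported 682 days.

```python
# Worked check (assumed duty cycle): average current and battery life for
# five images per day from the abstract's sleep/active current figures.
SLEEP_UA = 162            # sleep current, microamps (from the abstract)
ACTIVE_MA = 150           # active current while capturing/transmitting, mA
ACTIVE_S_PER_IMAGE = 28   # assumed seconds awake per image (not stated)
IMAGES_PER_DAY = 5
BATTERY_MAH = 6600

active_h = IMAGES_PER_DAY * ACTIVE_S_PER_IMAGE / 3600          # hours/day awake
avg_ma = (ACTIVE_MA * active_h + (SLEEP_UA / 1000) * (24 - active_h)) / 24
days = BATTERY_MAH / avg_ma / 24

print(f"average current: {avg_ma:.3f} mA -> lifetime ≈ {days:.0f} days")
# ≈ 679 days with these assumptions, close to the paper's 682-day figure.
```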
Procedia PDF Downloads 138
984 Combination of Unmanned Aerial Vehicle and Terrestrial Laser Scanner Data for Citrus Yield Estimation
Authors: Mohammed Hmimou, Khalid Amediaz, Imane Sebari, Nabil Bounajma
Abstract:
Annual crop production is one of the most important macroeconomic indicators for the majority of countries around the world. This information is valuable, especially for exporting countries, which need a yield estimate before harvest in order to plan the supply chain correctly. When it comes to estimating agricultural yield, especially for arboriculture, conventional methods are mostly applied. In the citrus industry, sale before harvest is widely practiced, which requires an estimate of production while the fruit is still on the tree. However, the conventional method, based on sampling surveys of some trees within the field, is still used to perform yield estimation, and the success of this process mainly depends on the expertise of the 'estimator agent'. The present study aims to propose a methodology based on the combination of unmanned aerial vehicle (UAV) images and terrestrial laser scanner (TLS) point clouds to estimate citrus production. During data acquisition, fixed-wing and rotary drones, as well as a terrestrial laser scanner, were tested. A pre-processing step was then performed in order to generate the point cloud and digital surface model. At the processing stage, a machine vision workflow was implemented to extract points corresponding to fruits from the whole-tree point cloud, cluster them into fruits, and model them geometrically in 3D space. By linking the resulting geometric properties to fruit weight, the yield can be estimated, and the statistical distribution of fruit size can be generated. This latter property, which is information required by citrus-importing countries, cannot be estimated before harvest using the conventional method. Since the terrestrial laser scanner is static, data gathering with this technology can be performed over only some trees. The integration of drone data was therefore considered in order to estimate the yield over a whole orchard. To achieve this, features derived from the drone digital surface model were linked to the laser scanner yield estimates of some trees to build a regression model that predicts the yield of a tree given its features. Several missions were carried out to collect drone and laser scanner data within citrus orchards of different varieties, testing several data acquisition parameters (flight height, image overlap, flight mission plan). The accuracy of the results obtained by the proposed methodology, in comparison to yield estimation by the conventional method, varies from 65% to 94%, depending mainly on the phenological stage of the studied citrus variety during the data acquisition mission. The proposed approach demonstrates strong potential for the early estimation of citrus production and the possibility of extension to other fruit trees.
Keywords: citrus, digital surface model, point cloud, terrestrial laser scanner, UAV, yield estimation, 3D modeling
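The fruit-clustering step can be sketched with DBSCAN on a synthetic tree point cloud, taking each cluster's centroid and mean radius as a crude geometric fruit model; the points and clustering parameters are assumptions, not the study's data or tuned values.

```python
# Minimal sketch (synthetic points, assumed parameters): cluster fruit points
# with DBSCAN, then model each cluster as a sphere (centre + mean radius).
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)

# Synthetic "fruit points": three fruits plus scattered canopy noise, metres.
centers = np.array([[0.0, 0.0, 2.0], [0.5, 0.2, 2.3], [-0.4, 0.1, 1.8]])
fruit_pts = np.vstack([c + rng.normal(0, 0.02, (300, 3)) for c in centers])
noise = rng.uniform(-1, 3, (200, 3))
cloud = np.vstack([fruit_pts, noise])

labels = DBSCAN(eps=0.05, min_samples=20).fit_predict(cloud)  # assumed eps

for k in sorted(set(labels) - {-1}):
    pts = cloud[labels == k]
    center = pts.mean(axis=0)
    radius = np.linalg.norm(pts - center, axis=1).mean()
    print(f"fruit {k}: centre {np.round(center, 2)}, radius ≈ {radius * 100:.1f} cm")
```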
Procedia PDF Downloads 142
983 Image Segmentation Techniques: Review
Authors: Lindani Mbatha, Suvendi Rimer, Mpho Gololo
Abstract:
Image segmentation is the process of dividing an image into several sections, such as the object's background and foreground. It is a critical technique in both image processing tasks and computer vision. Most image segmentation algorithms have been developed for gray-scale images, and little research and few algorithms have been developed for color images. Most image segmentation algorithms or techniques vary based on the input data and the application. Nearly all of the techniques are unsuitable for noisy environments. Most of the work that has been done uses Markov Random Fields (MRFs), which involve heavy computation but are said to be robust to noise. In recent years, image segmentation has been applied to problems such as easy processing of an image, interpretation of image contents, and easy analysis of an image. This article reviews and summarizes some of the image segmentation techniques and algorithms that have been developed over the past years. The techniques include neural networks (CNNs), edge-based techniques, region growing, clustering, and thresholding techniques, among others. The advantages and disadvantages of medical ultrasound image segmentation techniques are also discussed. The article also addresses the applications of image segmentation and the potential future developments around it. This review concludes that no single technique is perfectly suitable for segmenting all the different types of images, but the use of hybrid techniques yields more accurate and efficient results.
Keywords: clustering-based, convolution-network, edge-based, region-growing
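Two of the reviewed families, thresholding and clustering, can be demonstrated in a few lines; the sketch below applies Otsu thresholding and k-means intensity clustering to a built-in test image.

```python
# Minimal sketch of two reviewed families: Otsu thresholding
# (foreground/background split) and k-means clustering segmentation.
import numpy as np
from skimage import data
from skimage.filters import threshold_otsu
from sklearn.cluster import KMeans

image = data.camera()                      # built-in gray-scale test image

# Thresholding-based: one global Otsu threshold.
t = threshold_otsu(image)
binary = image > t
print(f"Otsu threshold {t}: {binary.mean():.0%} of pixels labeled foreground")

# Clustering-based: k-means on pixel intensities into 3 regions.
pixels = image.reshape(-1, 1).astype(float)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pixels)
segmented = labels.reshape(image.shape)
print("region sizes:", np.bincount(labels))
```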
Procedia PDF Downloads 95
982 Obtaining High-Dimensional Configuration Space for Robotic Systems Operating in a Common Environment
Authors: U. Yerlikaya, R. T. Balkan
Abstract:
In this research, a method is developed to obtain a high-dimensional configuration space for path planning problems. In typical cases, path planning problems are solved directly in the 3-dimensional (3-D) workspace. However, this method is inefficient in handling robots with various geometrical and mechanical restrictions. To overcome these difficulties, path planning may be formalized and solved in a new space called the configuration space. The number of dimensions of the configuration space equals the number of degrees of freedom of the system of interest. The method can be applied in two ways. In the first way, the point clouds of all the bodies of the system and their interactions are used. The second way is performed via the clearance function of simulation software, in which the minimum distances between the surfaces of bodies are measured simultaneously. A double-turret system is considered within the scope of this study. The 4-D configuration space of the double-turret system is obtained in both ways. As a result, the difference between the two methods is around 1%, depending on the density of the point cloud, and this disparity steadily decreases as the point cloud density increases. At the end of the study, in order to verify the obtained 4-D configuration space, a 4-D path planning problem was realized as 2-D + 2-D, and a sample path planning was carried out using the A* algorithm. The accuracy of the configuration space was then proved using the obtained paths on the simulation model of the double-turret system.
Keywords: A* algorithm, autonomous turrets, high-dimensional C-space, manifold C-space, point clouds
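The A* step can be illustrated on a toy occupancy grid standing in for one 2-D slice of the configuration space; this generic sketch is not the study's turret model.

```python
# Minimal sketch (toy 2-D grid): A* search over an occupancy grid of the kind
# used for the 2-D + 2-D planning step; 1 = configuration in collision.
import heapq

def astar(grid, start, goal):
    """A* on a 0/1 grid with 4-connected moves and a Manhattan heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((r, c)), g + 1, (r, c), path + [(r, c)]))
    return None  # no collision-free path exists

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 1, 1, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (4, 3)))
```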
Procedia PDF Downloads 139