Search results for: images ocean-saving technology
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9600

9000 Survey on Malware Detection

Authors: Doaa Wael, Naswa Abdelbaky

Abstract:

Malware is malicious software built to perform destructive actions and damage information systems and networks. Malware infections are increasing rapidly, and malware types have become more sophisticated, which makes the detection process more difficult. At the same time, Internet of Things (IoT) technology is vulnerable to malware attacks: IoT devices are always connected to the internet and lack security, which makes them easy for hackers to access, and attacks on them are becoming the go-to attack for hackers. Dealing with this challenge requires new malware detection techniques; in particular, a blockchain solution that allows IoT devices to download any file from the internet and verify whether or not it is malicious is the need of the hour. In recent years, blockchain technology has been put forward as a solution to a wide range of problems due to its decentralization, persistence, and anonymity. Moreover, using blockchain technology overcomes some difficulties in malware detection and improves the detection ratio over techniques that do not utilize blockchain. In this paper, we study malware detection models that are based on blockchain technology. Furthermore, we elaborate on the effect of blockchain technology on malware detection, especially in the Android environment.

Keywords: malware analysis, blockchain, malware attacks, malware detection approaches

Procedia PDF Downloads 61
8999 A Review of Research on Pre-training Technology for Natural Language Processing

Authors: Moquan Gong

Abstract:

In recent years, with the rapid development of deep learning, pre-training technology for natural language processing has made great progress. The field of natural language processing long used word vector methods such as Word2Vec to encode text; these can be regarded as static pre-training techniques. However, such context-free text representations bring very limited improvement to downstream natural language processing tasks and cannot solve the problem of polysemy. ELMo proposed a context-sensitive text representation method that can effectively handle polysemy. Since then, pre-trained language models such as GPT and BERT have been proposed one after another. Among them, BERT significantly improved performance on many typical downstream tasks, greatly advancing the field and ushering it into the era of dynamic pre-training technology. A large number of pre-trained language models based on BERT and XLNet have continued to emerge since, and pre-training has become an indispensable mainstream technology in natural language processing. This article first gives an overview of pre-training technology and its development history; it then introduces in detail the classic pre-training techniques in natural language processing, including early static techniques and classic dynamic techniques, and briefly surveys a series of subsequent influential techniques, including improved models based on BERT and XLNet. On this basis, it analyzes the problems faced by current pre-training research and, finally, looks ahead to future development trends.
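
As a rough illustration of the static-versus-dynamic distinction the abstract draws, the sketch below (an assumption-laden example, not from the paper; it presumes the `transformers` library and downloadable `bert-base-uncased` weights) shows a contextual model assigning different vectors to the same surface word in different contexts, which a static Word2Vec-style lookup table cannot do:

```python
# A minimal sketch: the word "bank" receives context-dependent vectors.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = ["He sat on the river bank.", "She deposited cash at the bank."]
vectors = []
for text in sentences:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    # Locate the token position of "bank" and keep its contextual vector.
    idx = inputs.input_ids[0].tolist().index(tokenizer.convert_tokens_to_ids("bank"))
    vectors.append(hidden[idx])

cos = torch.nn.functional.cosine_similarity(vectors[0], vectors[1], dim=0)
print(f"cosine similarity across contexts: {float(cos):.3f}")  # < 1.0: context-sensitive
```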

Keywords: natural language processing, pre-training, language model, word vectors

Procedia PDF Downloads 32
8998 Challenges Caused by the Integration of Technology as a Pedagogy in One of the Historically Disadvantaged Higher Education Institutions

Authors: Rachel Gugu Mkhasibe

Abstract:

Incorporating technology as a pedagogy has many benefits, for instance, improved pedagogy, increased information access, and increased cooperation and collaboration. However, as good as it may be, this integration has not been widely adopted in most historically Black higher education institutions, especially those in developing countries. For example, given the socioeconomic background of students in historically Black universities and the weak financial support available from these universities, a large population of students struggles to access the recommended modern physical resources such as iPads, laptops, and mobile phones, to name a few. This contributes to an increase in educational inequalities. A qualitative research approach was utilized in this work to gather detailed data about the obstacles created by the integration of technology as a pedagogy. Interviews were conducted to generate data from 20 academics and 10 level-two students from one of the historically disadvantaged higher education institutions in South Africa. The findings revealed that although both students and academics overwhelmingly supported the integration of technology as a pedagogy in their institution, the environment in which they found themselves compromised that integration. Therefore, this paper recommends that the Department of Higher Education and university management intervene and budget for technology to be provided in all institutions of higher education, regardless of where the institutions are situated.

Keywords: collaboration, integration, pedagogy, technology

Procedia PDF Downloads 62
8997 Localization of Mobile Robots with Omnidirectional Cameras

Authors: Tatsuya Kato, Masanobu Nagata, Hidetoshi Nakashima, Kazunori Matsuo

Abstract:

Localization is an important task in developing autonomous mobile robots. This paper proposes a method to estimate the position of a mobile robot using an omnidirectional camera mounted on the robot. Landmarks serving as reference points are set up on the field where the robot works. The omnidirectional camera, which can capture 360-degree images of the surroundings, photographs these landmarks. The robot's position is estimated from the directions of the landmarks, which are extracted from the images by image processing. This method obtains robot positions without accumulating position errors. The accuracy of the positions estimated by the proposed method is evaluated through experiments, and the results show that positions are obtained with small standard deviations. The method therefore has the potential for even more accurate localization through tuning of appropriate offset parameters.
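
A minimal sketch of the bearing-based estimation idea (landmark coordinates are hypothetical and the robot's heading is assumed known; the paper's actual pipeline is not reproduced here): each landmark direction extracted from the omnidirectional image constrains the robot to a line, and stacking the constraints gives a least-squares position with no accumulated odometry error.

```python
import numpy as np

landmarks = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])  # known world positions
true_pos = np.array([1.0, 1.0])

# Bearings the omnidirectional camera would extract from the image.
bearings = np.arctan2(landmarks[:, 1] - true_pos[1], landmarks[:, 0] - true_pos[0])

# Each bearing gives the line (lx - px) sin(t) - (ly - py) cos(t) = 0, i.e. A p = b.
A = np.column_stack([np.sin(bearings), -np.cos(bearings)])
b = np.sin(bearings) * landmarks[:, 0] - np.cos(bearings) * landmarks[:, 1]
estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
print(estimate)  # ~ [1.0, 1.0], independent of any previous pose estimate
```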

Keywords: mobile robots, localization, omnidirectional camera, estimating positions

Procedia PDF Downloads 415
8996 Local Community Participation and the Adoption of Agricultural Technology in Kayunga District, Uganda

Authors: Barbara Kyampeire, Gerald Karyeijja

Abstract:

This study investigated the influence of local community participation on the adoption of new agricultural technology in Uganda, using the case study of Smooth Cayenne pineapples in Kayunga District. The mechanism of adoption of new technologies is often not fully understood, and this prompted the study. The study adopted a descriptive, correlational survey design. The researcher used questionnaire surveys and focus group discussions as data collection methods. A total of 152 respondents, including adopters and non-adopters of the new technology for producing pineapples, were selected from 8 farmer groups in Kayunga District. The results indicated that community participation in the planning, implementation, and monitoring and evaluation of the new technology for producing pineapples was low, thus reducing adoption of the technology in the district. The researcher concluded that community participation significantly influences the adoption of new agricultural technology by members of a particular community. The study thus recommended that: first, there is a need for maximum involvement of community members in the planning, implementation, and monitoring of any new agricultural technology; secondly, there is a need for continued sharing of information about new agricultural technologies being introduced; and finally, community members must be equipped with monitoring and evaluation (M&E) skills in order to monitor the progress made by the new agricultural technologies.

Keywords: adoption, community, technology, implementation

Procedia PDF Downloads 398
8995 The Meaningful Pixel and Texture: Exploring Digital Vision and Art Practice Based on Chinese Cosmotechnics

Authors: Xingdu Wang, Charlie Gere, Emma Rose, Yuxuan Zhao

Abstract:

The study introduces a fresh perspective on the digital realm through an examination of the Chinese concept of Xiang, elucidating how it can build an understanding of pixels and textures on screens as digital trigrams. This concept attempts to offer an outlook on the intersection of digital technology and the natural world, thereby contributing to discussions about the harmonious relationship between humans and technology. The study looks to the ancient Chinese theory of Xiang as a key to establishing theories and practices that respond to the problem of contemporary Chinese technics. Xiang is a Chinese method of understanding the essentials of things through appearances, which differs from the scientific method of the West. Xiang, the foundation of Chinese visual art, is rooted in ancient Chinese philosophy and connected to the eight trigrams. The discussion of Xiang connects art, philosophy, and technology. This paper connects the meaning of Xiang with the 'truth appearing' philosophically, through an analysis of the concepts of phenomenon and noumenon and the unique Chinese way of observing. It then traces the historical interconnection between ancient painting and writing in China, emphasizing the relationship between technical craftsmanship and artistic expression. In the digital realm, the paper theoretically blurs the traditional boundaries between images and text on screens. Lastly, the study identifies an ensemble concept relating pixels and textures in computer vision, drawing inspiration from AI image recognition of Chinese paintings. In art practice, by presenting a fluid visual experience in the form of pixels that mimics the flow of lines in traditional calligraphy and painting, it is hoped that the viewer will be brought back to the process of the truth appearing as defined by Xiang.

Keywords: Chinese cosmotechnics, computer vision, contemporary Neo-Confucianism, texture and pixel, Xiang

Procedia PDF Downloads 38
8994 Enhancement of Underwater Haze Image with Edge Reveal Using Pixel Normalization

Authors: M. Dhana Lakshmi, S. Sakthivel Murugan

Abstract:

As light passes from source to observer in a water medium, it is scattered by suspended particulate matter. This scattering plagues captured images with non-uniform illumination, blurred details, halo artefacts, weak edges, etc. To overcome this, pixel normalization with an Amended Unsharp Mask (AUM) filter is proposed to enhance the degraded image. To validate the robustness of the proposed technique irrespective of atmospheric light, the datasets considered were collected at two locations. For those images, the maximum and minimum pixel intensity values are computed and the image is normalized; the AUM filter is then applied to strengthen the blurred edges. Finally, an enhanced image with good illumination and contrast is obtained. The proposed technique thus removes the scattering effect (de-hazing) and restores the perceptual information with enhanced edge detail. Both qualitative and quantitative analyses are carried out using the standard no-reference metrics underwater image sharpness measure (UISM) and underwater image quality measure (UIQM), which assess color, sharpness, and contrast, on images from both locations. The proposed technique is observed to outperform deep-learning-based enhancement networks and traditional techniques in an adaptive manner.
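
A minimal sketch of the two-stage idea (a plain unsharp mask stands in for the paper's Amended Unsharp Mask, whose exact formulation is not reproduced here; the file name and parameters are illustrative):

```python
import cv2
import numpy as np

def enhance(img: np.ndarray, amount: float = 1.5, sigma: float = 3.0) -> np.ndarray:
    img = img.astype(np.float32)
    # Pixel normalization: stretch each channel to the full [0, 255] range.
    lo, hi = img.min(axis=(0, 1)), img.max(axis=(0, 1))
    norm = (img - lo) / np.maximum(hi - lo, 1e-6) * 255.0
    # Unsharp mask: add back the difference from a Gaussian-blurred copy.
    blur = cv2.GaussianBlur(norm, (0, 0), sigma)
    sharp = cv2.addWeighted(norm, 1.0 + amount, blur, -amount, 0)
    return np.clip(sharp, 0, 255).astype(np.uint8)

enhanced = enhance(cv2.imread("underwater.png"))  # hypothetical input frame
```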

Keywords: underwater drone imagery, pixel normalization, thresholding, masking, unsharp mask filter

Procedia PDF Downloads 177
8993 Videoconference Technology: An Attractive Vehicle for Challenging and Changing Tutors' Practice in an Open and Distance Learning Environment

Authors: Ramorola Mmankoko Ziphorah

Abstract:

Videoconference technology represents a recent experiment in technology integration into teaching and learning in South Africa. Increasingly, videoconference technology is used as a substitute for traditional face-to-face approaches to teaching and learning, helping tutors to reshape and change their teaching practices. Interestingly, though, some studies point out that in the Open and Distance Learning context, tutors commonly use videoconference technology for knowledge dissemination and not so much for the actual teaching of course content. Though videoconference technology has become one of the dominant technologies available in Open and Distance Learning institutions, it is not clear that it has been used effectively to bridge the learning distance in time, geography, and economy. While most tutor preparation programs prepare tutors theoretically for the use of videoconference technology, there are still no practical guidelines on how they should integrate this technology into their course teaching. There is therefore an urgent need to focus on tutor development, specifically on tutors' capacities and skills to use videoconference technology. The assumption is that if tutors become competent in using videoconference technology for course teaching, its use in the Open and Distance Learning environment will become more commonplace. This is the imperative of the Fourth Industrial Revolution (4IR) for education generally. Against the current vacuum in the practice of using videoconference technology for course teaching, the current study proposes a qualitative phenomenological approach to investigate the efficacy of videoconferencing as an approach to student learning. Using interview and observation data from ten participants in an Open and Distance Learning institution, the author discusses how dialogue and structure interacted to provide the participating tutors with a rich set of opportunities to deliver course content. The findings of this study highlight various challenges experienced by tutors when using videoconference technology. The study suggests tutor development programs focused on capacity and skills and on how to integrate this technology with various teaching strategies in order to enhance student learning. The author argues that it is not merely the existence of the structure, namely the videoconference technology, that provides the opportunity for effective teaching; rather, it is the interactions, namely the dialogue among tutors and learners, that make videoconference technology an attractive vehicle for challenging and changing tutors' practice.

Keywords: open distance learning, transactional distance, tutor, videoconference

Procedia PDF Downloads 110
8992 The Detection of Implanted Radioactive Seeds on Ultrasound Images Using Convolution Neural Networks

Authors: Edward Holupka, John Rossman, Tye Morancy, Joseph Aronovitz, Irving Kaplan

Abstract:

A common modality for the treatment of early-stage prostate cancer is the implantation of radioactive seeds directly into the prostate, positioned to achieve optimal radiation dose coverage. The seeds are placed under transrectal ultrasound imaging. Once all of the planned seeds have been implanted, two-dimensional transaxial transrectal ultrasound images separated by 2 mm are obtained throughout the prostate, from the base up to and including the apex. A common deep neural network, called DetectNet, was trained to automatically determine the positions of the implanted radioactive seeds within the prostate under ultrasound imaging. The network was trained using 950 training ultrasound images and 90 validation ultrasound images. The commonly used metrics for successful training were used to evaluate the efficacy and accuracy of the trained network: loss_bbox (train) = 0.00, loss_coverage (train) = 1.89e-8, loss_bbox (validation) = 11.84, loss_coverage (validation) = 9.70, mAP (validation) = 66.87%, precision (validation) = 81.07%, and recall (validation) = 82.29%, where 'train' refers to the training image set and 'validation' to the validation set. On the hardware platform used, training took 12.8 seconds per epoch, and the network was trained for over 10,000 epochs. In addition, the seed locations determined by the deep neural network were compared to those determined by commercial software based on a CT acquired one to three months after the implant. The deep learning approach was within 2.29 mm of the seed locations determined by the commercial software. The deep learning approach to determining radioactive seed locations is robust, accurate, and fast, and in close spatial agreement with the gold standard of CT-determined seed coordinates.

Keywords: prostate, deep neural network, seed implant, ultrasound

Procedia PDF Downloads 175
8991 Deployment of Matrix Transpose in Digital Image Encryption

Authors: Okike Benjamin, Garba E J. D.

Abstract:

Encryption is used to conceal information from prying eyes. Presently, information and data encryption are common due to the volume of data and information in transit across the globe on a daily basis. Image encryption has yet to receive from researchers the attention it deserves; as a result, video and multimedia documents are exposed to unauthorized access. The authors propose image encryption using matrix transpose, and an algorithm implementing it is developed. In the proposed technique, the image to be encrypted is split into parts based on the image size, and each part is encrypted separately using matrix transpose. The actual encryption operates on the picture elements (pixels) that make up the image. After encrypting each part of the image, the positions of the encrypted parts are swapped before transmission takes place. Swapping the positions of the parts makes the encrypted image more resistant to cryptanalysis.
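
A minimal sketch of the described scheme (the block size and swapping permutation are illustrative choices; the paper's exact size-dependent splitting rule is not given here): split the image into parts, transpose each part's pixel matrix, then swap the parts' positions.

```python
import numpy as np

def encrypt(img: np.ndarray, block: int, perm: np.ndarray) -> np.ndarray:
    h, w = img.shape
    out = np.empty_like(img)
    # Transpose the pixel matrix of each square part.
    blocks = [img[r:r + block, c:c + block].T
              for r in range(0, h, block) for c in range(0, w, block)]
    positions = [(r, c) for r in range(0, h, block) for c in range(0, w, block)]
    # Swap part positions according to the shared secret permutation.
    for (r, c), idx in zip(positions, perm):
        out[r:r + block, c:c + block] = blocks[idx]
    return out

img = np.arange(16, dtype=np.uint8).reshape(4, 4)   # toy 4x4 "image"
cipher = encrypt(img, block=2, perm=np.array([3, 2, 1, 0]))
```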

Keywords: image encryption, matrices, pixel, matrix transpose

Procedia PDF Downloads 400
8990 Deep Feature Augmentation with Generative Adversarial Networks for Class Imbalance Learning in Medical Images

Authors: Rongbo Shen, Jianhua Yao, Kezhou Yan, Kuan Tian, Cheng Jiang, Ke Zhou

Abstract:

This study proposes a generative adversarial network (GAN) framework to perform synthetic sampling in feature space, i.e., feature augmentation, to address the class imbalance problem in medical image analysis. A feature extraction network is first trained to convert images into feature space. The GAN framework then incorporates adversarial learning to train a feature generator for the minority class by playing a minimax game with a discriminator. The feature generator generates features for the minority class from arbitrary latent distributions to balance the data between the majority and minority classes. Additionally, a data cleaning technique, the Tomek link, is employed to remove undesirable conflicting features introduced by the feature augmentation and thus establish well-defined class clusters for training. The experiments evaluate the proposed method on two medical image analysis tasks: mass classification on mammograms and cancer metastasis classification on histopathological images. Experimental results suggest that the proposed method obtains superior or comparable performance to state-of-the-art counterparts, improving accuracy by more than 1.5 percentage points over all of them.
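
A minimal sketch of the cleaning step alone (random features stand in for the extractor's and generator's outputs; the `imbalanced-learn` package supplies the Tomek-link removal):

```python
import numpy as np
from imblearn.under_sampling import TomekLinks

rng = np.random.default_rng(0)
X_major = rng.normal(0.0, 1.0, size=(500, 64))   # majority-class features
X_minor = rng.normal(0.5, 1.0, size=(100, 64))   # real minority features
X_gan = rng.normal(0.5, 1.0, size=(400, 64))     # generator output (placeholder)

X = np.vstack([X_major, X_minor, X_gan])
y = np.array([0] * 500 + [1] * 500)

# Remove Tomek links: cross-class nearest-neighbor pairs, so the augmented
# training set keeps well-defined class clusters.
X_clean, y_clean = TomekLinks().fit_resample(X, y)
print(X.shape, "->", X_clean.shape)
```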

Keywords: class imbalance, synthetic sampling, feature augmentation, generative adversarial networks, data cleaning

Procedia PDF Downloads 110
8989 Mobile Systems: History, Technology, and Future

Authors: Shivendra Pratap Singh, Rishabh Sharma

Abstract:

The widespread adoption of mobile technology in recent years has revolutionized the way we communicate and access information. The evolution of mobile systems has been rapid and impactful, shaping how we live and work. However, despite its significant influence, the history and development of mobile technology are not well understood by the general public. This research paper examines the history, technology, and future of mobile systems, exploring their evolution from early mobile phones to the latest smartphones and beyond. The study analyzes the technological advancements and innovations that have shaped the mobile industry, from the introduction of mobile internet and multimedia capabilities to the integration of artificial intelligence and 5G networks. Additionally, the paper addresses the challenges and opportunities facing the future of mobile technology, such as privacy concerns, battery life, and the increasing demand for high-speed internet. Finally, the paper provides insights into potential future developments and innovations in the mobile sector, such as foldable phones, wearable technology, and the Internet of Things (IoT). The purpose of this research paper is to provide a comprehensive overview of the history, technology, and future of mobile systems, shedding light on their impact on society and on the challenges and opportunities that lie ahead.

Keywords: mobile technology, artificial intelligence, networking, IoT, technological advancements, smartphones

Procedia PDF Downloads 73
8988 Sign Language Recognition of Static Gestures Using Kinect™ and Convolutional Neural Networks

Authors: Rohit Semwal, Shivam Arora, Saurav, Sangita Roy

Abstract:

This work proposes a supervised framework with deep convolutional neural networks (CNNs) for vision-based sign language recognition of static gestures. Our approach addresses the acquisition and segmentation of correct inputs for the CNN-based classifier. The Microsoft Kinect™ sensor can track hands efficiently despite complex environmental conditions. Skin-colour-based segmentation is applied to cropped images of hands in different poses depicting different sign language gestures, and the segmented hand images are used as input to our classifier. The proposed CNN classifier is able to classify the input images with a high degree of accuracy. The system was trained and tested on 39 static sign language gestures, including 26 letters of the alphabet and 13 commonly used words. The paper includes a problem definition for building the proposed system, which acts as a sign language translator between deaf/mute people and the rest of society. It then reviews existing knowledge in the area and work done by other researchers, and briefly describes the working principles behind the different components of CNNs. The architecture and system design specifications of the proposed system are discussed in subsequent sections to give the reader a clear picture of the capability required, and the design then gives the top-level details of how the proposed system meets the requirements.
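
A minimal sketch of the segmentation stage (the HSV thresholds are common illustrative values, not the paper's): isolating skin-coloured hand pixels before they reach the CNN classifier.

```python
import cv2
import numpy as np

def segment_hand(bgr: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Keep pixels within an assumed skin-tone range, then despeckle.
    mask = cv2.inRange(hsv, np.array([0, 48, 80]), np.array([20, 255, 255]))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return cv2.bitwise_and(bgr, bgr, mask=mask)  # hand pixels only

hand = segment_hand(cv2.imread("gesture.png"))  # hypothetical cropped frame
```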

Keywords: sign language, CNN, HCI, segmentation

Procedia PDF Downloads 126
8987 A Prospective Study of a Clinically Significant Anatomical Change in Head and Neck Intensity-Modulated Radiation Therapy Using Transit Electronic Portal Imaging Device Images

Authors: Wilai Masanga, Chirapha Tannanonta, Sangutid Thongsawad, Sasikarn Chamchod, Todsaporn Fuangrod

Abstract:

Major factors in radiotherapy for head and neck (HN) cancers include the patient's anatomical changes and tumour shrinkage. These changes can significantly affect the planned dose distribution and cause the treatment plan to deteriorate. Comparing measured transit EPID images against predicted EPID images using gamma analysis has been clinically implemented to verify dose accuracy as part of an adaptive radiotherapy protocol. However, a global gamma analysis is not sensitive to changes in some critical organs, as the entire treatment field is compared. The objective of this feasibility study is to evaluate the dosimetric response to patient anatomical changes during the treatment course in HN intensity-modulated radiation therapy (IMRT) using a novel comparison method: organ-of-interest gamma analysis, which is more sensitive to changes in specific organs. Five replanned HN IMRT patients whose tumour shrinkage and weight loss critically affected parotid size were randomly selected, and their transit dosimetry was evaluated. A comprehensive physics-based model was used to generate a series of predicted transit EPID images for each gantry angle from the original computed tomography (CT) and replan CT datasets. The patient structures, including left and right parotid, spinal cord, and planning target volume (PTV56), were projected to the EPID level. The agreement between the transit images generated from the original CT and the replan CT was quantified using gamma analysis with 3%, 3 mm criteria; additionally, the gamma pass-rate was calculated within each projected structure. The gamma pass-rates in the right parotid and PTV56 between the predicted transit images of the original CT and the replan CT were 42.8% (±17.2%) and 54.7% (±21.5%), respectively, while the gamma pass-rates for the other projected organs were greater than 80%. The results of the organ-of-interest gamma analysis were also compared with 3-dimensional cone-beam computed tomography (3D-CBCT) and the radiation oncologists' rationale for replanning. This showed that registration of 3D-CBCT to the original CT alone does not capture the dosimetric impact of anatomical changes, whereas transit EPID images with organ-of-interest gamma analysis can provide additional information for assessing treatment plan suitability.
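
A minimal sketch of the organ-of-interest idea (assuming the open-source `pymedphys` gamma implementation, 2-D transit images on a common grid, and a precomputed binary mask of the projected structure; none of these choices are from the paper): gamma is computed over the field, but the pass-rate is taken inside the structure only.

```python
import numpy as np
import pymedphys

def roi_pass_rate(axes, predicted, measured, roi_mask):
    # axes = (y_mm, x_mm) coordinate arrays shared by both images.
    gamma = pymedphys.gamma(axes, predicted, axes, measured,
                            dose_percent_threshold=3,
                            distance_mm_threshold=3)
    roi = gamma[roi_mask & ~np.isnan(gamma)]
    return 100.0 * np.mean(roi <= 1.0)  # pass-rate inside the structure only
```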

Keywords: re-plan, anatomical change, transit electronic portal imaging device (EPID), head and neck

Procedia PDF Downloads 201
8986 Scintigraphic Image Coding of Region of Interest Based on SPIHT Algorithm Using Global Thresholding and Huffman Coding

Authors: A. Seddiki, M. Djebbouri, D. Guerchi

Abstract:

Medical imaging produces pictures of the human body in digital form. Since these imaging techniques produce prohibitive amounts of data, compression is necessary for storage and communication purposes. Many current compression schemes provide a very high compression rate but with considerable loss of quality. On the other hand, in some areas of medicine it may be sufficient to maintain high image quality only in the region of interest (ROI). This paper discusses a contribution to lossless compression of the region of interest of scintigraphic images, based on the SPIHT algorithm and global thresholding transform, using Huffman coding.
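
A minimal sketch of the Huffman stage alone (the SPIHT bit-stream and ROI thresholding are out of scope here): building a prefix code from symbol frequencies with Python's `heapq`.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    # Heap entries: [frequency, tiebreaker, {symbol: code-so-far}].
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()} | {s: "1" + c for s, c in c2.items()}
        heapq.heappush(heap, [f1 + f2, i, merged])
        i += 1
    return heap[0][2]

print(huffman_code("scintigraphic roi bits"))  # frequent symbols get shorter codes
```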

Keywords: global thresholding transform, Huffman coding, region of interest, SPIHT coding, scintigraphic images

Procedia PDF Downloads 344
8985 Important Management Competencies: University of Technology Perspective

Authors: Courtley Pharaoh, D. J. Visser

Abstract:

University management is often caught between competing interests from stakeholders such as students, trustees, donors, government, and the community it serves. This study aimed to identify, from the perspective of executive management members themselves, the management competencies required to effectively manage a university of technology in South Africa. The exploratory study used a qualitative methodology to establish which management competencies are deemed important. Due to the COVID-19 pandemic, the study used online face-to-face interviews to ascertain from executive management members of universities of technology which management competencies they need to effectively manage a university of technology in South Africa. Qualitative content analysis was used to analyse the data collected. The findings identified a total of 26 management competencies, categorised into three groupings or themes, as per the lived experience of executive management members. The researcher recommends further studies at traditional and comprehensive universities, comparing their results with those of this study; a comprehensive list of management competencies could then be identified, which could assist in compiling job descriptions for executive management members of universities in South Africa.

Keywords: university of technology, management competencies, executive management, executive management members, important

Procedia PDF Downloads 69
8984 Melanoma and Non-Melanoma, Skin Lesion Classification, Using a Deep Learning Model

Authors: Shaira L. Kee, Michael Aaron G. Sy, Myles Joshua T. Tan, Hezerul Abdul Karim, Nouar AlDahoul

Abstract:

Skin diseases are considered the fourth most common disease, with melanoma and non-melanoma skin cancer being the most common types of cancer in Caucasians. The alarming increase in skin cancer cases shows an urgent need for further research to improve diagnostic methods, as early diagnosis can significantly improve the 5-year survival rate. Machine learning algorithms for image pattern analysis in diagnosing skin lesions can dramatically increase detection accuracy and decrease possible human error. Several studies have shown computer algorithms outperforming dermatologists in diagnostic performance. However, existing methods still need improvement to reduce diagnostic errors and generate efficient and accurate results. Our paper proposes an ensemble method to classify dermoscopic images into benign and malignant skin lesions. The experiments were conducted using International Skin Imaging Collaboration (ISIC) image samples; the dataset contains 3,297 dermoscopic images with benign and malignant categories. The results show improved performance, with an accuracy of 88% and an F1 score of 87%, outperforming existing models such as support vector machines (SVM), ResNet50, EfficientNetB0, EfficientNetB4, and VGG16.
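
A minimal sketch of a two-member probability-averaging ensemble in Keras (member choice, input size, and head are illustrative; the paper's exact members and combination rule are not given here):

```python
import tensorflow as tf

def build_member(backbone_cls):
    base = backbone_cls(include_top=False, weights="imagenet",
                        input_shape=(224, 224, 3), pooling="avg")
    out = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
    return tf.keras.Model(base.input, out)

members = [build_member(tf.keras.applications.VGG16),
           build_member(tf.keras.applications.EfficientNetB0)]
inp = tf.keras.Input(shape=(224, 224, 3))
avg = tf.keras.layers.Average()([m(inp) for m in members])  # benign vs. malignant
ensemble = tf.keras.Model(inp, avg)
```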

Keywords: deep learning, VGG16, EfficientNet, CNN, ensemble, dermoscopic images, melanoma

Procedia PDF Downloads 63
8983 Design and Simulation of Technology Capabilities in Developing Countries: A Design and Engineering Approach

Authors: S. Abedi, M. R. Soroush, M. Mousakhani

Abstract:

Based on studies in the field of technology capabilities, we identify the most important indicators for evaluating the level of design and engineering (D&E) capabilities. Since technology development correlates with the level of technology capabilities, promoting them is of key importance. In this research, using FDM, the right combination of D&E capability indicators for the auto industry is presented. Finally, by modeling the evaluation of D&E capabilities using FIS and checking its reliability, five levels for evaluating D&E capabilities were determined. We analyzed 80 companies in the auto industry and determined the D&E capability level of each. The field-of-activity indicators were divided into four categories: suspension, electrical, engine, and trim groups. The results show that half of the surveyed companies had D&E capabilities at level 1 or 2, in other words, a very low or low level of D&E.

Keywords: developing countries, D&E capabilities, technology capabilities, auto industry

Procedia PDF Downloads 521
8982 New Approaches for the Handwritten Digit Image Features Extraction for Recognition

Authors: U. Ravi Babu, Mohd Mastan

Abstract:

This paper proposes a novel approach to handwritten digit recognition. It extracts digit image features based on a distance measure and derives an algorithm to classify the digit images. The distance measure is computed on the thinned image; thinning is one of the preprocessing techniques in image processing. The paper mainly concentrates on extracting features from the digit image for effective recognition of the numeral. To assess its effectiveness, the proposed method was tested on the MNIST, CENPARMI, and CEDAR databases and on newly collected data. The method was applied to more than one lakh (100,000) digit images and obtained good comparative recognition results, achieving a recognition rate of about 97.32%.
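
A minimal sketch of the thinning-plus-distance idea (the radial histogram here is an illustrative distance feature, not the paper's exact measure):

```python
import numpy as np
from skimage.morphology import skeletonize

def distance_features(binary_digit: np.ndarray, bins: int = 8) -> np.ndarray:
    # Thinning: reduce strokes to a one-pixel-wide skeleton.
    skel = skeletonize(binary_digit > 0)
    ys, xs = np.nonzero(skel)
    cy, cx = ys.mean(), xs.mean()
    # Distance measure: how far each skeleton pixel lies from the centroid.
    dists = np.hypot(ys - cy, xs - cx)
    hist, _ = np.histogram(dists, bins=bins, range=(0, binary_digit.shape[0]))
    return hist / max(len(dists), 1)  # normalized radial distance histogram
```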

Keywords: handwritten digit recognition, distance measure, MNIST database, image features

Procedia PDF Downloads 440
8981 i2kit: A Tool for Immutable Infrastructure Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservice architectures are increasingly common in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency, and time to market for business logic. On the other hand, these architectures also introduce challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution, and isolation of processes. However, other issues remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing, and data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos, or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems, and if an error occurs in the control layer, affecting running applications, specific expertise is required for ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input to i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers. Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set in other microservices through an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing, and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open-source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer entails more significant disadvantages. Resource allocation is greatly improved by using linuxkit, which has a very small footprint (around 35MB), and the system is more secure, since linuxkit installs only the minimum set of dependencies needed to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.

Keywords: container, deployment, immutable infrastructure, microservice

Procedia PDF Downloads 160
8980 Using the Technological, Pedagogical, and Content Knowledge (TPACK) Model to Address College Instructors' Weaknesses in the Integration of Technology in Their Content Area Curricula

Authors: Junior George Martin

Abstract:

The purpose of this study was to explore college instructors' integration of technology in their content area curricula. The instructors indicated that they needed additional training to successfully integrate technology in their subject areas. The findings point to the implementation of a proposed Technological, Pedagogical, and Content Knowledge (TPACK) model professional development workshop to address the instructors' weaknesses in technology integration. The workshop is proposed as a rational solution to the instructors' inability to successfully integrate technology in their subject areas, in an effort to improve their pedagogy. The intensive workshop would last 5 days and would be designed to provide instructors with training in areas such as the use of technology applications and tools and modern methodologies for improving technology integration. Exposing the instructors to these specific areas will address the weaknesses they demonstrated during the study. Professional development is deemed the most appropriate intervention, based on the opportunity it provides instructors for hands-on training to overcome their weaknesses. The purpose of the TPACK professional development workshop is to improve the competence of the instructors so that they are adequately prepared to integrate technology successfully into their curricula. At the end of the training period, the instructors are expected to adopt strategies that will have a positive impact on the learning experiences of their students.

Keywords: higher education, modern technology tools, professional development, technology integration

Procedia PDF Downloads 294
8979 The Role of Information Technology in the Supply Chain Management

Authors: Azar Alizadeh, Mohammad Reza Naserkhaki

Abstract:

The application of IT systems for collecting and analyzing data can have a significant effect on the performance of any company. In the recent decade, advancements and achievements in the field of information technology have changed industry markedly compared to the decade before. The adoption and application of information technology is one way for companies and their supply chains to achieve a distinctive competitive advantage. Acceptance of IT and its proper implementation can reinforce and improve cooperation between different parts of the supply chain through rapid transfer and distribution of precise information and the application of information systems, leading to an increase in supply chain efficiency. The main objective of this research is to study the effects and applications of information technology in supply chain management and to introduce the factors affecting the acceptance of information technology in companies. Moreover, in order to frame the subject, we investigate the role and importance of information and electronic commerce in the supply chain and the characteristics of the supply chain from an information flow perspective.

Keywords: electronic commerce, industry, information technology, management, supply chain, system

Procedia PDF Downloads 464
8978 Improving Temporal Correlations in Empirical Orthogonal Function Expansions for Data Interpolating Empirical Orthogonal Function Algorithm

Authors: Ping Bo, Meng Yunshan

Abstract:

Satellite-derived sea surface temperature (SST) is a key parameter for many operational and scientific applications. However, a disadvantage of SST data is the high percentage of missing values, mainly caused by cloud coverage. The Data Interpolating Empirical Orthogonal Function (DINEOF) algorithm is an EOF-based technique for reconstructing missing data and has been widely used in the oceanographic field. Reconstructing SST images within a long time series using DINEOF can cause large discontinuities; one solution to this problem is to filter the temporal covariance matrix to reduce the spurious variability. Building on previous research, this paper presents an algorithm to improve the temporal correlations in the EOF expansion. As in previous work, a filter, such as a Laplacian filter, is applied to the temporal covariance matrix, but the presented algorithm also accounts for the temporal relationship between two consecutive images used in the filter: for example, two images in the same season are more likely to be correlated than two in different seasons, so the latter are weighted less. The presented approach is tested on the monthly nighttime 4-km Advanced Very High Resolution Radiometer (AVHRR) Pathfinder SST for the long-term period spanning 1989 to 2006. The results from the presented algorithm are compared to those from the original DINEOF algorithm without filtering and from the DINEOF algorithm with filtering but without taking the temporal relationship into account.
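
A minimal sketch of the seasonal weighting idea (the weight function and filter strength are illustrative, not the paper's): a Laplacian-style temporal filter on the anomaly matrix in which out-of-season neighbours contribute less, which correspondingly smooths the temporal covariance.

```python
import numpy as np

def seasonal_weight(month_a: int, month_b: int) -> float:
    # Same-season neighbours get full weight; out-of-season neighbours less.
    gap = min(abs(month_a - month_b), 12 - abs(month_a - month_b))
    return 1.0 / (1.0 + gap)

def filtered_anomalies(X: np.ndarray, months: np.ndarray, alpha: float = 0.1):
    # X: (space, time) SST anomaly matrix; months: calendar month per column.
    Xf = X.copy()
    for t in range(1, X.shape[1] - 1):
        w_prev = seasonal_weight(months[t], months[t - 1])
        w_next = seasonal_weight(months[t], months[t + 1])
        lap = w_prev * (X[:, t - 1] - X[:, t]) + w_next * (X[:, t + 1] - X[:, t])
        Xf[:, t] += alpha * lap
    return Xf  # its temporal covariance matrix is correspondingly smoothed
```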

Keywords: data interpolating empirical orthogonal function, image reconstruction, sea surface temperature, temporal filter

Procedia PDF Downloads 301
8977 TACTICAL: RAM Image Retrieval in Linux Using Protected Mode Architecture’s Paging Technique

Authors: Sedat Aktas, Egemen Ulusoy, Remzi Yildirim

Abstract:

This article explains how to obtain a RAM image from a computer running a Linux operating system and the steps to follow while obtaining it. By taking a RAM image, we mean dumping the physical memory instantly and writing it to a file, a process that can be likened to taking a picture of everything in the computer’s memory at that moment. This process is very important for tools that analyze RAM images, such as Volatility, because the image must be taken before these tools can analyze RAM. Such tools are used extensively in the forensics world; forensics is a set of processes for digitally examining the information on any computer or server on behalf of official authorities. In this article, the protected mode architecture of the Linux operating system is examined, and the procedure for saving an image of kernel driver and system memory to disk is followed. The tables and access methods used by the operating system are examined based on its basic architecture, and the most appropriate methods and their application are presented. Since the literature contains no article directly related to this topic on Linux, this study aims to contribute to the literature on obtaining RAM images. LIME can be mentioned as a similar tool, but there is no published explanation of its memory dumping method. Considering how frequently these tools are used, contributing to the forensics field has been the main motivation of this study, given the intense work on RAM images in forensics.
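
As a small illustration of one preparatory step (not the article's kernel-module dumping method itself): on Linux, the physical "System RAM" ranges that a dump must cover can be enumerated from /proc/iomem, which shows real addresses only when read as root.

```python
def system_ram_ranges(path: str = "/proc/iomem"):
    # Lines look like "00001000-0009ffff : System RAM".
    ranges = []
    with open(path) as f:
        for line in f:
            if line.strip().endswith("System RAM"):
                span = line.split(":")[0].strip()
                start, end = (int(x, 16) for x in span.split("-"))
                ranges.append((start, end))
    return ranges

for start, end in system_ram_ranges():
    print(f"{start:#x}-{end:#x} ({(end - start + 1) >> 20} MiB)")
```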

Keywords: linux, paging, addressing, ram-image, memory dumping, kernel modules, forensic

Procedia PDF Downloads 86
8976 The Use of Remotely Sensed Data to Extract Wetlands Area in the Cultural Park of Ahaggar, South of Algeria

Authors: Y. Fekir, K. Mederbal, M. A. Hammadouche, D. Anteur

Abstract:

The cultural park of the Ahaggar, occupying a large area of Algeria, is characterized by rich wetlands that must be preserved and managed in both time and space. Managing such a large area is complex and requires large amounts of data, most of which are spatially localized (DEM, satellite images, socio-economic information, ...), making conventional and traditional methods quite difficult to apply. Remote sensing, given its efficiency in environmental applications, has become an indispensable solution for this kind of study, and remotely sensed imaging data have proven very useful over the last decade: they can aid in several domains, such as the detection and identification of diverse wetland surface targets, topographical details, and geological features. In this work, we attempt to automatically extract wetland areas using multispectral remotely sensed data from the Earth Observing-1 (EO-1) and Landsat satellites, both of which carry high-resolution multispectral imagers with 30 m resolution. We used images acquired over several areas of interest in the cultural park of the Ahaggar in the south of Algeria. An extraction algorithm is applied to several spectral indices, obtained from combinations of different spectral bands, to extract the wetland fraction of land use. The results show that wetland areas can be accurately distinguished from other land use themes through careful exploitation of the spectral indices.
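
A minimal sketch of one such spectral index (NDWI; the band file names and threshold are illustrative, and the paper's particular index combination is not specified here): water and wetland pixels score high on (green - NIR) / (green + NIR).

```python
import numpy as np
import rasterio

with rasterio.open("LC08_B3_green.tif") as g, rasterio.open("LC08_B5_nir.tif") as n:
    green = g.read(1).astype(np.float32)
    nir = n.read(1).astype(np.float32)

ndwi = (green - nir) / np.maximum(green + nir, 1e-6)
wetland_mask = ndwi > 0.2          # threshold tuned per scene in practice
print(f"wetland fraction: {wetland_mask.mean():.2%}")
```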

Keywords: multispectral data, EO1, landsat, wetlands, Ahaggar, Algeria

Procedia PDF Downloads 360
8975 Legal Aspects in Character Merchandising with Reference to Right to Image of Celebrities

Authors: W. R. M. Shehani Shanika

Abstract:

Selling goods and services using the images, names, and personalities of celebrities has become a common marketing strategy in modern physical and online markets. Globalization and open economies have given businesses numerous reasons to pursue higher profits, and global as well as domestic markets in various countries vigorously endorse the images of famous sports stars, film stars, singers, and cartoon characters to increase demand for the goods and services they promote. It is evident that these trade strategies have become a threat to famous personalities, both financially and personally. The right to one's image is a basic right that celebrities hold to protect themselves from various forms of commercial exploitation. This paper accordingly aims to assess whether the law relating to character merchandising satisfactorily protects celebrities' right to image. Celebrities can decide how much they receive for each representation to the general public; simply put, they have the exclusive right to set a monetary value for their image. Most commonly, however, countries use the law of unfair competition to regulate matters arising in this area, and legal norms on unfair competition are not enough to protect the image of celebrities. Celebrities must be able to prevent the unauthorized commercial use of their images by fraudulent traders who would be unjustly enriched, as their images have economic value; they have the right to use their image for any commercial purpose and earn profits from it. It is therefore high time to recognize the right to image as a new dimension to be protected in the legal framework of character merchandising. Unfortunately, to the author's best knowledge, there is no uniform, single international standard that recognizes the right to image of celebrities in the context of character merchandising. The paper identifies this as a controversial legal barrier faced by celebrities in the rapidly evolving marketplace. Finally, this library-based research concludes with proposals to ensure the right to image more broadly in the legal context of character merchandising.

Keywords: brand endorsement, celebrity, character merchandising, intellectual property rights, right to image, unfair competition

Procedia PDF Downloads 123
8974 Detection of Safety Goggles on Humans in Industrial Environment Using Faster-Region Based on Convolutional Neural Network with Rotated Bounding Box

Authors: Ankit Kamboj, Shikha Talwar, Nilesh Powar

Abstract:

To successfully deliver products to the market, employees need to work in a safe environment, especially in industrial and manufacturing settings. Failure to wear safety glasses while working in industrial plants poses a high risk to employees; hence the need to develop a real-time automatic detection system that detects persons (violators) not wearing safety glasses. In this study, a convolutional neural network (CNN) algorithm called faster region-based CNN (Faster R-CNN) with rotated bounding boxes is used to detect safety glasses on persons; the rotated boxes give the algorithm the advantage of detecting safety glasses at different orientation angles. The proposed method first detects a person in the image and then determines whether the person is wearing safety glasses. The video data is captured at the entrance of restricted zones of the industrial environment (manufacturing plant) and converted into images at 2 frames per second. In the first step, a CNN with pre-trained weights on the COCO dataset is used for person detection, and the detections are cropped as images. The safety goggles are then labelled on the cropped images using the image labelling tool roLabelImg, which is used to annotate the ground truth of rotated objects more accurately; the annotations obtained are further converted to the four corner coordinates of the rotated rectangular bounding box. Next, the Faster R-CNN with rotated bounding boxes is used to detect safety goggles and is compared with the traditional bounding-box Faster R-CNN in terms of detection accuracy (average precision), which shows the effectiveness of the proposed method for detecting rotated objects. The deep learning benchmarking is done on a Dell workstation with a 16GB Nvidia GPU.
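
A minimal sketch of the rotated-box geometry (assuming a center/size/angle convention; the values are illustrative): recovering the four corner coordinates used when annotating and evaluating tilted boxes.

```python
import numpy as np

def rotated_box_corners(cx, cy, w, h, angle_deg):
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    # Axis-aligned half-extent corners, rotated then shifted to the center.
    half = np.array([[-w, -h], [w, -h], [w, h], [-w, h]]) / 2.0
    return half @ rot.T + np.array([cx, cy])  # 4 x 2 corner array

print(rotated_box_corners(50, 40, 30, 10, angle_deg=25))
```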

Keywords: CNN, deep learning, Faster R-CNN, roLabelImg, rotated bounding box, safety goggle detection

Procedia PDF Downloads 117
8973 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation

Authors: Jonathan Gong

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Algorithms designed to improve the experience of medical professionals within their respective fields can improve the efficiency and accuracy of diagnosis. In particular, X-rays are a fast and relatively inexpensive test for diagnosing diseases, yet in recent years X-rays have not been widely used to detect and diagnose COVID-19. This underuse is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field suggests that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and data: The dataset used is the COVID-19 Radiography Database, which includes chest X-ray images and masks under the labels of COVID-19, normal, and pneumonia. The classification model uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning, followed by a deep neural network that finalizes the feature extraction and predicts the diagnosis for the input image. This model was trained on 4,035 images and validated on 807 images separate from those used for training. Importantly, the training images are cropped beforehand to eliminate distractions during training. The image segmentation model uses an improved U-Net architecture to extract the lung mask from the chest X-ray image; it is trained on 8,577 images with a 20% validation split. The models are evaluated on the external dataset, and their accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to improve the experience of medical professionals and provide insight into the future of such methods.
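
A minimal sketch of the transfer-learning portion of the classifier (head sizes are illustrative and the autoencoder branch is omitted; only the DenseNet201 backbone and three-class head follow the abstract):

```python
import tensorflow as tf

base = tf.keras.applications.DenseNet201(include_top=False, weights="imagenet",
                                         input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                     # transfer learning: freeze the features
head = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),  # COVID-19 / normal / pneumonia
])
model = tf.keras.Model(base.input, head(base.output))
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```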

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 109
8972 Seashore Debris Detection System Using Deep Learning and Histogram of Gradients-Extractor Based Instance Segmentation Model

Authors: Anshika Kankane, Dongshik Kang

Abstract:

Marine debris has a significant influence on coastal environments, damaging biodiversity and causing loss and damage to the marine and ocean sectors. A functional, cost-effective, and automatic approach is applied to this problem: computer vision combined with a deep learning-based model is proposed to identify and categorize seven kinds of marine debris at different beach locations in Japan. This research compares state-of-the-art deep learning models with a suggested architecture that serves as a feature extractor for debris categorization. The model detects seven categories of litter from a manually constructed debris dataset, with the help of Mask R-CNN for instance segmentation and a shape matching network called HOGShape; litter can then be cleaned up on time by clean-up organizations using the system's warning notifications. The dataset for this system was created by annotating images taken by a fixed KaKaXi camera using the CVAT annotation tool with seven category labels. A HOG feature extractor pre-trained with LIBSVM is used along with multiple template matching between HOG maps of images and HOG maps of templates to improve the predicted masked images obtained via Mask R-CNN training. The system is intended to alert clean-up organizations promptly, using live recorded beach debris data and warning notifications. The suggested network improves debris masks that were previously misclassified for objects with different illuminations, shapes, and viewpoints, as well as for occluded litter with vague visibility.
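
A minimal sketch of the HOG descriptor extraction (the parameters are common defaults, not necessarily the paper's): these are the maps that template matching compares between a detected debris crop and its shape templates.

```python
from skimage import color, io
from skimage.feature import hog

image = color.rgb2gray(io.imread("debris_crop.png"))  # hypothetical crop
features, hog_map = hog(image, orientations=9, pixels_per_cell=(8, 8),
                        cells_per_block=(2, 2), visualize=True)
print(features.shape)  # flattened descriptor; hog_map supports template matching
```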

Keywords: computer vision, debris, deep learning, fixed live camera images, histogram of gradients feature extractor, instance segmentation, manually annotated dataset, multiple template matching

Procedia PDF Downloads 80
8971 Local Government Digital Attention and Green Technology Innovation: Analysis Based on Spatial Durbin Model

Authors: Xin Wang, Chaoqun Ma, Zheng Yao

Abstract:

Although green technology innovation faces new opportunities and challenges in the digital era, its theoretical research remains limited. Drawing on the attention-based view, this study employs the spatial Durbin model to investigate the impact of local government digital attention and digital industrial agglomeration on green technology innovation across 30 Chinese provinces from 2011 to 2021, as well as the spatial spillover effects present. The results suggest that both government digital attention and digital industrial agglomeration positively influence green technology innovation in local and neighboring provinces, with digital industrial agglomeration exhibiting a positive moderating effect on this direct local and indirect spatial spillover relationship. The findings of this study provide a new theoretical perspective for green technology innovation research and hold valuable implications for the advancement of the attention-based view and green technology innovation.
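
For reference, a standard statement of the spatial Durbin model underlying this kind of analysis (notation is the conventional one, not quoted from the paper):

```latex
% Y: provincial green technology innovation; X: digital attention and
% agglomeration regressors; W: the spatial weight matrix over provinces.
\[
  Y = \rho W Y + X\beta + W X \theta + \varepsilon ,
  \qquad \varepsilon \sim N(0, \sigma^{2} I)
\]
% \rho captures spillover of the outcome itself, while \theta captures
% spillover of neighbours' regressors (the indirect effects reported).
```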

Keywords: local government digital attention, digital industrial agglomeration, green technology innovation, attention-based view

Procedia PDF Downloads 53