Search results for: time workflow network
19915 Efficient Video Compression Technique Using Convolutional Neural Networks and Generative Adversarial Network
Authors: P. Karthick, K. Mahesh
Abstract:
Video has become an increasingly significant component of our everyday digital communication. With the growth of higher-resolution content and displays, its sheer volume poses serious obstacles to receiving, distributing, compressing, and displaying high-quality video content. In this paper, we propose a first end-to-end deep video compression model that jointly optimizes all video compression components. The method involves splitting the video into frames; comparing the images using convolutional neural networks (CNN) to remove duplicates; repeating a single image instead of the duplicate images by recognizing and detecting minute changes using a generative adversarial network (GAN); and recording these with long short-term memory (LSTM). Instead of the complete image, the small changes generated using the GAN are substituted, which enables frame-level compression. Pixel-wise comparison is performed using K-nearest neighbours (KNN) over the frame, clustered with K-means, and singular value decomposition (SVD) is applied to each frame in the video for all three color channels [Red, Green, Blue] to decrease the dimension of the utility matrix [R, G, B] by extracting its latent factors. Video frames are packed with parameters with the aid of a codec, converted to video format, and the results are compared with the original video. Repeated experiments on several videos with different sizes, durations, frames per second (FPS), and quality demonstrate a significant compression rate. On average, the result deviated by approximately 10% in quality and by more than 50% in size when compared with the original video.
Keywords: video compression, K-means clustering, convolutional neural network, generative adversarial network, singular value decomposition, pixel visualization, stochastic gradient descent, frame per second extraction, RGB channel extraction, self-detection and deciding system
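The frame-level duplicate removal described above can be illustrated with a minimal pixel-wise comparison. This is only a sketch: the flat list-of-RGB-tuples frame representation and the 1% change threshold are illustrative assumptions, not the paper's CNN/KNN pipeline.

```python
def frame_diff_ratio(frame_a, frame_b):
    """Fraction of pixels that differ between two equally sized frames.

    Frames are toy flat lists of (R, G, B) tuples; a stand-in for the
    CNN/KNN comparison described in the abstract.
    """
    if len(frame_a) != len(frame_b):
        raise ValueError("frames must have the same number of pixels")
    changed = sum(1 for a, b in zip(frame_a, frame_b) if a != b)
    return changed / len(frame_a)


def is_duplicate(frame_a, frame_b, threshold=0.01):
    """Treat frames as duplicates when at most 1% of pixels change (assumed cutoff)."""
    return frame_diff_ratio(frame_a, frame_b) <= threshold
```

A duplicate frame would then be dropped and replaced by a repeat of the previous frame, so only the detected small changes need to be encoded.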
Procedia PDF Downloads 187
19914 Heroin Withdrawal, Prison and Multiple Temporalities
Authors: Ian Walmsley
Abstract:
The aim of this paper is to explore the influence of time and temporality on the experience of coming off heroin in prison. The presentation draws on qualitative data collected during a small-scale pilot study of the role of self-care in the process of coming off drugs in prison. Time and temporality emerged as a key theme in the interview transcripts. Drug-dependent prisoners' experience of time in prison has not been recognized in the research literature. Instead, the literature on prison time typically views prisoners as a homogenous group or tends to focus on the influence of aging and gender on prison time. Furthermore, there is a tendency in the literature on prison drug treatment and recovery to conceptualize drug-dependent prisoners as passive recipients of prison healthcare rather than active agents. Building on these gaps, this paper argues that drug-dependent prisoners experience multiple temporalities involving an interaction between the body-times of the drug-dependent prisoner and the economy of time in prison. One consequence of this interaction is the feeling that, at this point in their prison sentence, they are doing double prison time. The second part of the argument is that time and temporality were a means through which they governed their withdrawing bodies. In addition, this paper will comment on the challenges of prison research in England.
Keywords: heroin withdrawal, time and temporality, prison, body
Procedia PDF Downloads 276
19913 High Resolution Image Generation Algorithm for Archaeology Drawings
Authors: Xiaolin Zeng, Lei Cheng, Zhirong Li, Xueping Liu
Abstract:
Aiming at the low accuracy and the susceptibility to cultural relic deterioration of current image generation algorithms when producing high-resolution archaeology drawings, an archaeology drawings generation algorithm based on a conditional generative adversarial network is proposed. An attention mechanism is added to the backbone high-resolution image generation network, which enhances the line feature extraction capability and improves the accuracy of line drawing generation. A dual-branch parallel architecture consisting of two backbone networks is implemented, where the semantic translation branch extracts semantic features from orthophotographs of cultural relics, and the gradient screening branch extracts effective gradient features. Finally, a fusion fine-tuning module combines these two types of features to generate high-quality, high-resolution archaeology drawings. Experimental results on a self-constructed archaeology drawings dataset of grotto temple statues show that the proposed algorithm outperforms current mainstream image generation algorithms in terms of pixel accuracy (PA), structural similarity (SSIM), and peak signal-to-noise ratio (PSNR), and can be used to assist in producing archaeology drawings.
Keywords: archaeology drawings, digital heritage, image generation, deep learning
Procedia PDF Downloads 58
19912 Participatory Planning of the III Young Sea Meeting: An Experience of the Young Albatroz Collective
Authors: Victor V. Ribeiro, Thais C. Lopes, Rafael A. A. Monteiro
Abstract:
The Albatroz, Baleia Jubarte, Coral Vivo, Golfinho Rotador and Tamar projects make up the Young Sea Network (YSN), part of the BIOMAR Network, which aims to integrate the environmental youth groups of the Brazilian coast. To this end, three editions of the Young Sea Meeting (YSM) have been held. Seeking to stimulate belonging, self-knowledge, participation, autonomy and youth protagonism, the Albatroz Project hosted the III YSM in Bertioga (SP) in April 2019 and aimed to plan the meeting collectively. Five pillars of environmental education were used: identity, community, dialogue, power to act and happiness, together with the OCA Method and the principles Young Educates Young; Young Chooses Young; and One Generation Learns from the Other. In December 2018, still during the II YSM, the participatory planning of the III YSM began. Two "representatives" of each group were voluntarily elected to facilitate joint decisions and to propose, receive and communicate demands from their groups and coordinators. The Young Albatroz Collective (YAC) facilitated the organization process as a whole. The purpose of the meeting was collectively constructed by answering the following question: "What is the YSM for?". Only two of the five pairs of representatives responded. It was difficult to gather the young people in each group, because it was the end of the year and people were traveling. Thus, owing to the short planning time, the YAC built a pre-programme to be validated by the other groups, defining the strengthening of youth protagonism within the YSN as the objective of the meeting. During the planning process, the YAC held 20 meetings, totalling 60 hours of face-to-face work over three months, plus two technical visits to the headquarters of the III YSM. The participatory dynamics of consultation, when it occurred, required up to two weeks, evidencing the limits of participation. The project coordinators stated that they were not being included in the process by their young people.
More work is needed to enable participation, developing skills and understanding of its principles. This training must take place in an articulated way across the network, which implies an important role for the five projects in jointly developing and implementing educational processes with this objective at a national scale, without forgetting the specificities of each youth group. Finally, it is worth highlighting the great potential of the III YSM in stimulating the exercise of environmental youth leadership among more than 50 young people from the Brazilian coast linked to the YSN, encouraging learning and the mobilization of young people in favor of coastal and marine conservation.
Keywords: marine conservation, environmental education, youth, participation, planning
Procedia PDF Downloads 166
19911 Optimized Scheduling of Domestic Load Based on User Defined Constraints in a Real-Time Tariff Scenario
Authors: Madia Safdar, G. Amjad Hussain, Mashhood Ahmad
Abstract:
One of the major challenges of today's era is peak demand, which stresses transmission lines, raises the cost of energy generation, and ultimately leads to higher electricity bills for end users. Peak demand used to be managed on the supply side; nowadays, however, that approach has given way to demand side management (DSM), with its economic and environmental advantages. DSM of domestic load can play a vital role in reducing peak demand on the network and provides significant cost savings. In this paper, the potential of demand response (DR) in reducing peak load demand and electricity bills for electricity users is elaborated. For this purpose, the domestic appliances are modeled in MATLAB Simulink and controlled by a module called the energy management controller. The devices are categorized into controllable and uncontrollable loads and are operated according to a real-time tariff pricing pattern instead of fixed-time or variable pricing. The energy management controller decides the switching instants of the controllable appliances based on the results of optimization algorithms. In GAMS software, a mixed integer linear programming (MILP) algorithm is used for the optimization. In different cases, different constraints are used, considering the comforts, needs, and priorities of the end users. Results under real-time pricing and fixed tariff pricing are compared, and the savings in electricity bills are discussed, demonstrating the potential of demand side management to reduce electricity bills and peak loads. Using a real-time pricing tariff instead of a fixed tariff helps to lower electricity bills. Moreover, the simulation results of the proposed energy management system show substantial power savings.
It is anticipated that the results of this research will prove highly effective for utility companies as well as for the improvement of domestic DR.
Keywords: controllable and uncontrollable domestic loads, demand response, demand side management, optimization, MILP (mixed integer linear programming)
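The idea of choosing switching instants for controllable appliances under a real-time tariff can be sketched as a toy exhaustive search over start slots. This is only an illustration of the objective being minimized: the tariff values and appliance parameters below are hypothetical, and the paper itself solves the problem as a MILP in GAMS rather than by enumeration.

```python
from itertools import product


def schedule_cost(tariff, appliances, starts):
    """Total running cost given a start slot per appliance.

    tariff:     price per kWh for each hourly slot (real-time pricing, assumed).
    appliances: list of (duration_hours, load_kw) tuples (assumed values).
    starts:     chosen start slot for each appliance.
    """
    cost = 0.0
    for (duration, load), start in zip(appliances, starts):
        cost += sum(tariff[start + h] * load for h in range(duration))
    return cost


def best_schedule(tariff, appliances, windows):
    """Exhaustively pick start slots within each allowed window to minimise cost."""
    best = None
    for starts in product(*windows):
        c = schedule_cost(tariff, appliances, starts)
        if best is None or c < best[1]:
            best = (starts, c)
    return best
```

User-defined constraints (comfort, priority) would appear here as restrictions on the allowed start windows; a MILP solver handles the same structure at realistic scale.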
Procedia PDF Downloads 302
19910 Green Wave Control Strategy for Optimal Energy Consumption by Model Predictive Control in Electric Vehicles
Authors: Furkan Ozkan, M. Selcuk Arslan, Hatice Mercan
Abstract:
Electric vehicles (EVs) are becoming increasingly popular as a sustainable alternative to traditional combustion engine vehicles. However, to fully realize the potential of EVs in reducing environmental impact and energy consumption, efficient control strategies are essential. This study explores the application of green wave control using model predictive control (MPC) for electric vehicles, coupled with energy consumption modeling using neural networks. The use of MPC allows for real-time optimization of the vehicle's energy consumption while considering dynamic traffic conditions. By leveraging neural networks for energy consumption modeling, the EV's performance can be further enhanced through accurate predictions and adaptive control. The integration of these advanced control and modeling techniques aims to maximize energy efficiency and range while navigating urban traffic scenarios. The findings offer valuable insights into the potential of green wave control for electric vehicles and demonstrate the significance of integrating MPC and neural network modeling for optimizing energy consumption. This work contributes to the advancement of sustainable transportation systems and the widespread adoption of electric vehicles. To evaluate the effectiveness of the green wave control strategy in real-world urban environments, extensive simulations were conducted using a high-fidelity vehicle model and realistic traffic scenarios. The results indicate that integrating model predictive control with neural-network energy consumption modeling had a significant impact on the energy efficiency and range of electric vehicles. Through the use of MPC, the electric vehicle was able to adapt its speed and acceleration profile in real time to optimize energy consumption while maintaining travel time objectives.
The neural network-based energy consumption model provided accurate predictions, enabling the vehicle to anticipate and respond to variations in traffic flow, further enhancing energy efficiency and range. Furthermore, the study revealed that the green wave control strategy not only reduced energy consumption but also improved the overall driving experience by minimizing abrupt acceleration and deceleration, leading to a smoother and more comfortable ride for passengers. These results demonstrate the potential of green wave control to enhance the performance of electric vehicles and contribute to a more sustainable and efficient urban mobility ecosystem.
Keywords: electric vehicles, energy efficiency, green wave control, model predictive control, neural networks
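The receding-horizon logic behind MPC, as used above, can be sketched in a few lines: at each step, enumerate short acceleration sequences, score each with an energy proxy plus a travel-time penalty, apply only the first action, and repeat. Everything numeric here (action set, drag coefficient, speed penalty, horizon) is an illustrative assumption, not the study's high-fidelity model.

```python
from itertools import product


def mpc_step(v, horizon=3, dt=1.0, v_max=15.0,
             actions=(-1.0, 0.0, 1.0), drag=0.05):
    """One receding-horizon step: pick the first acceleration of the
    cheapest short sequence. A toy stand-in for the MPC in the abstract."""
    def energy(seq):
        speed, e = v, 0.0
        for a in seq:
            speed = min(max(speed + a * dt, 0.0), v_max)
            # traction-power proxy: accelerating and overcoming drag cost energy
            e += max(0.0, a * speed) * dt + drag * speed ** 2 * dt
        return e, speed

    best = None
    for seq in product(actions, repeat=horizon):
        e, final_speed = energy(seq)
        # penalise falling below an assumed target cruising speed (travel-time proxy)
        cost = e + 5.0 * max(0.0, 8.0 - final_speed)
        if best is None or cost < best[0]:
            best = (cost, seq)
    return best[1][0]
```

In the actual study, the energy function would be the trained neural-network consumption model and the constraints would include the traffic-light green-wave timing.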
Procedia PDF Downloads 54
19909 Enhancing Internet of Things Security: A Blockchain-Based Approach for Preventing Spoofing Attacks
Authors: Salha Abdullah Ali Al-Shamrani, Maha Muhammad Dhaher Aljuhani, Eman Ali Ahmed Aldhaheri
Abstract:
With the proliferation of Internet of Things (IoT) devices in various industries, there has been a concurrent rise in security vulnerabilities, particularly spoofing attacks. This study explores the potential of blockchain technology in enhancing the security of IoT systems and mitigating these attacks. Blockchain's decentralized and immutable ledger offers significant promise for improving data integrity, transaction transparency, and tamper-proofing. This research develops and implements a blockchain-based IoT architecture and a reference network to simulate real-world scenarios and evaluate a blockchain-integrated intrusion detection system. Performance measures including time delay, security, and resource utilization are used to assess the system's effectiveness, comparing it to conventional IoT networks without blockchain. The results provide valuable insights into the practicality and efficacy of employing blockchain as a security mechanism, shedding light on the trade-offs between speed and security in blockchain deployment for IoT. The study concludes that despite minor increases in time consumption, the security benefits of incorporating blockchain technology into IoT systems outweigh potential drawbacks, demonstrating a significant potential for blockchain in bolstering IoT security.
Keywords: internet of things, spoofing, IoT, access control, blockchain, raspberry pi
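The tamper-evidence property that underpins the spoofing defence described above can be sketched with a minimal hash-chained ledger: each block's hash covers its payload and the previous block's hash, so altering any recorded sensor reading invalidates the chain. The block fields below are illustrative assumptions, not the authors' actual architecture.

```python
import hashlib
import json


def make_block(index, payload, prev_hash):
    """Create a block whose hash covers its contents and the previous hash."""
    block = {"index": index, "payload": payload, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block


def chain_is_valid(chain):
    """Recompute every hash; any tampering (e.g. a spoofed reading) breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("index", "payload", "prev_hash")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True
```

A real deployment adds consensus among nodes, which is where the time-delay overhead reported in the study comes from.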
Procedia PDF Downloads 74
19908 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning
Authors: Akeel A. Shah, Tong Zhang
Abstract:
Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine learning assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions - many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT).
The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and the learned correction. Although this requires a low-fidelity calculation, far fewer high-fidelity results are needed to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs the result can be a speed-up of an order of magnitude or more. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in, for example, SchNet and MEGNet. The graph incorporates information regarding the numbers, types and properties of atoms; the types of bonds; and bond angles. The key to the accuracy of multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on four different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment and HOMO/LUMO.
Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning
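The Δ-ML strategy described above can be made concrete with a deliberately tiny sketch: fit the correction Δ = high − low as a linear function of the low-fidelity output via least squares, then predict the high-fidelity value as low-fidelity result plus learned correction. The linear form is an illustrative assumption; the paper's method learns the correction with a graph convolutional network.

```python
def fit_delta_ml(low, high):
    """Least-squares fit of the correction Δ = high − low as a line in `low`.

    A toy stand-in for Δ-ML: the prediction returned is
    low-fidelity value + learned correction.
    """
    n = len(low)
    deltas = [h - l for h, l in zip(high, low)]
    mean_x = sum(low) / n
    mean_d = sum(deltas) / n
    sxx = sum((x - mean_x) ** 2 for x in low)
    slope = sum((x - mean_x) * (d - mean_d)
                for x, d in zip(low, deltas)) / sxx
    intercept = mean_d - slope * mean_x

    def predict(low_value):
        return low_value + (slope * low_value + intercept)

    return predict
```

Because only the (presumably smooth) correction is learned, far fewer high-fidelity training points are needed than for learning the high-fidelity output directly.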
Procedia PDF Downloads 40
19907 The Diurnal and Seasonal Relationships of Pedestrian Injuries Secondary to Motor Vehicles in Young People
Authors: Amina Akhtar, Rory O'Connor
Abstract:
Introduction: There remains significant morbidity and mortality in young pedestrians hit by motor vehicles, even in the era of pedestrian crossings and speed limits. The aim of this study was to compare the incidence and injury severity of motor vehicle-related pedestrian trauma according to time of day and season in a young population, based on the supposition that injuries would be more prevalent during dusk and dawn and during autumn and winter. Methods: Data were retrieved for patients between 10 and 25 years old from the national Trauma Audit and Research Network (TARN) database who had been involved as pedestrians in motor vehicle accidents between 2015 and 2020. The incidence of injuries, their severity (using the Injury Severity Score [ISS]), hospital transfer time, and mortality were analysed according to the hours of daylight, darkness, and season. Results: The study identified a seasonal pattern: autumn was the predominant season, accounting for 34.9% of injuries, with a further 25.4% in winter, in comparison to spring and summer with 21.4% and 18.3% of injuries, respectively. However, visibility alone was not a sufficient factor, as 49.5% of injuries occurred during hours of darkness while 50.5% occurred during daylight. Importantly, the greatest injury rate (number of injuries per hour) occurred between 15:00 and 16:30, corresponding to school pick-up times. A further significant relationship between injury severity score (ISS) and daylight was demonstrated (p = 0.0124), with moderate injuries (ISS 9-14) occurring most commonly during the day (72.7%) and more severe injuries (ISS > 15) occurring during the night (55.8%). Conclusion: We have identified a relationship between time of day and the frequency and severity of pedestrian trauma in young people. In addition, particular time groupings correspond to the greatest injury rate, suggesting that reduced visibility coupled with school pick-up times may play a significant role.
This could be addressed through a targeted public health approach to implementing change. We recommend targeted public health measures to improve road safety that focus on these times, increase the visibility of children, and are combined with education for drivers.
Keywords: major trauma, paediatric trauma, road traffic accidents, diurnal pattern
Procedia PDF Downloads 101
19906 Using Data from Foursquare Web Service to Represent the Commercial Activity of a City
Authors: Taras Agryzkov, Almudena Nolasco-Cirugeda, Jose L. Oliver, Leticia Serrano-Estrada, Leandro Tortosa, Jose F. Vicent
Abstract:
This paper aims to represent the commercial activity of a city taking the social network Foursquare as its data source. The city of Murcia is selected as a case study, and the location-based social network Foursquare is the main source of information. After reorganising the user-generated data extracted from Foursquare, it is possible to graphically display on a map the various city spaces and venues, especially those related to commercial, food and entertainment sector businesses. The resulting visualisation provides information about activity patterns in the city of Murcia according to people's interests and preferences and, moreover, interesting facts about certain characteristics of the town itself.
Keywords: social networks, spatial analysis, data visualization, geocomputation, Foursquare
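One simple way the reorganised venue data described above could be summarised before mapping is an activity profile per category. The record format assumed here, (category, check-in count) pairs, is an illustration and not the paper's actual Foursquare schema.

```python
from collections import Counter


def activity_profile(venues):
    """Share of check-ins per venue category.

    `venues` mimics reorganised Foursquare records as
    (category, checkins) pairs (assumed format).
    """
    totals = Counter()
    for category, checkins in venues:
        totals[category] += checkins
    grand = sum(totals.values())
    return {cat: count / grand for cat, count in totals.items()}
```

Plotted per neighbourhood, such shares make the commercial, food and entertainment patterns the paper maps directly comparable across city spaces.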
Procedia PDF Downloads 426
19905 Prediction of Sepsis Illness from Patients Vital Signs Using Long Short-Term Memory Network and Dynamic Analysis
Authors: Marcio Freire Cruz, Naoaki Ono, Shigehiko Kanaya, Carlos Arthur Mattos Teixeira Cavalcante
Abstract:
The systems that record patient care information, known as Electronic Medical Records (EMR), and those that monitor the vital signs of patients, such as heart rate, body temperature, and blood pressure, have been extremely valuable for the effectiveness of patient treatment. Several studies have used data from EMRs and patients' vital signs to predict illnesses. Among them, we highlight those that intend to predict, classify, or at least identify patterns of sepsis in patients under vital-sign monitoring. Sepsis is an organ dysfunction caused by a dysregulated patient response to an infection, and it affects millions of people worldwide. Early detection of sepsis is expected to provide a significant improvement in its treatment. Preceding works usually combined medical, statistical, mathematical and computational models to develop early-prediction methods, seeking higher accuracies with the smallest number of variables. Among other techniques, studies using survival analysis, expert systems, machine learning and deep learning have reached great results. In our research, patients are modeled as points moving each hour in an n-dimensional space, where n is the number of vital signs (variables). These points can reach a sepsis target point after some time. For now, the sepsis target point is calculated using the median of all patients' variables at sepsis onset. From these points, we calculate for each hour the position vector, the first derivative (velocity vector) and the second derivative (acceleration vector) of the variables to evaluate their behavior, and we construct a prediction model based on a Long Short-Term Memory (LSTM) network, including these derivatives as explanatory variables. The accuracy of prediction 6 hours before the time of sepsis, considering only the vital signs, reached 83.24%; by including the position, velocity, and acceleration vectors, we obtained 94.96%.
The data are collected from the Medical Information Mart for Intensive Care (MIMIC) database, a public database that contains vital signs, laboratory test results, observations, notes, and so on, from more than 60,000 patients.
Keywords: dynamic analysis, long short-term memory, prediction, sepsis
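The position/velocity/acceleration features described above amount to finite differences over each hourly vital-sign series. A minimal sketch, with the assumption that the first entries are padded with zero (the paper does not specify its boundary handling):

```python
def dynamic_features(series, dt=1.0):
    """Position, velocity (first difference) and acceleration (second
    difference) for one hourly vital-sign series, as in the abstract's
    dynamic analysis. First entries are zero-padded (assumed convention)."""
    velocity = [0.0] + [(series[i] - series[i - 1]) / dt
                        for i in range(1, len(series))]
    acceleration = [0.0] + [(velocity[i] - velocity[i - 1]) / dt
                            for i in range(1, len(velocity))]
    return list(zip(series, velocity, acceleration))
```

Stacking these triples across all n vital signs yields the explanatory variables fed to the LSTM at each hour.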
Procedia PDF Downloads 125
19904 Development of Deep Neural Network-Based Strain Values Prediction Models for Full-Scale Reinforced Concrete Frames Using Highly Flexible Sensing Sheets
Authors: Hui Zhang, Sherif Beskhyroun
Abstract:
Structural health monitoring (SHM) systems are commonly used to identify and assess structural damage. For damage detection, SHM needs to periodically collect data from sensors placed in the structure as damage-sensitive features, including abnormal changes in the strain field and abnormal symptoms of the structure, such as damage and deterioration. Currently, deploying sensors on a large scale in a building structure is a challenge. In this study, highly stretchable strain sensors are used to collect data sets of the strain generated on the surface of full-size reinforced concrete (RC) frames under extreme cyclic load application. The sensing sheet can be switched freely between measuring bending strain and axial strain, giving two different configurations. On this basis, deep neural network prediction models for the frame beam and frame column are established. The training results show that the method can accurately predict strain values and has good generalization ability. The two deep neural network prediction models will also be deployed in the SHM system in the future as part of an intelligent strain sensor system.
Keywords: strain sensing sheets, deep neural networks, strain measurement, SHM system, RC frames
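At inference time, a strain-prediction network like the one above reduces to a forward pass mapping sensing-sheet readings to a strain value. The sketch below shows a one-hidden-layer version; the weights are placeholders for illustration, not trained values from the study, whose actual architecture is deeper.

```python
def relu(x):
    """Rectified linear unit, a common hidden-layer activation."""
    return max(0.0, x)


def mlp_forward(inputs, w1, b1, w2, b2):
    """Forward pass of a one-hidden-layer network: sensing-sheet
    readings in, a single predicted strain value out."""
    hidden = [relu(sum(w * x for w, x in zip(row, inputs)) + b)
              for row, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2
```

Separate weight sets would be trained for the beam model and the column model, matching the two configurations of the sensing sheet.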
Procedia PDF Downloads 99
19903 Authentic Connection between the Deity and the Individual Human Being Is Vital for Psychological, Biological, and Social Health
Authors: Sukran Karatas
Abstract:
Authentic energy network interrelations between the Creator and the creations, as well as between creations, are the most important points for the worlds of physics and metaphysics to unite and work in harmony within human beings. Human beings, on the other hand, have the ability to choose their own lifestyle voluntarily; this includes the automated, involuntary spirit, soul and body working systems, together with voluntary actions that involve personal, cultural and universal, rational or irrational variable values. Therefore, it is necessary for human beings to know the methods of existing authentic energy network connections to be able to communicate, correlate and accommodate the physical and metaphysical entities as a properly functioning unity; this is essential for complete human psychological, biological and social well-being. Authentic knowledge is necessary for human beings to verify the position of self within self and with others, and to regulate conscious and voluntary actions accordingly in order to prevent oppression and friction within self and between self and others. Unfortunately, the absence of genuine individual and universal basic knowledge about how to establish an authentic energy network connection within self, with the deity and with the environment is the most problematic issue, even in the twenty-first century. The second most problematic issue is how to maintain freedom, equality and justice among human beings during these strictly interwoven network connections, which naturally involve the physical, metaphysical and behavioral actions of the self and of others. The third, and probably the most complicated, problem is the scientific identification and authentication of the deity. This not only provides the choosers with full power and control to set their life orders but also establishes perfect physical and metaphysical links as a fully coordinated functional energy network.
This thus indicates that choosing an authentic deity is the key point that influences automated, emotional, and behavioral actions altogether, which shapes human perception, personal actions, and life orders. Therefore, we will be considering the existing ‘four types of energy wave end boundary behaviors’, comprising free-end and fixed-end boundary behaviors, as well as boundary behaviors from a denser medium to a less dense medium and from a less dense medium to a denser medium. Consequently, this article aims to demonstrate that the authentication and choice of deity have an important effect on individual psychological, biological and social health. It is hoped that it will encourage new research in the field of authentic energy network connections, to establish the best position and the most correct interrelation connections with self and others without violating the authorized orders and borders of one another, so as to live happier and healthier lives together. In addition, the book ‘Deity and Freedom, Equality, Justice in History, Philosophy, Science’ has more detailed information for those interested in this subject.
Keywords: deity, energy network, power, freedom, equality, justice, happiness, sadness, hope, fear, psychology, biology, sociology
Procedia PDF Downloads 346
19902 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin
Abstract:
Within the past decade, using convolutional neural networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the current developing technology is that images are scarce, with little variation in the gestures being presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. Along with this, current gesture detection programs are only trained on one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this presents a limitation for the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is treated as an operator mapping an input from the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, identifying the alphanumeric that q represents and the language it comes from.
These inputs and outputs, along with the internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, subtracting their means. The data is then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these lines, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
Keywords: convolutional neural networks, deep learning, shallow correctors, sign language
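The centre-and-separate idea behind the corrector can be sketched with a single hyperplane: centre the measurements, take the direction from the mean of correct predictions to the mean of errors, and flag any new measurement whose projection lands on the error side. This omits the Kaiser-rule regularization, whitening and multiple clusters used in the actual method, and the midpoint threshold is an illustrative choice.

```python
def fit_corrector(correct, errors):
    """One-hyperplane corrector sketch over measurement vectors.

    `correct` and `errors` are lists of equal-length feature vectors
    (the sets M and Y from the abstract).
    """
    def mean(vecs):
        return [sum(col) / len(vecs) for col in zip(*vecs)]

    m_c, m_e = mean(correct), mean(errors)
    w = [e - c for c, e in zip(m_c, m_e)]            # discriminant direction
    mid = [(c + e) / 2 for c, e in zip(m_c, m_e)]    # threshold at the midpoint

    def flags_error(x):
        proj = sum(wi * (xi - mi) for wi, xi, mi in zip(w, x, mid))
        return proj > 0.0

    return flags_error
```

In the full method, one such hyperplane per positively correlated cluster bounds the "trusted" region, and anything outside it is reported as an error for correction.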
Procedia PDF Downloads 100
19901 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations
Authors: Zhao Gao, Eran Edirisinghe
Abstract:
The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task in most criminal investigations. The criminal investigation system employs specially trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. With the advancement of deep learning, Recurrent Neural Networks (RNNs) have shown great promise in Natural Language Processing (NLP) tasks, and Generative Adversarial Networks (GANs) have proven to be very effective in image generation. In this study, a trained GAN, conditioned on textual features such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map features extracted from text generated from verbal descriptions onto corresponding facial features. With this, it becomes possible to generate many reasonably accurate alternatives from which the witness can attempt to identify a suspect. This reduces subjectivity in decision making by both the eyewitness and the artist, while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus improving response times and mitigating potentially malicious human intervention. With the publicly available 'CelebFaces Attributes Dataset' (CelebA), supplemented with verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture in order to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images.
Rather than the grid search method, a metaheuristic search based on genetic algorithms is applied to evolve the network, with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute-force approach. Beyond the CelebA training database, further novel test cases are supplied to the network for evaluation: witness reports detailing criminals from Interpol or other law enforcement agencies are fed to the network, and using the descriptions provided, samples are generated and compared with the ground-truth images of the criminals in order to calculate their similarity. Two metrics are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). High scores on these metrics would demonstrate the accuracy of the system, in the hope of proving that the proposed approach can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the ratio of criminal cases that can ultimately be resolved using eyewitness information gathering.
Keywords: RNN, GAN, NLP, facial composition, criminal investigation
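Of the two evaluation metrics, PSNR is straightforward to compute directly from pixel data. A minimal sketch follows; the synthetic images stand in for a ground-truth/generated pair and are not the paper's data:

```python
import numpy as np

def psnr(reference: np.ndarray, generated: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between two images (higher is better)."""
    mse = np.mean((reference.astype(np.float64) - generated.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy 8-bit "ground truth" vs. a lightly corrupted "generated" image.
rng = np.random.default_rng(1)
truth = rng.integers(0, 256, size=(64, 64, 3))
noisy = np.clip(truth + rng.normal(0, 5, truth.shape), 0, 255)

print(round(psnr(truth, noisy), 1))   # a higher value means a closer match
```

SSIM additionally compares local luminance, contrast, and structure, which is why the paper reports both metrics rather than PSNR alone.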
Procedia PDF Downloads 161
19900 Financial Intermediation: A Transaction Two-Sided Market Model Approach
Authors: Carlo Gozzelino
Abstract:
Since the early 2000s, the phenomenon of two-sided markets has been of growing interest in the academic literature, as such markets differ by having cross-side and same-side network effects characterizing the transactions, which makes the analysis different from the traditional seller-buyer setting. Due to such externalities, pricing strategies can be based on subsidizing the participation of one side (considered key for the platform to attract the other side) while recovering the loss on the other side. In recent years, several players in the Italian financial intermediation industry moved from an integrated landscape (i.e., selling their own products) to an open one (i.e., intermediating third-party products). According to the academic literature, such behavior can be interpreted as a merchant's move towards a platform operating in a two-sided market environment. While several applications of the two-sided market framework are available in the academic literature, the purpose of this paper is to use the two-sided market concept to suggest a new framework applied to financial intermediation. To this extent, a model is developed to show how competitors behave when vertically integrated and how the peculiarities of a two-sided market act as an incentive to disintegrate. Additionally, we show that when all players act as platforms, the dynamics of a two-sided market allow at least one Nash equilibrium to exist, in which platforms of different sizes enjoy positive profit. Finally, empirical evidence from the Italian market is given to sustain, and to challenge, this interpretation.
Keywords: financial intermediation, network externalities, two-sided markets, vertical differentiation
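The cross-side network effect at the heart of such models can be illustrated with a toy fixed-point computation: each side's participation rises with the other side's size and falls with its own price, so cutting one side's price raises participation on both sides. The logistic demand functions and parameter values below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def participation(p_a, p_b, alpha=1.5, beta=1.0, iters=200):
    """Fixed point of two-sided demand: each side's participation grows with
    the other side's size (cross-side network effect) and falls with its price."""
    n_a = n_b = 0.5
    for _ in range(iters):
        n_a = 1 / (1 + np.exp(-(alpha * n_b - p_a)))   # side A demand
        n_b = 1 / (1 + np.exp(-(beta * n_a - p_b)))    # side B demand
    return n_a, n_b

# Subsidising side A (cutting its price) raises side B's participation too,
# via the cross-side externality -- the mechanism behind skewed pricing.
n_a0, n_b0 = participation(0.5, 0.5)    # symmetric pricing
n_a1, n_b1 = participation(-0.5, 0.5)   # side A subsidised
print(f"side-B participation rises from {n_b0:.3f} to {n_b1:.3f}")
```

Whether the subsidy is profitable then depends on how much of the extra B-side participation the platform can monetise, which is what the paper's equilibrium analysis works out.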
Procedia PDF Downloads 160
19899 Consumption and Diffusion Based Model of Tissue Organoid Development
Authors: Elena Petersen, Inna Kornienko, Svetlana Guryeva, Sergey Simakov
Abstract:
In vitro organoid cultivation requires the simultaneous provision of the necessary vascularization and nutrient perfusion of cells during organoid development. However, many aspects of this problem are still unsolved. The functionality of vascular network intergrowth is limited during the early stages of organoid development, since the vascular network only becomes functional in the final stages of in vitro cultivation. Therefore, a microchannel network should be created in the hydrogel matrix in the early stages of cultivation, aimed at conducting and maintaining the minimally required level of nutrient perfusion for all cells in the expanding organoid. The network configuration should be designed properly in order to exclude hypoxic and necrotic zones in the expanding organoid at all stages of its cultivation. In vitro vascularization is currently the main issue within the field of tissue engineering. As perfusion and oxygen transport have direct effects on cell viability and differentiation, researchers are currently limited to tissues of a few millimeters in thickness. These limitations are imposed by mass transfer and are defined by the balance between the metabolic demand of the cellular components in the system and the size of the scaffold. Current approaches include growth factor delivery, channeled scaffolds, perfusion bioreactors, microfluidics, cell co-cultures, cell functionalization, modular assembly, and in vivo systems. These approaches may improve cell viability or generate capillary-like structures within a tissue construct. Thus, there is a fundamental disconnect between defining the metabolic needs of tissue through quantitative measurements of oxygen and nutrient diffusion and the potential ease of integration into host vasculature for future in vivo implantation.
A model is proposed for growth prognosis of organoid perfusion, based on joint simulations of general nutrient diffusion, nutrient diffusion into the hydrogel matrix through the contact surfaces and microchannel walls, and nutrient consumption by the cells of the expanding organoid, including biomatrix contraction during tissue development, which is associated with the changing consumption rate of the growing organoid's cells. The model allows computing an effective microchannel network design that provides the minimally required level of nutrient concentration in all parts of the growing organoid. It can be used for preliminary planning of microchannel network design and for simulating the nutrient supply rate depending on the stage of organoid development.
Keywords: 3D model, consumption model, diffusion, spheroid, tissue organoid
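The diffusion-consumption balance that determines hypoxic zones can be sketched with a one-dimensional reaction-diffusion equation, dc/dt = D d²c/dx² − k·c, solved by explicit finite differences. The geometry (a slab between two nutrient reservoirs standing in for microchannel walls) and all parameter values are illustrative, not the paper's model:

```python
import numpy as np

# Nutrient transport in a hydrogel slab between two microchannels (c = 1 at
# both boundaries). Illustrative parameters, not measured values.
D, k = 1e-9, 1e-2          # diffusivity (m^2/s), consumption rate (1/s)
L, nx = 1e-3, 101          # 1 mm slab, number of grid points
dx = L / (nx - 1)
dt = 0.4 * dx * dx / D     # stable time step for the explicit scheme
c = np.zeros(nx)
c[0] = c[-1] = 1.0         # reservoir boundary conditions

for _ in range(20000):     # ~800 s of simulated time, enough to reach steady state
    lap = (c[:-2] - 2 * c[1:-1] + c[2:]) / dx**2
    c[1:-1] += dt * (D * lap - k * c[1:-1])
    c[0] = c[-1] = 1.0

print(f"minimum relative concentration at the core: {c.min():.3f}")
```

If the core minimum falls below the viability threshold of the cells, the channel spacing L must shrink: this is the kind of sweep the proposed model runs to size the microchannel network.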
Procedia PDF Downloads 308
19898 TQM Framework Using Notable Authors Comparative
Authors: Redha M. Elhuni
Abstract:
This paper presents an analysis of the essential characteristics of the TQM philosophy by comparing the work of five notable authors in the field. A framework is produced which gathers the identified TQM enablers under the well-known operations management dimensions of process, business and people. These enablers are linked with sustainable development via balanced scorecard type economic and non-economic measures. In order to capture a picture of Libyan companies' efforts to implement TQM, a questionnaire survey is designed and implemented. Results of the survey are presented, showing the main differentiating factors between the sample companies and a way of assessing the difference between the theoretical underpinning and the practitioners' undertakings. Survey results indicate that companies are experiencing much difficulty in translating TQM theory into practice. Only a few companies have successfully adopted a holistic approach to the TQM philosophy, and most of these put relatively high emphasis on hard elements compared with the soft issues of TQM. Moreover, while companies can realize the economic outputs, non-economic benefits such as workflow management, skills development and team learning are not realized. In addition, overall, non-economic measures have secured low weightings compared with the economic measures. We believe that the framework presented in this paper can help a company to concentrate its TQM implementation efforts in terms of the process, system and people management dimensions.
Keywords: TQM, balanced scorecard, EFQM excellence model, oil sector, Libya
Procedia PDF Downloads 405
19897 Game of Funds: Efficiency and Policy Implications of the United Kingdom Research Excellence Framework
Authors: Boon Lee
Abstract:
Research publication is an essential output of universities because it not only promotes university recognition but also attracts government funding. The history of university research culture has been one of 'publish or perish', and universities have consistently encouraged their academics and researchers to produce research articles in reputable journals in order to maintain a level of competitiveness. In turn, United Kingdom (UK) government funding is determined by the number and quality of research publications. This paper aims to investigate whether more government funding leads to more quality papers. To that end, the paper employs a Network DEA model to evaluate UK higher education performance over a period of time. Sources of efficiency are also determined via a second-stage regression analysis.
Keywords: efficiency, higher education, network data envelopment analysis, universities
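The building block that Network DEA extends is the single-stage DEA model, which can be written as a small linear program. A sketch of the input-oriented CCR envelopment form follows; the three hypothetical universities (input: funding, output: quality papers) are illustrative, not the paper's data:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o: minimise theta such that a
    non-negative combination of peers uses at most theta * inputs of o while
    producing at least its outputs. X: inputs (n x m), Y: outputs (n x s)."""
    n, m = X.shape
    s = Y.shape[1]
    # decision vector v = [theta, lambda_1 .. lambda_n]
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.zeros((m + s, n + 1))
    A_ub[:m, 0] = -X[o]          # sum(lambda_j x_j) - theta * x_o <= 0
    A_ub[:m, 1:] = X.T
    A_ub[m:, 1:] = -Y.T          # -sum(lambda_j y_j) <= -y_o
    b_ub = np.r_[np.zeros(m), -Y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

# Hypothetical universities: input = funding, output = quality papers.
X = np.array([[2.0], [4.0], [3.0]])
Y = np.array([[4.0], [4.0], [6.0]])
for o in range(3):
    print(f"unit {o}: efficiency {ccr_efficiency(X, Y, o):.2f}")
```

Unit 1 spends twice the funding of unit 0 for the same output, so its efficiency comes out at 0.5; Network DEA would further decompose such scores across internal stages (e.g. funding → staff → publications).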
Procedia PDF Downloads 114
19896 Analysis and Identification of Different Factors Affecting Students’ Performance Using a Correlation-Based Network Approach
Authors: Jeff Chak-Fu Wong, Tony Chun Yin Yip
Abstract:
The transition from secondary school to university seems exciting for many first-year students but can be more challenging than expected. Enabling instructors to know students' learning habits and styles enhances their understanding of the students' learning backgrounds and allows teachers to provide better support for their students; it therefore has high potential to improve teaching quality and learning, especially in mathematics-related courses. The aim of this research is to collect students' data using online surveys, to analyze student factors using learning analytics and educational data mining, and to discover the characteristics of the students at risk of falling behind in their studies based on their previous academic backgrounds and the collected data. In this paper, we use correlation-based distance methods and mutual information for measuring relationships between student factors. We then develop a factor network using the minimum spanning tree method and consider further analysis of the topological properties of these networks using social network analysis tools. Under the framework of mutual information, two graph-based feature filtering methods, i.e., unsupervised and supervised infinite feature selection algorithms, are used to rank and select appropriate subsets of features from the students' data, yielding effective results in identifying the factors affecting students at risk of failing. This discovered knowledge may help students as well as instructors enhance educational quality by finding possible under-performers at the beginning of the first semester and paying more special attention to them in order to support their learning process and improve their learning outcomes.
Keywords: students' academic performance, correlation-based distance method, social network analysis, feature selection, graph-based feature filtering method
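The two central steps, converting correlations into distances and building a minimum spanning tree over the factors, can be sketched as follows. The correlation distance d = sqrt(2(1 − ρ)) is a standard choice; the three synthetic "survey factors" are illustrative, not the study's data:

```python
import numpy as np

def correlation_distance(data):
    """d_ij = sqrt(2 * (1 - rho_ij)): zero for perfectly correlated factors,
    ~sqrt(2) for uncorrelated ones."""
    rho = np.corrcoef(data, rowvar=False)
    return np.sqrt(np.clip(2.0 * (1.0 - rho), 0.0, None))

def minimum_spanning_tree(dist):
    """Prim's algorithm; returns the MST as a list of (i, j) edges."""
    n = dist.shape[0]
    in_tree = [0]
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and (best is None or dist[i, j] < dist[best[0], best[1]]):
                    best = (i, j)
        edges.append(best)
        in_tree.append(best[1])
    return edges

# Hypothetical factors: hours studied, attendance (correlated), one unrelated.
rng = np.random.default_rng(2)
hours = rng.normal(size=200)
data = np.c_[hours, hours + 0.1 * rng.normal(size=200), rng.normal(size=200)]
print(minimum_spanning_tree(correlation_distance(data)))
```

The MST links the two correlated factors first, so clusters of related factors appear as tight branches of the tree, which is what the subsequent social network analysis inspects.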
Procedia PDF Downloads 129
19895 Static and Dynamic Hand Gesture Recognition Using Convolutional Neural Network Models
Authors: Keyi Wang
Abstract:
Similar to the touchscreen, hand gesture based human-computer interaction (HCI) is a technology that could allow people to perform a variety of tasks faster and more conveniently. This paper proposes a training method for an image- and video-based hand gesture recognition system using CNNs (Convolutional Neural Networks). A dataset containing 6 hand gestures is used to train a 2D CNN model, achieving ~98% accuracy. Furthermore, a 3D CNN model is trained on a dataset containing 4 hand gesture video clips, resulting in ~83% accuracy. It is demonstrated that a Cozmo robot loaded with the pre-trained models is able to recognize static and dynamic hand gestures.
Keywords: deep learning, hand gesture recognition, computer vision, image processing
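The paper's models are not reproduced here, but the core 2D CNN operations it relies on (convolution, ReLU, max-pooling) can be sketched in plain NumPy. The edge-detecting kernel and the toy "hand silhouette" image are illustrative assumptions:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (strictly, cross-correlation, as in most CNNs)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# A vertical-edge filter responding to the boundary of a toy "hand" region.
image = np.zeros((8, 8))
image[:, 4:] = 1.0                       # right half "hand", left half background
edge_kernel = np.array([[-1.0, 1.0]])    # fires on a dark-to-bright step
feature_map = max_pool(relu(conv2d(image, edge_kernel)))
print(feature_map)
```

Stacking many such learned filters, with pooling between layers, is exactly what the trained 2D CNN does; the 3D CNN adds a time axis to the kernels so motion across video frames becomes a feature too.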
Procedia PDF Downloads 139
19894 Femtocell Stationed Flawless Handover in High Agility Trains
Authors: S. Dhivya, M. Abirami, M. Farjana Parveen, M. Keerthiga
Abstract:
The development of high-speed railways makes people's lives more and more convenient; meanwhile, handover is the major problem for communication services on high-speed railways. In order to overcome this drawback, the architecture of Long-Term Evolution (LTE) femtocell networks is used to improve network performance, and the deployment of femtocells is key to addressing the bandwidth limitation and coverage issues of conventional mobile network systems. To increase handover performance, this paper proposes a multiple input multiple output (MIMO) assisted handoff (MAHO) algorithm, a technique used in mobile telecom to transfer a mobile phone to a new radio channel with stronger signal strength and improved channel quality.
Keywords: flawless handover, high-speed train, home evolved Node B, LTE, mobile femtocell, RSS
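The general shape of an RSS-triggered handover decision, which signal-strength-based schemes like the one above build on, can be sketched with a hysteresis margin and a time-to-trigger count to suppress ping-pong handovers. The thresholds and RSS traces below are illustrative assumptions, not the paper's algorithm:

```python
def handover_decision(serving_rss, target_rss, hysteresis_db=3.0, trigger_count=3):
    """Hand over only after the target cell's RSS exceeds the serving cell's
    by a hysteresis margin for several consecutive measurements, which
    suppresses ping-pong handovers on a fast-moving train."""
    consecutive = 0
    for serving, target in zip(serving_rss, target_rss):
        if target >= serving + hysteresis_db:
            consecutive += 1
            if consecutive >= trigger_count:
                return True
        else:
            consecutive = 0
    return False

# RSS traces in dBm: the femtocell steadily overtakes the macrocell.
macro = [-80, -82, -84, -86, -88, -90]
femto = [-95, -88, -82, -80, -78, -76]
print(handover_decision(macro, femto))   # True once the margin holds 3 times
```

On a high-speed train the cell dwell time is short, so the trigger count and margin trade handover reliability against handover delay; that tension is what the proposed MAHO algorithm targets.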
Procedia PDF Downloads 473
19893 Practical Techniques of Improving State Estimator Solution
Authors: Kiamran Radjabli
Abstract:
The State Estimator has become an intrinsic part of Energy Management Systems (EMS). The SCADA measurements received from the field are processed by the State Estimator in order to accurately determine the actual operating state of the power system and provide that information to other real-time network applications. All EMS vendors offer State Estimator functionality in their baseline products. However, setting up a State Estimator and ensuring that it consistently produces a reliable solution often consumes a substantial engineering effort. This paper provides generic recommendations and describes a simple practical approach to efficient tuning of the State Estimator, based on working experience with major EMS software platforms and consulting projects in many electrical utilities of the USA.
Keywords: convergence, monitoring, state estimator, performance, troubleshooting, tuning, power systems
Procedia PDF Downloads 156
19892 Chern-Simons Equation in Financial Theory and Time-Series Analysis
Authors: Ognjen Vukovic
Abstract:
The Chern-Simons equation is a cornerstone of topological quantum field theory. A question that is often asked is whether this equation can be successfully applied to the interactions in international financial markets. By analysing time series in financial theory, it is argued that the Chern-Simons equation can be successfully applied to financial time series. This statement is based on one important premise: that financial time series follow fractional Brownian motion. All variants of the Chern-Simons equation and theory are applied and analysed. The movement of financial time series is first analysed topologically. The main idea is that an exchange rate represents a two-dimensional projection of three-dimensional Brownian motion. The main principles of knot theory and topology are applied to the financial time series, and a setting is created so that the Chern-Simons equation can be applied. As the Chern-Simons equation describes small particles, it is multiplied by a magnifying factor to mimic real-world movement. Afterwards, the resulting equation is optimised using Solver. The equation is applied to n financial time series in order to see if it can capture the interaction between them and consequently explain it. This represents a novel approach to financial time series analysis, and it will hopefully direct further research.
Keywords: Brownian motion, Chern-Simons theory, financial time series, econophysics
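The fractional-Brownian-motion premise is itself testable: for fBm, the standard deviation of lagged differences scales as lag^H, so the Hurst exponent H can be estimated from a log-log regression (H ≈ 0.5 for ordinary Brownian motion, H ≠ 0.5 for persistent or anti-persistent series). A minimal sketch, using a synthetic random walk rather than market data:

```python
import numpy as np

def hurst_exponent(series, max_lag=20):
    """Estimate H from the scaling of lagged differences:
    std(x[t+lag] - x[t]) ~ lag**H for fractional Brownian motion."""
    lags = np.arange(2, max_lag)
    tau = [np.std(series[lag:] - series[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(tau), 1)
    return slope

# An ordinary random walk should give H close to 0.5; a genuinely
# fractional series (H != 0.5) would deviate from that value.
rng = np.random.default_rng(3)
walk = np.cumsum(rng.normal(size=5000))
print(f"random walk H ~ {hurst_exponent(walk):.2f}")
```

Running such an estimate on the exchange-rate series in question would verify the premise before the Chern-Simons machinery is applied.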
Procedia PDF Downloads 473
19891 The Morphogenesis of an Informal Settlement: An Examination of Street Networks through the Informal Development Stages Framework
Authors: Judith Margaret Tymon
Abstract:
As cities struggle to incorporate informal settlements into the fabric of urban areas, the focus has often been on the provision of housing. This study explores the underlying structure of street networks, with the goal of understanding the morphogenesis of informal settlements through the lens of the access network. As the stages of development progress from infill to consolidation and eventually, to a planned in-situ settlement, the access networks retain the form of the core segments; however, a majority of street patterns are adapted to a grid design to support infrastructure in the final upgraded phase. A case study is presented to examine the street network in the informal settlement of Gobabis Namibia as it progresses from its initial stages to a planned, in-situ, and permanently upgraded development. The Informal Development Stages framework of foundation, infill, and consolidation, as developed by Dr. Jota Samper, is utilized to examine the evolution of street networks. Data is gathered from historical Google Earth satellite images for the time period between 2003 and 2022. The results demonstrate that during the foundation through infill stages, incremental changes follow similar patterns, with pathways extended, lengthened, and densified as housing is created and the settlement grows. In the final stage of consolidation, the resulting street layout is transformed to support the installation of infrastructure; however, some elements of the original street patterns remain. The core pathways remain intact to accommodate the installation of infrastructure and the creation of housing plots, defining the shape of the settlement and providing the basis of the urban form. The adaptations, growth, and consolidation of the street network are critical to the eventual formation of the spatial layout of the settlement. 
This study will include a comparative analysis of its findings with those of recent research performed by Kamalipour, Dovey, and others regarding incremental urbanism within informal settlements. Further comparisons will also include studies of the street networks of well-established urban centers that have shown links between the morphogenesis of access networks and the eventual spatial layout of the city. The findings of the study can be used to guide and inform strategies for in-situ upgrading and can contribute to the sustainable development of informal settlements.
Keywords: Gobabis Namibia, incremental urbanism, informal development stages, informal settlements, street networks
Procedia PDF Downloads 64
19890 Concept to Enhance the Project Success and Promote the Implementation of Success Factors in Infrastructure Projects
Abstract:
Infrastructure projects are often subjected to delays and cost overruns and are mistakenly described as unsuccessful projects. These projects have many peculiarities, such as public attention, impact on the environment, and being subject to special regulations. They also deal with several stakeholders with different motivations and face unique risks. With this in mind, we need to reconsider our approach to managing them, define their success factors and implement these success factors. Infrastructure projects not only lack a unified definition of project success and of success factors, but also a clear method to implement these factors. This paper investigates this gap and introduces a concept to implement success factors in an efficient way, taking into consideration the specific characteristics of infrastructure projects. This concept consists of six enablers: project organization, project team, project management workflow, contract management, communication and knowledge transfer, and project documentation. These enablers allow other success factors to be efficiently implemented in projects. In conclusion, this paper provides project managers as well as company managers with a tool to define and implement success factors efficiently in their projects, along with upgrading their assets for coming projects. This tool consists of processes and validated checklists to ensure the best use of company resources and knowledge. Due to the special features of infrastructure projects, this tool will be tested in the German infrastructure market. However, it is meant to be adaptable to other markets and industries.
Keywords: infrastructure projects, operative success factors, project success, success factors, transportation projects
Procedia PDF Downloads 128
19889 Description of a Structural Health Monitoring and Control System Using Open Building Information Modeling
Authors: Wahhaj Ahmed Farooqi, Bilal Ahmad, Sandra Maritza Zambrano Bernal
Abstract:
From the viewpoint of structural engineering, monitoring structural responses over time is of great importance with respect to recent developments in construction technologies. Recently, the development of advanced computing tools has enabled researchers to better execute structural health monitoring (SHM) and control systems. In the last decade, building information modeling (BIM) has substantially enhanced the workflow of planning and operating engineering structures. Typically, building information can be stored and exchanged via model files that are based on the Industry Foundation Classes (IFC) standard. In this study, a modeling approach for semantic modeling of SHM and control systems is integrated into the BIM methodology using the IFC standard. For validation of the modeling approach, a laboratory test structure, a four-story shear frame structure, is modeled using a conventional BIM software tool. An IFC schema extension is applied to describe information related to monitoring and control of a prototype SHM and control system installed on the laboratory test structure. The SHM and control system is described by a semantic model applying the Unified Modeling Language (UML), and the semantic model is subsequently mapped into the IFC schema. The test structure is composed of four aluminum slabs, and the plate-to-column connections are fully fixed. In the center of the top story, a semi-active tuned liquid column damper (TLCD) is installed; the TLCD is used to reduce structural responses in the context of dynamic vibration and displacement. The wireless prototype SHM and control system is composed of wireless sensor nodes. For testing the SHM and control system, the acceleration response is automatically recorded by the sensor nodes, which are equipped with accelerometers, and analyzed using embedded computing.
As a result, SHM and control systems can be described within open BIM, and dynamic responses and damage information can be stored, documented, and exchanged on the formal basis of the IFC standard.
Keywords: structural health monitoring, open building information modeling, industry foundation classes, unified modeling language, semi-active tuned liquid column damper, nondestructive testing
Procedia PDF Downloads 151
19888 A Comparison between Fuzzy Analytic Hierarchy Process and Fuzzy Analytic Network Process for Rationality Evaluation of Land Use Planning Locations in Vietnam
Authors: X. L. Nguyen, T. Y. Chou, F. Y. Min, F. C. Lin, T. V. Hoang, Y. M. Huang
Abstract:
In Vietnam, land use planning is utilized as an efficient tool for local governments to adjust land use. However, planned locations face disapproval from people who live near these planned sites because of environmental problems. The selection of these locations is normally based on the subjective opinion of decision-makers and is not supported by any scientific method. Many researchers have applied Multi-Criteria Analysis (MCA) methods, of which the Analytic Hierarchy Process (AHP) is the most popular technique, in combination with Fuzzy set theory, for the rationality assessment of land use planning locations. In this research, Fuzzy set theory and the Analytic Network Process (ANP) multi-criteria technique were used for the assessment process. The Fuzzy Analytic Hierarchy Process was also utilized, and the outputs of the two methods were compared to extract the differences. The 20 planned landfills in Hung Ha district, Thai Binh province, Vietnam were selected as a case study. The comparison results indicate that the weights computed by the AHP and ANP methods differ, and the assessment outputs produced by the two methods also show slight differences. After evaluation of the existing planned sites, some potential locations were suggested to the local government for possible land use planning adjustments.
Keywords: Analytic Hierarchy Process, Analytic Network Process, Fuzzy set theory, land use planning
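On the Fuzzy AHP side, criterion weights can be derived from a pairwise comparison matrix of triangular fuzzy numbers via Buckley's geometric-mean method. The three criteria and the fuzzy judgments below are illustrative, not the study's data:

```python
import numpy as np

# Pairwise comparisons as triangular fuzzy numbers (l, m, u); rows/columns
# are criteria c1..c3, and reciprocal entries mirror the upper triangle.
comparisons = np.array([
    # c1 vs c1          c1 vs c2        c1 vs c3
    [[1, 1, 1],         [2, 3, 4],      [4, 5, 6]],
    [[1/4, 1/3, 1/2],   [1, 1, 1],      [1, 2, 3]],
    [[1/6, 1/5, 1/4],   [1/3, 1/2, 1],  [1, 1, 1]],
])

# Fuzzy geometric mean of each row, component-wise over (l, m, u).
r = np.prod(comparisons, axis=1) ** (1.0 / comparisons.shape[0])
# Fuzzy weight w_i = r_i (x) (sum r)^(-1); dividing TFNs uses the
# reversed total: (l, m, u) / (L, M, U) = (l/U, m/M, u/L).
total = r.sum(axis=0)
fuzzy_w = r / total[::-1]
# Defuzzify by the centroid (mean of l, m, u) and normalise.
crisp = fuzzy_w.mean(axis=1)
weights = crisp / crisp.sum()
print(np.round(weights, 3))
```

The fuzzy ANP variant used in the paper embeds such local weights in a supermatrix that also captures dependencies between criteria, which is why its weights differ from the AHP ones.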
Procedia PDF Downloads 421
19887 Rheolaser: Light Scattering Characterization of Viscoelastic Properties of Hair Cosmetics That Are Related to Performance and Stability of the Respective Colloidal Soft Materials
Authors: Heitor Oliveira, Gabriele De-Waal, Juergen Schmenger, Lynsey Godfrey, Tibor Kovacs
Abstract:
Rheolaser MASTER™ makes use of the multiple scattering of light, caused by scattering objects in a continuous medium (such as droplets and particles in colloids), to characterize the viscoelasticity of soft materials. It offers an alternative to conventional rheometers for characterizing the viscoelasticity of products such as hair cosmetics. Up to six measurements at controlled temperature can be carried out simultaneously (10-15 min), and the method requires only minor sample preparation work. In contrast to conventional rheometer-based methods, no mechanical stress is applied to the material during the measurements; therefore, the properties of the exact same sample can be monitored over time, as in aging and stability studies. We determined the elastic index (EI) of water/emulsion mixtures (1 ≤ fat alcohols (FA) ≤ 5 wt%) and emulsion/gel-network mixtures (8 ≤ FA ≤ 17 wt%) and compared it with the elastic/storage modulus (G') of the respective samples, measured using a TA conventional rheometer with flat-plate geometry. As expected, it was found that log(EI) vs log(G') presents a linear behavior. Moreover, log(EI) increased in a linear fashion with solids level over the entire range of compositions (1 ≤ FA ≤ 17 wt%), while rheometer measurements were limited to samples with solids levels down to 4 wt%; a concentric cylinder geometry would be required for more diluted samples (FA < 4 wt%), and rheometer results from different sample holder geometries are not comparable. The plot of the Rheolaser output parameters solid-liquid balance (SLB) vs EI was suitable for monitoring product aging processes. These data could quantitatively describe observations such as the formation of lumps over aging time. Moreover, this method allowed us to identify that the different specifications of a key raw material (RM < 0.4 wt%) in the respective gel-network (GN) product have minor impact on product viscoelastic properties and are not consumer perceivable after a short aging time.
Broadening of an RM spec range typically has a positive impact on cost savings. Last but not least, the photon path length (λ*), which is proportional to droplet size and inversely proportional to the volume fraction of scattering objects according to Mie theory, together with the EI, was suitable for characterizing product destabilization processes (e.g., coalescence and creaming) and for predicting product stability about eight times faster than our standard methods. Using these parameters, we could successfully identify formulation and process parameters that resulted in unstable products. In conclusion, Rheolaser allows quick and reliable characterization of the viscoelastic properties of hair cosmetics that are related to their performance and stability. It operates over a broad range of product compositions and has applications spanning from the formulation of our hair cosmetics to fast release criteria in our production sites. Last but not least, this powerful tool has a positive impact on R&D development time, enabling faster delivery of new products to the market, and consequently on cost savings.
Keywords: colloids, hair cosmetics, light scattering, performance and stability, soft materials, viscoelastic properties
Procedia PDF Downloads 172
19886 Unsupervised Images Generation Based on Sloan Digital Sky Survey with Deep Convolutional Generative Neural Networks
Authors: Guanghua Zhang, Fubao Wang, Weijun Duan
Abstract:
Convolutional neural networks (CNNs) have attracted more and more attention in recent years, especially in the fields of computer vision and image classification. However, unsupervised learning with CNNs has received less attention than supervised learning. In this work, we use a powerful tool, deep convolutional generative adversarial networks (DCGANs), to generate images from the Sloan Digital Sky Survey. Trained on various star and galaxy images, both the generator and the discriminator prove suitable for unsupervised learning. In this paper, we also conducted several experiments to choose the best values for the hyper-parameters, which helped to stabilize the training process and ensure a good quality of the output.
Keywords: convolution neural network, discriminator, generator, unsupervised learning
Procedia PDF Downloads 268