Search results for: space detection to first responders
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6999

5169 Location Choice: The Effects of Network Configuration upon the Distribution of Economic Activities in the Chinese City of Nanning

Authors: Chuan Yang, Jing Bie, Zhong Wang, Panagiotis Psimoulis

Abstract:

Contemporary studies investigating the association between the spatial configuration of the urban network and economic activities at the street level have mostly been conducted within the space syntax conceptual framework. Their findings support the theory of the 'movement economy' and demonstrate the impact of street configuration on the distribution of pedestrian movement and on land-use shaping, especially retail activities. However, the effects vary between different urban contexts. In this paper, the relationship between economic activity distribution and urban configurational characteristics was examined at the segment level. The study area includes three neighbourhood types (urban, suburban, and rural), and across all neighbourhoods three urban network forms ('tree-like', grid, and organic) were recognised. To investigate the nested effects of urban configuration, measured with the space syntax approach, and of urban context, multilevel zero-inflated negative binomial (ZINB) regression models were constructed. Additionally, to account for spatial autocorrelation, a spatial lag term was included in the model as an independent variable. The random-effect ZINB model shows superiority over the plain ZINB model and the multilevel linear (ML) model in explaining how economic activity patterns take shape over the urban environment. After adjusting for neighbourhood type and network form effects, connectivity and syntactic centrality significantly affect the clustering of economic activities. A comparison between accumulated and newly established economic activities illustrates their different preferences in location choice.
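
As a rough illustration of the modelling step described above, the sketch below fits a single-level ZINB regression with a spatial-lag covariate using statsmodels; the input file segments.csv and its column names (activity_count, connectivity, integration, spatial_lag) are hypothetical, and a true random-effect (multilevel) ZINB of the kind used in the paper would require more specialised software.

```python
# Minimal sketch (not the authors' code): a zero-inflated negative binomial
# (ZINB) model of segment-level activity counts with space-syntax measures and
# a spatial-lag term as covariates. Column names and the CSV are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

df = pd.read_csv("segments.csv")  # hypothetical segment-level dataset
X = sm.add_constant(df[["connectivity", "integration", "spatial_lag"]])

model = ZeroInflatedNegativeBinomialP(df["activity_count"], X, exog_infl=X, p=2)
result = model.fit(maxiter=200, disp=False)
print(result.summary())
```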

Keywords: space syntax, economic activities, multilevel model, Chinese city

Procedia PDF Downloads 122
5168 Surface Elevation Dynamics Assessment Using Digital Elevation Models, Light Detection and Ranging, GPS and Geospatial Information Science Analysis: Ecosystem Modelling Approach

Authors: Ali K. M. Al-Nasrawi, Uday A. Al-Hamdany, Sarah M. Hamylton, Brian G. Jones, Yasir M. Alyazichi

Abstract:

Surface elevation dynamics have always responded to disturbance regimes. Creating Digital Elevation Models (DEMs) to detect surface dynamics has led to the development of several methods, devices and data clouds. DEMs can provide accurate and quick results cost-efficiently, in comparison to traditional geomatics survey techniques. Nowadays, remote sensing datasets, including LiDAR point clouds combined with GIS analytic tools, have become a primary source for creating DEMs. However, these data need to be tested for error detection and correction. This paper evaluates DEMs derived from different data sources over time for Apple Orchard Island, a coastal site in southeastern Australia, in order to detect surface dynamics. Subsequently, 30 chosen locations were examined in the field to test the error of the DEM surface detection using high-resolution global positioning systems (GPS). Results show significant surface elevation changes on Apple Orchard Island: accretion occurred on most of the island, while surface elevation loss due to erosion is limited to the northern and southern parts. Concurrently, a differential correction and validation procedure was applied to identify errors in the dataset. The resultant DEMs showed a small error ratio (≤ 3%) against the fieldwork survey carried out with RTK-GPS. As modern modelling approaches need to become more effective and accurate, applying several tools to create different DEMs on a multi-temporal scale would allow predictions within practical time and cost frames, with more comprehensive coverage and greater accuracy. With a DEM technique applied in an eco-geomorphic context, such insights into ecosystem dynamics at a coastal intertidal system are valuable for assessing the accuracy of the predicted eco-geomorphic risk and for the sustainability of conservation management. This framework for evaluating historical and current anthropogenic and environmental stressors on coastal surface elevation dynamics could be profitably applied worldwide.
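
A minimal sketch of the DEM-differencing idea, assuming two co-registered rasters on the same grid and vertical datum; the file names are placeholders, and the error check against RTK-GPS points is omitted.

```python
# Illustrative sketch only: differencing two co-registered DEMs to map surface
# elevation change (accretion vs. erosion). File names are placeholders.
import numpy as np
import rasterio

with rasterio.open("dem_2005.tif") as a, rasterio.open("dem_2015.tif") as b:
    z0 = a.read(1, masked=True).filled(np.nan).astype(float)
    z1 = b.read(1, masked=True).filled(np.nan).astype(float)

dod = z1 - z0                        # DEM of difference (m)
accretion = np.nansum(dod[dod > 0])  # total positive change
erosion = np.nansum(dod[dod < 0])    # total negative change
print(f"mean change {np.nanmean(dod):.3f} m, accretion {accretion:.1f}, erosion {erosion:.1f}")
```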

Keywords: DEMs, eco-geomorphic-dynamic processes, geospatial information science, remote sensing, surface elevation changes

Procedia PDF Downloads 266
5167 Robust and Dedicated Hybrid Cloud Approach for Secure Authorized Deduplication

Authors: Aishwarya Shekhar, Himanshu Sharma

Abstract:

Data deduplication is one of the most important data compression techniques for eliminating duplicate copies of repeating data, and it has been widely used in cloud storage to reduce the amount of storage space and save bandwidth. In this process, duplicate data is expunged so that only one copy, a single instance of the data, is actually stored, although indexing of every data item is still maintained. Data deduplication is thus an approach for minimizing the storage space an organization requires to retain its data. In most organizations, the storage systems hold identical copies of numerous pieces of data; deduplication eliminates these extra copies by saving just one copy of the data and replacing the other copies with pointers that refer back to the primary copy. To avoid this duplication of data and to preserve confidentiality in the cloud, we apply the concept of a hybrid cloud, which is a combination of at least one public and one private cloud. As a proof of concept, we implement a Java prototype which provides security as well as removing all types of duplicated data from the cloud.
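
The following toy sketch illustrates the single-instance storage idea described above (one stored copy per unique block, with pointers referring back to it); it is not the authors' hybrid-cloud Java implementation, and the block size and file contents are arbitrary.

```python
# Toy sketch of single-instance (deduplicated) storage: each unique block is
# stored once under its SHA-256 digest, and every file keeps only pointers
# (digests) back to the primary copies.
import hashlib

block_store = {}   # digest -> block data (the single stored instance)
file_index = {}    # filename -> list of digests (pointers)

def store_file(name: str, data: bytes, block_size: int = 4096) -> None:
    pointers = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        block_store.setdefault(digest, block)   # keep only one copy per block
        pointers.append(digest)
    file_index[name] = pointers

store_file("a.txt", b"hello world" * 1000)
store_file("b.txt", b"hello world" * 1000)       # duplicate content adds no new blocks
print(len(block_store), "unique blocks for", len(file_index), "files")
```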

Keywords: confidentiality, deduplication, data compression, hybrid cloud

Procedia PDF Downloads 377
5166 Modular Power Bus for Space Vehicles (MPBus)

Authors: Eduardo Remirez, Luis Moreno

Abstract:

The rapid growth of the private satellite launcher sector is leading the space race. With the privatization of the sector, companies are racing for more efficient and reliable ways to place satellites in orbit. Having identified the current needs for power management in the launcher vehicle industry, the Modular Power Bus is proposed as a technology to revolutionize power management in current and future launcher vehicles. The MPBus project is committed to developing a new power bus architecture that combines ejectable batteries with the main bus through intelligent nodes. These nodes are able to communicate among themselves and with a battery controller using an improved data-over-DC-line technology, which is expected to reduce the total weight in two main areas: better use of the batteries and a lighter harness. This would result in less weight for each launch stage, increasing the operational satellite payload and reducing cost. These features make the system suitable for a number of launchers.

Keywords: modular power bus, launcher vehicles, ejectable batteries, intelligent nodes

Procedia PDF Downloads 477
5165 Nano-Sensors: Search for New Features

Authors: I. Filikhin, B. Vlahovic

Abstract:

We focus on a novel type of detection based on the electron tunneling properties of double nanoscale structures in semiconductor materials. Semiconductor heterostructures such as quantum wells (QWs), quantum dots (QDs), and quantum rings (QRs) may have an energy level structure of several hundred electron confinement states. The single-electron spectra of the double quantum objects (DQW, DQD, and DQR) were studied in our previous works in relation to electron localization and tunneling between the objects. The wave function of an electron may be localized in one of the QDs or be delocalized when it is spread over the whole system. Localizing–delocalizing tunneling occurs when an electron transition between both states is possible. The tunneling properties of the spectra differ strongly for “regular” and “chaotic” systems. We have shown that a small violation of the geometry drastically affects the localization of the electron; in particular, such violations lead to the elimination of the delocalized states of the system. The same symmetry-violation effect occurs if electric or magnetic fields are applied. These phenomena could be used to propose a new type of detection based on the high sensitivity of charge transport between double nanostructures to small violations of their shapes. It may have significant technological implications.

Keywords: double quantum dots, single electron levels, tunneling, electron localizations

Procedia PDF Downloads 500
5164 Language Errors Used in “The Space between Us” Movie and Their Effects on Translation Quality: Translation Study toward Discourse Analysis Approach

Authors: Mochamad Nuruz Zaman, Mangatur Rudolf Nababan, M. A. Djatmika

Abstract:

Both society and education teach people to communicate well in order to build up interpersonal skills. Everyone has a different capacity to understand something new, with either good comprehension or poor understanding, and poor understanding produces language errors when people interact for the first time without prior knowledge of each other. 'The Space between Us' delivers a love-adventure story between a Mars boy and an Earth girl, with many misunderstood conversations caused by their different climates and environments, so moviegoers must rely on the subtitles to enjoy the film. Furthermore, the Indonesian subtitles and the English dialogue in the movie still show overlapping understanding in the translation. Translation here consists of the source language (SL, the English dialogue) and the target language (TL, the Indonesian subtitles). This research gap is formulated in the research questions of how the language errors occur in the movie and what their effects on translation quality are, analysed in depth through a translation study with a discourse analysis approach. The research goal is to describe the language errors and their translation qualities in order to create a good atmosphere in movie media. The study uses an embedded qualitative research design. The research locations consist of setting, participants, and events as a focused, determined boundary. The sources of data are 'The Space between Us' movie and informants (translation quality raters). The sampling is criterion-based (purposive) sampling. Data collection techniques use content analysis and questionnaires. Data validation applies data source and method triangulation. Data analysis covers domain, taxonomy, componential, and cultural theme analysis. The findings show that the language errors occurring in the movie are referential, register, society, textual, receptive, expressive, individual, group, analogical, transfer, local, and global errors. The discussion of their effects on translation quality centres on the translation techniques identified in the data: amplification, borrowing, description, discursive creation, established equivalent, generalization, literal translation, modulation, particularization, reduction, substitution, and transposition.

Keywords: discourse analysis, language errors, The Space between Us movie, translation techniques, translation quality instruments

Procedia PDF Downloads 217
5163 The Reach of Shopping Center Layout Form on Subway Based on Kernel Density Estimate

Authors: Wen Liu

Abstract:

With the rapid progress of modern cities, railway construction has been developing quickly in China. In a typically high-density country, shopping centers on the subway are an important factor in the process of urban development. This paper discusses the influence of the layout of shopping centers on the subway, placing it on the time and space axes of Shanghai's urban development. We use digital technology to establish a database of the relevant information and then derive the changing role of shopping centers on the subway in Shanghai through kernel density estimation. The results show that the development of shopping centers on the subway is related to local economic strength, population size, policy support, and city construction, and that the suburbanization trend of shopping centers will become increasingly significant. This case study shows that the kernel density estimate is an efficient analysis method for spatial layout: it can reveal the essential characteristics of the layout form of shopping centers on the subway and can also be applied to other research on spatial form.
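
For illustration, a kernel density estimate of the kind used in the paper can be computed with SciPy as sketched below; the shopping-center coordinates are invented, and the real analysis would use the geocoded Shanghai database described above.

```python
# Simple sketch of a kernel density estimate over shopping-center locations
# (coordinates are made up); the same idea underlies mapping how density along
# subway lines changes over time.
import numpy as np
from scipy.stats import gaussian_kde

xy = np.array([[121.47, 31.23], [121.48, 31.24], [121.51, 31.22],
               [121.44, 31.20], [121.49, 31.25]]).T   # lon/lat pairs, shape (2, n)
kde = gaussian_kde(xy, bw_method=0.3)

grid_x, grid_y = np.mgrid[121.40:121.55:100j, 31.18:31.28:100j]
density = kde(np.vstack([grid_x.ravel(), grid_y.ravel()])).reshape(grid_x.shape)
print("peak density cell:", np.unravel_index(density.argmax(), density.shape))
```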

Keywords: Shanghai, shopping center on the subway, layout form, Kernel density estimate

Procedia PDF Downloads 311
5162 Effect of Urbanization on Basic Environmental Components

Authors: Sehba Saleem

Abstract:

A country covering only 2.4 percent of the total land surface area of the world, India is home to 17.5 percent of the world's population. This fact is sufficient to delineate, and simultaneously bring to the fore, the paradox which exists between land and human population. The relation between the two is evidently an unequal one: the latter has the ability to multiply itself, while the former remains constant. This unequal relation has contributed significantly to the depletion of the quality of land, because construction of every kind and nature has been forced onto the land to assimilate the ever-increasing population, which has altered not only the land but also the environment which existed on it. To understand this alteration, it becomes imperative to delve into concepts like urbanization, ecology and their amalgam, urban ecology. The concept of urban ecology does not only involve the study of the buildings, flora, and fauna which exist in a given land space; it goes further into establishing a relation between construction on land and the consequent harm which this causes to environmental resources like air and water. This paper attempts to examine the concepts of urbanization, ecology and urban ecology in the light of the relation which exists between man and nature.

Keywords: asymmetrical growth, environment, urbanisation, urban space

Procedia PDF Downloads 327
5161 Disaster Management Using Wireless Sensor Networks

Authors: Akila Murali, Prithika Manivel

Abstract:

Disasters are defined as a serious disruption of the functioning of a community or a society, involving widespread human, material, economic or environmental impacts. The number of people suffering food crises as a result of natural disasters has tripled in the last thirty years. Economic losses due to natural disasters have increased by a factor of eight over the past four decades, caused by the increased vulnerability of the global society and by an increase in the number of weather-related disasters. Efficient disaster detection and alerting systems could reduce the loss of life and property. In the event of a disaster, another important issue is a good search and rescue system with high levels of precision, timeliness and safety for both the victims and the rescuers. Wireless Sensor Network technology has the capability of quickly capturing, processing, and transmitting critical data in real time with high resolution. This paper studies the capacity of sensors and a Wireless Sensor Network to collect, collate and analyze valuable and worthwhile data in an ordered manner to help with disaster management.

Keywords: alerting systems, disaster detection, Ad Hoc network, WSN technology

Procedia PDF Downloads 403
5160 Assessment of Airtightness Through a Standardized Procedure in a Nearly-Zero Energy Demand House

Authors: Mar Cañada Soriano, Rafael Royo-Pastor, Carolina Aparicio-Fernández, Jose-Luis Vivancos

Abstract:

The lack of insulation, along with the existence of air leakages, has a meaningful impact on the energy performance of buildings. Both lead to increases in the energy demand through additional heating and/or cooling loads, and they also cause thermal discomfort. In order to quantify these uncontrolled air currents, pressurization and depressurization tests can be performed. Among them, the Blower Door test is a standardized procedure to determine the airtightness of a space; it characterizes the rate of air leakage through the envelope surface by calculating an air flow rate indicator. Low-energy buildings complying with the Passive House design criteria are required to achieve high levels of airtightness. Due to the invisible nature of air leakages, additional tools are often needed to identify where the infiltrations take place; among them, infrared thermography is a valuable technique for this purpose since it enables their detection. The aim of this study is to assess, by means of the blower door test, the airtightness of a typical Mediterranean dwelling house located in the Valencian orchard (Spain) and restored under the Passive House standard. Moreover, the building energy performance modelling tools TRNSYS (TRaNsient System Simulation program) and TRNFlow (TRaNsient Flow) have been used to determine its energy performance, and the identification of infiltrations was carried out by means of infrared thermography. The low levels of infiltration obtained suggest that this house may comply with the Passive House standard.
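
As a hedged illustration of standard blower-door data reduction (not the paper's TRNSYS/TRNFlow model), the sketch below fits the power law Q = C·ΔPⁿ to example readings, evaluates the flow at 50 Pa and converts it to an air change rate; the pressure and flow values and the house volume are invented.

```python
# Hedged sketch of blower-door data reduction: fit the power law Q = C * dP**n
# to measured points, evaluate the flow at 50 Pa and convert it to air changes
# per hour (n50). All numbers below are example values, not the paper's data.
import numpy as np

dP = np.array([20, 30, 40, 50, 60])       # pressure difference (Pa)
Q = np.array([190, 245, 290, 330, 365])   # measured air flow (m3/h)

n, logC = np.polyfit(np.log(dP), np.log(Q), 1)   # log-linear fit of the power law
Q50 = np.exp(logC) * 50 ** n                     # flow rate at 50 Pa
volume = 320.0                                   # heated volume of the house (m3), assumed
print(f"flow exponent n = {n:.2f}, n50 = {Q50 / volume:.2f} 1/h")
```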

Keywords: airtightness, blower door, TRNFlow, infrared thermography

Procedia PDF Downloads 120
5159 Contextual Toxicity Detection with Data Augmentation

Authors: Julia Ive, Lucia Specia

Abstract:

Understanding and detecting toxicity is an important problem to support safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use “toxicity” as an umbrella term to denote a number of variants commonly named in the literature, including hate, abuse and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These previous studies have analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available does not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear words, racist slurs, etc.), so that context is not needed for a decision, or are ambiguous, vague or unclear even in the presence of context; in addition, the data contains labelling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious without context (i.e., covert cases) or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). On the contextual detection models, we posit that their poor performance is due to limitations both of the data they are trained on (the same problems stated above) and of the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking our models against previous ones on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual, or contextual but agnostic of the conversation structure.
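
As a simplified illustration of context-aware classification (not the authors' hierarchical architecture), the sketch below feeds the conversational context and the target tweet as a sentence pair to a pretrained transformer classifier; the checkpoint name and the label ordering are placeholders.

```python
# Simplified context-aware baseline, not the paper's hierarchical model: the
# target tweet and its conversational context are fed as a sentence pair to a
# pretrained transformer classifier. Checkpoint and label order are assumed.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-uncased"                                   # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

context = "previous tweets in the same thread ..."
target = "the tweet being classified"
inputs = tokenizer(context, target, truncation=True, return_tensors="pt")

with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print("P(toxic) =", probs[0, 1].item())                      # label index 1 = toxic (assumed)
```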

Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing

Procedia PDF Downloads 167
5158 An Analysis of the Panel’s Perceptions on Cooking in “Metaverse Kitchen”

Authors: Minsun Kim

Abstract:

This study uses the concepts of augmented reality, virtual reality, the mirror world, and lifelogging to describe the “Metaverse Kitchen”, which can be defined as a space in the virtual world where users can cook the dishes they want using meal kits regardless of location or time. The study examined experts' perceptions of cooking and food delivery services using the “Metaverse Kitchen”. A consensus opinion on the concept and the potential pros and cons of the “Metaverse Kitchen” was derived from 20 culinary experts through the Delphi technique; three Delphi rounds were conducted over one month, from December 2022 to January 2023. The results are as follows. First, users select and cook food after visiting the “Metaverse Kitchen” in the virtual space. Second, when a user cooks in the “Metaverse Kitchen” in AR or VR, the information is transmitted to nearby restaurants. Third, the platform operating the “Metaverse Kitchen” assigns the order, among these restaurants, to the one that can first provide the meal kit the user cooked in the virtual space. Fourth, the user pays through the “Metaverse Kitchen”, and the restaurant delivers the cooked meal kit to the user and then receives payment for the user's meal and the delivery fee from the platform. Fifth, the platform company that operates the mirror-world “Metaverse Kitchen” uses lifelogging to manage customers; it receives commissions from users and affiliated restaurants and operates a virtual restaurant business using meal kits. Among the selection attributes of the meal kits provided in the “Metaverse Kitchen”, the panelists suggested convenience, quality, and reliability as advantages and predicted a relatively high price as a disadvantage. The “Metaverse Kitchen” using meal kits is expected to form a new food supply system in the future society. In follow-up studies, an empirical analysis targeting producers and consumers is required.

Keywords: metaverse, meal kits, Delphi technique, Metaverse Kitchen

Procedia PDF Downloads 216
5157 Comparison of Techniques for Detection and Diagnosis of Eccentricity in the Air-Gap Fault in Induction Motors

Authors: Abrahão S. Fontes, Carlos A. V. Cardoso, Levi P. B. Oliveira

Abstract:

Induction motors are used worldwide in various industries, and several maintenance techniques are applied to increase their operating time and lifespan. Among these, predictive maintenance techniques such as Motor Current Signature Analysis (MCSA), Motor Square Current Signature Analysis (MSCSA), Park's Vector Approach (PVA) and Park's Vector Square Modulus (PVSM) are used to detect and diagnose faults in electric motors, which are characterized by patterns in the stator current frequency spectrum. In this article, these techniques are applied and compared on a real motor with an air-gap eccentricity fault. A theoretical model of an induction motor without fault was used to assist the comparison between the stator current frequency spectrum patterns with and without faults. Metrics were proposed and applied to evaluate the fault detection sensitivity of each technique. The results presented here show that the above techniques are suitable for the air-gap eccentricity fault, and their comparison demonstrated the suitability of each one.
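
The sketch below illustrates the core MCSA step on a synthetic signal: computing the stator-current spectrum with an FFT and inspecting the band around the supply frequency where eccentricity-related sidebands appear; the signal and the sideband values are invented for illustration.

```python
# Illustrative MCSA step (synthetic signal): compute the stator-current
# spectrum with an FFT and inspect the sidebands around the supply frequency,
# where air-gap eccentricity leaves its characteristic pattern.
import numpy as np

fs, f_supply = 10_000, 50.0                      # sampling and supply frequency (Hz)
t = np.arange(0, 2.0, 1 / fs)
current = np.sin(2 * np.pi * f_supply * t)       # fundamental component
current += 0.02 * np.sin(2 * np.pi * (f_supply + 25) * t)   # synthetic sideband

spectrum = np.abs(np.fft.rfft(current * np.hanning(len(current))))
freqs = np.fft.rfftfreq(len(current), 1 / fs)
band = (freqs > 20) & (freqs < 100)              # window around the supply frequency
print("dominant components (Hz):", freqs[band][np.argsort(spectrum[band])[-3:]])
```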

Keywords: eccentricity in the air-gap, fault diagnosis, induction motors, predictive maintenance

Procedia PDF Downloads 343
5156 Detection of Biomechanical Stress for the Prevention of Disability Derived from Musculoskeletal Disorders

Authors: Leydi Noemi Peraza Gómez, Jose Álvarez Nemegyei, Damaris Francis Estrella Castillo

Abstract:

In order to have an epidemiological tool to detect the biomechanical stress imposed by physical labor or recreational activities, a questionnaire (ERGO-Mex) was constructed in Spanish, validated, and culturally adapted to the Mayan indigenous population of Yucatan. Following the seven steps proposed by Guillemin and Beaton, the procedure was: initial translation, synthesis of the translations, back-translation, review by a committee of experts, pre-test of the preliminary version, and presentation of the results to the committee of experts and members of the community, followed finally by the evaluation of its internal consistency (Cronbach's α coefficient) and external validity (intraclass correlation coefficient). The results for the validation in Spanish indicated that 45% of the participants have biomechanical stress. The ERGO-Mex correlation was 0.69 (p < 0.0001). Subjects with high biomechanical stress had a higher score than subjects with low biomechanical stress (17.4 ± 8.9 vs. 9.8 ± 2.8, p = 0.003). Cronbach's α coefficient was 0.92; for the validation in Maya, Cronbach's α was 0.82 and the ICC was 0.70 (95% CI: 0.58-0.79; p < 0.0001). ERGO-Mex is suitable for the early detection of musculoskeletal diseases and for helping to prevent disability.
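
For reference, the reliability statistic reported above can be computed as in the sketch below, which evaluates Cronbach's α from a respondents-by-items score matrix; the scores shown are invented, not the ERGO-Mex data.

```python
# Minimal sketch of the reliability statistic used in the validation:
# Cronbach's alpha from a respondents-by-items score matrix (values invented).
import numpy as np

scores = np.array([[3, 4, 4, 2, 5],
                   [2, 3, 3, 2, 4],
                   [4, 5, 4, 3, 5],
                   [1, 2, 2, 1, 3]], dtype=float)   # rows: respondents, cols: items

k = scores.shape[1]
item_var = scores.var(axis=0, ddof=1).sum()          # sum of item variances
total_var = scores.sum(axis=1).var(ddof=1)           # variance of the total score
alpha = k / (k - 1) * (1 - item_var / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```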

Keywords: biomechanical stress, disability, musculoskeletal disorders, prevention

Procedia PDF Downloads 175
5155 Deep Learning Based Fall Detection Using Simplified Human Posture

Authors: Kripesh Adhikari, Hamid Bouchachia, Hammadi Nait-Charif

Abstract:

Falls are one of the major causes of injury and death among elderly people aged 65 and above. A support system to identify such abnormal activities has become extremely important with the increase in the ageing population. Pose estimation is a challenging task, and it becomes even more challenging when it has to be performed on the difficult poses that may occur during a fall. The location of the body provides a clue to where the person is at the time of the fall. This paper presents a vision-based tracking strategy in which the available joints are grouped into three different feature points depending upon the section of the body in which they are located. The three feature points derived from different joint combinations represent the upper or head region, the mid-region or torso, and the lower or leg region. Tracking is always challenging when motion is involved; hence the idea is to locate these regions of the body in every frame and use them as the tracking strategy. Grouping the joints in this way can be beneficial for achieving a stable region for tracking, and the location of the body parts provides crucial information to distinguish normal activities from falls.
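
A minimal sketch of the three-region grouping described above, assuming COCO-style keypoint indices (an assumption, not stated in the paper): joints are averaged into head, torso and leg feature points that can then be tracked frame by frame.

```python
# Sketch of the three-region grouping: COCO-style joints (assumed indexing)
# are averaged into head, torso and leg feature points per frame.
import numpy as np

HEAD, TORSO, LEGS = [0, 1, 2, 3, 4], [5, 6, 11, 12], [13, 14, 15, 16]

def region_points(keypoints: np.ndarray) -> dict:
    """keypoints: (17, 2) array of (x, y) joint locations for one frame."""
    return {
        "head": keypoints[HEAD].mean(axis=0),
        "torso": keypoints[TORSO].mean(axis=0),
        "legs": keypoints[LEGS].mean(axis=0),
    }

frame_keypoints = np.random.rand(17, 2) * [640, 480]   # placeholder detections
print(region_points(frame_keypoints))
```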

Keywords: fall detection, machine learning, deep learning, pose estimation, tracking

Procedia PDF Downloads 183
5154 Deep Learning-Based Image Classifiers for Detection of CSSVD in Cacao Plants

Authors: Atuhurra Jesse, N'guessan Yves-Roland Douha, Pabitra Lenka

Abstract:

The detection of diseases within plants has attracted a lot of attention from computer vision enthusiasts. Despite the progress made in detecting diseases in many plants, there remains a research gap in training image classifiers to detect the cacao swollen shoot virus disease (CSSVD), pertinent to cacao plants. This gap has mainly been due to the unavailability of high-quality labeled training data; moreover, institutions have been hesitant to share their data related to CSSVD. To fill these gaps, image classifiers to detect CSSVD-infected cacao plants are presented in this study. The classifiers are based on VGG16, ResNet50 and the Vision Transformer (ViT). The image classifiers are evaluated on the recently released and publicly accessible KaraAgroAI Cocoa dataset. The best-performing image classifier, based on ResNet50, achieves 95.39% precision, 93.75% recall, 94.34% F1-score and 94% accuracy after only 20 epochs, a +9.75% improvement in recall compared to previous works. These results indicate that the image classifiers learn to identify cacao plants infected with CSSVD.
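
A hedged sketch of a ResNet50-based classifier of the kind evaluated above; the weights, class count, optimiser settings and the stand-in batch are placeholders rather than the authors' exact configuration.

```python
# Hedged sketch of a ResNet50 classifier head for CSSVD detection; class count,
# hyperparameters and the random batch are placeholders, not the paper's setup.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)     # healthy vs. CSSVD-infected

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)              # stand-in for a KaraAgroAI batch
labels = torch.randint(0, 2, (8,))

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("batch loss:", loss.item())
```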

Keywords: CSSVD, image classification, ResNet50, vision transformer, KaraAgroAI cocoa dataset

Procedia PDF Downloads 96
5153 A Framework for Automated Nuclear Waste Classification

Authors: Seonaid Hume, Gordon Dobie, Graeme West

Abstract:

Detecting and localizing radioactive sources is a necessity for the safe and secure decommissioning of nuclear facilities. An important aspect of managing the sort-and-segregation process is establishing the spatial distributions and quantities of the waste radionuclides, their type, their corresponding activity, and ultimately their classification for disposal. The data received from surveys directly informs decommissioning plans, on-site incident management strategies and the approach needed for a new cell, as well as protecting the workforce and the public. Manual classification of nuclear waste from a nuclear cell is time-consuming, expensive, and requires significant expertise to make the classification judgement call. In addition, in-cell decommissioning is still in its relative infancy, and few techniques are well developed. As with any repetitive and routine task, there is an opportunity to improve the classification of nuclear waste using autonomous systems. Hence, this paper proposes a new framework for the automatic classification of nuclear waste. This framework consists of five main stages: 3D spatial mapping and object detection, object classification, radiological mapping, source localisation based on the gathered evidence and, finally, waste classification. The first stage of the framework, 3D visual mapping, involves object detection from point cloud data. A review of related applications in other industries is provided, and recommendations for approaches to waste classification are made. Object detection focuses initially on cylindrical objects, since pipework is significant in nuclear cells and indeed on any industrial site; the approach can be extended to other commonly occurring primitives such as spheres and cubes. This is in preparation for stage two: characterizing the point cloud data and estimating the dimensions, material, degradation, and mass of the detected objects in order to feature-match them to an inventory of possible items found in that nuclear cell. Many items in nuclear cells are one-offs, have limited or poor drawings available, or have been modified since installation, and have complex interiors, which often and inadvertently pose difficulties when accessing certain zones and identifying waste remotely; hence, expert input may be required to feature-match objects. The third stage, radiological mapping, proceeds similarly in order to characterize the nuclear cell in terms of radiation fields, including the type of radiation, its activity, and its location within the nuclear cell. The fourth stage of the framework takes the visual map from stage 1, the object characterization from stage 2, and the radiation map from stage 3 and fuses them together, providing a more detailed scene of the nuclear cell by identifying the location of radioactive materials in three dimensions. The last stage combines the evidence from the fused data sets to reveal the classification of the waste in Bq/kg, thus enabling better decision making and monitoring for in-cell decommissioning. The presentation of the framework is supported by representative case study data drawn from a decommissioning application at a UK nuclear facility. This framework utilises recent advancements in the detection and mapping of complex radiation fields in three dimensions to make the process of classifying nuclear waste faster, more reliable, more cost-effective and safer.
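
The toy sketch below illustrates the fusion and classification idea of stages four and five: a labelled voxel grid from the visual map is combined with a co-registered activity grid so that each detected object receives a specific activity in Bq/kg; the grids, masses and values are invented for illustration.

```python
# Toy sketch of the fusion step (stages 4 and 5): a labelled voxel grid from
# the visual map is combined with a co-registered activity grid so that each
# detected object gets an aggregate activity and a specific activity in Bq/kg.
import numpy as np

object_ids = np.zeros((20, 20, 20), dtype=int)    # 0 = empty, 1..n = detected objects
object_ids[5:10, 5:10, 5:10] = 1                  # e.g. a section of pipework
activity = np.random.gamma(2.0, 50.0, object_ids.shape)   # Bq per voxel (synthetic)
mass_kg = {1: 35.0}                                        # object mass, as estimated in stage 2

for obj in np.unique(object_ids[object_ids > 0]):
    total_bq = activity[object_ids == obj].sum()
    print(f"object {obj}: {total_bq / mass_kg[obj]:.1f} Bq/kg")
```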

Keywords: nuclear decommissioning, radiation detection, object detection, waste classification

Procedia PDF Downloads 198
5152 Anti-TNF: Possibilities of Rising Anti-Phosphorylcholine Antibodies

Authors: Md. Mizanur Rahman, Anquan Liu, Anna Frostegård, Johan Frostegård

Abstract:

The role of the human immune system is essential in cardiovascular diseases and atherosclerosis. Activated cells in atherosclerosis produce abundant amounts of cytokines, but the exact mechanisms behind the effects of these inflammatory cytokines in atherosclerosis are not clear. In a large clinical cohort, we previously determined that antibodies against phosphorylcholine (anti-PC) are negatively and independently associated with the development of atherosclerosis, and are associated with a low risk of cardiovascular disease. Further, we reported that the rheumatoid arthritis patients who were non-responders to TNF inhibitors were those with low anti-PC levels, and that upon anti-TNF treatment anti-PC levels increased. We therefore hypothesised that proinflammatory cytokines such as TNF could play a role in anti-PC regulation. Peripheral blood mononuclear cells (PBMC) were cultured with or without TNF and anti-TNF, and the cell supernatants were collected after six days for ELISA measurements. In separate experiments, cells were cultured for 24 hours in both polystyrene plates and ELISPOT plates under similar conditions for ELISA and ELISPOT assays, respectively. Total RNA was extracted after 6 hours of cell culture to perform RT-qPCR. Cell viability was confirmed by trypan blue staining and MTT assays. ELISA measurements detected less than 40% of the anti-PC in TNF-treated cells in comparison to control cells, whereas anti-PC production was recovered by anti-TNF treatment. ELISPOT assays showed that TNF suppresses anti-PC production by inhibiting anti-PC-producing B cells. In addition, RT-qPCR and ELISA showed that TNF also affects B-cell activation, as BAFF expression was inhibited by TNF treatment. Atherosclerosis is a major cause of cardiovascular diseases, and anti-PC is a protection marker for atherosclerosis development. Our findings show that TNF is a negative regulator of anti-PC production; immune modulation and the raising of anti-PC could be of major significance for patients.

Keywords: anti-PC, anti-TNF, atherosclerosis, cardiovascular diseases, phosphorylcholine

Procedia PDF Downloads 240
5151 DISGAN: Efficient Generative Adversarial Network-Based Method for Cyber-Intrusion Detection

Authors: Hongyu Chen, Li Jiang

Abstract:

Ubiquitous anomalies constantly endanger the security of our systems. They may bring irreversible damage to the system and cause leakage of privacy; thus, it is of vital importance to detect these anomalies promptly. Traditional supervised methods such as Decision Trees and Support Vector Machines (SVM) are used to classify normality and abnormality. However, in some cases the abnormal statuses are far rarer than normal ones, which leads to a decision bias in these methods. The generative adversarial network (GAN) has been proposed to handle this case: with its strong generative ability, it only needs to learn the distribution of the normal status and identifies the abnormal status through the gap between a sample and the learned distribution. Nevertheless, existing GAN-based models are not suitable for processing data with discrete values, leading to an immense degradation of detection performance. To cope with discrete features, in this paper we propose an efficient GAN-based model with a specifically designed loss function. Experimental results show that our model outperforms state-of-the-art models on a discrete dataset and remarkably reduces the overhead.
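
A minimal sketch of GAN-based anomaly scoring of the general kind described above: after training on normal traffic only, a record is flagged when the discriminator assigns it a low "looks-normal" score. The network below is untrained and its input width is only an example; the paper's discrete-feature loss is not shown.

```python
# Minimal sketch of discriminator-based anomaly scoring. The discriminator is
# untrained here (placeholder weights); in practice it would be trained
# adversarially on normal traffic only before being used for scoring.
import torch
import torch.nn as nn

discriminator = nn.Sequential(
    nn.Linear(41, 64), nn.ReLU(),          # 41 features, an example record width
    nn.Linear(64, 1), nn.Sigmoid(),
)

sample = torch.randn(1, 41)                # placeholder connection record
score = discriminator(sample).item()       # close to 1 -> resembles the learned normal data
print("anomaly" if score < 0.5 else "normal", f"(score={score:.2f})")
```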

Keywords: GAN, discrete feature, Wasserstein distance, multiple intermediate layers

Procedia PDF Downloads 127
5150 House Extension Strategy in High-Density Informal Settlement: A Case Study in Kampung Cikini, Jakarta, Indonesia

Authors: Meidesta Pitria, Akiko Okabe

Abstract:

In high-density informal settlements, extending the house into the area outside it is a primary form of spatial modification. House extension in a high-density informal settlement is not only a physical spatial modification that blurs the boundary between private and public space, but also supports the growth and existence of the informal economy and other daily activities of both individuals and communities. This research takes as a case study an informal settlement named Kampung Cikini, a densely populated area in Central Jakarta. The aim of this study is to identify and clarify house extension as a strategy for dealing with urbanization in an informal settlement. Using the perspective and information provided by housewives, the analysis is based on the assumption that the transformation of land ownership and the activities taking place in the house extension areas influence both the different kinds of spatial modification of house extensions and the local planning policy related to implementing the house extension strategy. Data were collected at four sites: two located on an outer, wide alley and two on an inner, narrow alley. Data on 104 housewives in 86 houses were collected through representatives of the housewives and the local leader of each site. The research started with a participatory mapping process and in-depth interviews with local leaders, and initiated a collaboration with the housewives' community by organizing a celebration as a communal event to explore the issue together. This study shows that land ownership, activities, and the alley are indispensable to decisions about making extension space. The more permanent the land ownership status, the more permanent and varied the extensions that can be implemented; however, in some blocks the existence of the original house or the first landowner also plays a significant role in coordinating and agreeing on the use and modification of extension space. On the outer, wide alley, the greater variety of activities in the front areas of the houses is significantly related to the opportunity offered by the wider alley, particularly for informal income-generating activities. On the inner, narrow alley, the limited space in front of the houses requires more negotiation within the community to create more shared spaces, even inside private space.

Keywords: house extension, housewives, informal settlement, kampung, high density

Procedia PDF Downloads 203
5149 Global Stability Analysis of a Coupled Model for Healthy and Cancerous Cells Dynamics in Acute Myeloid Leukemia

Authors: Abdelhafid Zenati, Mohamed Tadjine

Abstract:

The mathematical formulation of biomedical problems is an important phase in understanding and predicting the dynamics of the controlled population. In this paper we perform a stability analysis of a coupled model for the dynamics of healthy and cancerous cells in acute myeloid leukemia; this represents our first aim. Second, we illustrate the effect of the interconnection between healthy and cancer cells. The PDE-based model is transformed into a nonlinear distributed state-space model (a delay system). For an equilibrium point of interest, necessary and sufficient conditions for global asymptotic stability are given. We thus arrive at necessary and sufficient conditions for the global asymptotic stability of the origin and of the healthy situation, and for the control of the dynamics of normal hematopoietic stem cells and cancerous cells during acute myeloid leukemia. Simulation studies are given to illustrate the developed results.

Keywords: distributed delay, global stability, modelling, nonlinear models, PDE, state space

Procedia PDF Downloads 249
5148 Research on the Public Governance of Urban Public Green Spaces from the Perspective of Institutional Economics

Authors: Zhang Xue

Abstract:

Urban public green spaces have evolved from classical private gardens and have expanded into multi-dimensional spatial value attributes such as scale and property rights. Among these, ecological and environmental value, social interaction value, and commercial and economic value have become consensus value characteristics. From the perspective of institutional economics, urban public green spaces, as a type of non-exclusive and non-competitive public good, express the social connotation of spatial "publicness", and multiple values are an important attribute of them. However, because of the positive-externality characteristics of public green spaces, the cost-benefit functions of the different actors are inconsistent, leading to issues such as the "anti-commons tragedy" of transitional management, a lack of public sense of responsibility for the space, and a weakened public character. It is necessary to enhance the "publicness" of urban public green spaces through effective institutional arrangements, inclusive planning participation, and humane management measures, promoting urban public openness and the enhancement of multiple values.

Keywords: public green spaces, publicness, governance, institutional economics

Procedia PDF Downloads 52
5147 Audio-Visual Co-Data Processing Pipeline

Authors: Rita Chattopadhyay, Vivek Anand Thoutam

Abstract:

Speech is the most acceptable means of communication, allowing us to exchange feelings and thoughts quickly. Quite often, people can communicate orally but cannot interact or work with computers or devices; it is easier and quicker to give speech commands than to type them, and likewise easier to listen to audio played from a device than to read its output. With robotics being an emerging market with applications in warehouses, the hospitality industry, consumer electronics, assistive technology, etc., speech-based human-machine interaction is emerging as a lucrative feature for robot manufacturers. Considering this, the objective of this paper is to design an "Audio-Visual Co-Data Processing Pipeline." This pipeline is an integrated version of automatic speech recognition, a natural language model for text understanding, object detection, and text-to-speech modules. There are many deep learning models for each of the modules mentioned above, but OpenVINO Model Zoo models are used because the OpenVINO toolkit covers both computer vision and non-computer vision workloads across Intel hardware, maximizes performance, and accelerates application development. A speech command is given as input containing the target objects to be detected and the start and end times for extracting the required interval from the video. Speech is converted to text using the QuartzNet automatic speech recognition model. A summary is extracted from the text using the natural language model Generative Pre-Trained Transformer-3 (GPT-3). Based on the summary, the relevant frames are extracted from the video, and the You Only Look Once (YOLO) object detection model detects objects in these extracted frames. The numbers of the frames that contain target objects (the objects specified in the speech command) are saved as text. Finally, this text (the frame numbers) is converted to speech using a text-to-speech model and played from the device. The project is developed for the 80 YOLO labels, and the user can extract frames based on one or two target labels; the pipeline can easily be extended to more than two target labels by making appropriate changes in the object detection module. The project also supports four different speech command formats, achieved by including sample examples in the prompt used by the GPT-3 model; based on user preference, a new speech command format can be added by including examples of that format in the prompt. This pipeline can be used in many projects, such as human-machine interfaces, human-robot interaction, and surveillance through speech commands, and any object detection project can be upgraded with it so that speech commands are given and the output is played from the device.
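
As an illustration of one concrete step of the pipeline (extracting frames for the start/end interval parsed from the speech command), the sketch below uses OpenCV; the video path is a placeholder, and the ASR, GPT-3, YOLO and text-to-speech stages are omitted.

```python
# Sketch of one pipeline step only: frame extraction for a start/end interval.
# Requires opencv-python; the video path is a placeholder.
import cv2

def extract_frames(video_path: str, start_s: float, end_s: float, step_s: float = 1.0):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    frames, t = [], start_s
    while t <= end_s:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(t * fps))
        ok, frame = cap.read()
        if not ok:
            break
        frames.append((int(t * fps), frame))   # frame number + image for the detection stage
        t += step_s
    cap.release()
    return frames

print(len(extract_frames("input.mp4", 10.0, 20.0)), "frames extracted")
```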

Keywords: OpenVINO, automatic speech recognition, natural language processing, object detection, text to speech

Procedia PDF Downloads 75
5146 Paper-Like and Battery Free Sensor Patches for Wound Monitoring

Authors: Xiaodi Su, Xin Ting Zheng, Laura Sutarlie, Nur Asinah binte Mohamed Salleh, Yong Yu

Abstract:

Wound healing is a dynamic process with multiple phases, and rapid profiling and quantitative characterization of inflammation and infection remain challenging. We have developed paper-like, battery-free multiplexed sensors for holistic wound assessment via the quantitative detection of multiple inflammation and infection markers. In one of the designs, the sensor patch consists of a wax-printed paper panel with five colorimetric sensor channels arranged in a pattern resembling a five-petaled flower (denoted a ‘Petal’ sensor). The five sensors are for temperature, pH, trimethylamine, uric acid, and moisture. The sensor patch is sandwiched between a top transparent silicone layer and a bottom adhesive wound contact layer. In the second design, a palm-shaped paper strip fabricated with a paper-cutter printer (denoted a ‘Palm’ sensor) carries five sensor regions connected by a stem sampling entrance, enabling rapid colorimetric detection of multiple bacterial metabolites (aldehyde, lactate, moisture, trimethylamine, tryptophan) from wound exudate. For both the ‘Petal’ and ‘Palm’ sensors, color images can be captured by a mobile phone; from the color changes, one can quantify the concentrations of the biomarkers and then determine wound healing status and identify and quantify bacterial species in infected wounds. The ‘Petal’ and ‘Palm’ sensors were validated with in-situ animal and ex-situ skin wound models, respectively. These sensors have the potential to be integrated with wound dressings to allow early warning of adverse events without frequent removal of the plasters. Such in-situ and early detection of non-healing conditions can trigger immediate clinical intervention to facilitate wound care management.
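
A small sketch of the kind of calibration that maps a color-channel reading from a phone image to a concentration; the standard-curve values and units are invented and do not come from the paper.

```python
# Illustrative calibration step (values invented): a color-channel intensity
# read from a phone image is mapped to an analyte concentration by
# interpolating a previously measured standard curve.
import numpy as np

standard_intensity = np.array([30, 55, 90, 130, 170])    # mean channel value of standards
standard_conc = np.array([0.0, 0.1, 0.25, 0.5, 1.0])     # mM, example standards

measured = 105.0                                          # reading from the sensor patch image
conc = np.interp(measured, standard_intensity, standard_conc)
print(f"estimated concentration: {conc:.2f} mM")
```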

Keywords: wound infection, colorimetric sensor, paper fluidic sensor, wound care

Procedia PDF Downloads 76
5145 Establishing Taiwan's Marine Space Planning System

Authors: Wen-Yan Chiau

Abstract:

Taiwan passed the 'Basic Ocean Act' in November 2019, and in accordance with Article 4 of its provisions, the government should draft a decree on marine spatial planning (MSP). In the past few years, although Taiwan has passed the 'Coastal Zone Management Act' and the 'Spatial Planning Act', in the face of the multiple uses of marine areas it still lacks a comprehensive blueprint for marine area use and a fundamental mechanism for multi-purpose planning and management. In particular, Taiwan's active development of offshore wind power is facing this problem, and without a holistic system it is impossible to fully reconcile the uses of each area with the public welfare, which highlights the urgency of establishing an MSP system. Therefore, this article reviews relevant Taiwanese laws and regulations, refers to important international initiatives and experiences as well as exchanges of practical experience at international conferences, and proposes an adequate framework, principles, procedures, and promotion strategies for MSP. Possible solutions to promote the sustainable and wise use of Taiwan's waters are also suggested for comment.

Keywords: basic ocean act, coastal zone management act, marine spatial planning, spatial planning act, Taiwan

Procedia PDF Downloads 128
5144 Fashion as a Tool of Modernity and Female Empowerment in the Nineteenth-Century Zenana

Authors: Ira Solomatina

Abstract:

This paper looks at the role of fashion and clothes in the context of the late nineteenth-century Indian zenana. It suggests that fashion and clothes served as tools of self-assertion and empowerment for zenana women, allowing them to negotiate between tradition and modernity and to establish themselves as modern subjects. In pre-Independence India, in upper-class Indian households, the zenana was the women's part of the house, where women lived separately from men and in seclusion (purdah). To male colonial scholars and officials, the zenana remained impenetrable, inviting speculation about the position of its women. In the colonial imagination, the Indian woman was not only a helpless victim oppressed by the Indian man but also an agent of deviant sexuality; consequently, in colonial British scholarship the zenana was portrayed as a space of idleness, perverse sexuality, ignorance, and illness. Contrary to the dominant ideas about the zenana, some Western women writers presented more varied accounts of zenana life, noting the good education, dignified manners, and sophisticated fashion choices of the women within it. Contemporary research by postcolonial scholars shows that zenana women in purdah travelled and had access to education and political power; the history of India has examples of women rulers in purdah and more than enough instances of zenana women influencing politics and culture. The zenana, in short, was not an ahistorical, dark realm of idleness but a space of culture, and a space affected by modernity. The paper shows that, in the context of the zenana, clothes and fashion provided a visual vocabulary for the women to establish themselves as modern subjects and to negotiate between modernity and tradition. To do so, it relies on photographs of zenana women and written accounts about and from the nineteenth-century zenana.

Keywords: woman's fashion, colonial India, modernity, zenana

Procedia PDF Downloads 146
5143 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection

Authors: S. Delgado, C. Cerrada, R. S. Gómez

Abstract:

This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges of voxelization on the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times: these repeated voxels incur costly memory operations that carry no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing a triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability: written as a single compute shader in the OpenGL Shading Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency but also its ability to produce accurate, 26-tunnel-free voxelizations. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces. Furthermore, we introduce the Slope Consistency Value metric, quantifying the alignment of each triangle with its primary axis. This metric provides insight into the impact of triangle orientation on scan-line-based voxelization methods; it also aids in understanding how the Gap Detection technique improves results by targeting specific areas where simple scan-line-based methods might fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods: the Gap Detection technique fills a critical gap in the voxelization process, and by addressing these gaps our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, 'Closing the Gap: Efficient Voxelization with Equidistant Scan-lines and Gap Detection' presents an effective solution to the challenges of voxelization. Our research combines computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization.
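
A CPU-side illustration of the scan-line idea (the paper's method is a GLSL compute shader): points are sampled along parallel lines swept across the triangle and snapped to voxel indices, so interior voxels are visited systematically rather than rediscovered many times; the sampling density and voxel size are arbitrary, and the Gap Detection step is not shown.

```python
# Simplified CPU sketch of scan-line surface voxelization for one triangle;
# the paper's GPU shader and its Gap Detection pass are not reproduced here.
import numpy as np

def voxelize_triangle(v0, v1, v2, voxel_size=0.1):
    v0, v1, v2 = map(np.asarray, (v0, v1, v2))
    voxels = set()
    n_lines = max(2, int(np.linalg.norm(v2 - v0) / voxel_size) * 2)
    for s in np.linspace(0.0, 1.0, n_lines):
        a, b = v0 + s * (v2 - v0), v1 + s * (v2 - v1)      # scan-line endpoints
        n_samples = max(2, int(np.linalg.norm(b - a) / voxel_size) * 2)
        for t in np.linspace(0.0, 1.0, n_samples):
            p = a + t * (b - a)
            voxels.add(tuple(np.floor(p / voxel_size).astype(int)))
    return voxels

print(len(voxelize_triangle([0, 0, 0], [1, 0, 0], [0, 1, 0])), "voxels")
```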

Keywords: voxelization, GPU acceleration, computer graphics, compute shaders

Procedia PDF Downloads 67
5142 Solvent Dependent Triazole-Appended Glucofuranose-Based Fluorometric Sensor for Detection of Au³⁺ Ions

Authors: Samiul Islam Hazarika, Domngam Boje, Ananta Kumar Atta

Abstract:

It is well known that solvents play a significant role in modern chemistry: they can change the reactivity and physicochemical properties of molecules in solution. Keeping this in mind, we have designed and synthesized a mono-triazolyl-linked pyrenyl-appended xylofuranose derivative for the detection of metal ions under changing solvent systems. The incorporation of a sugar backbone in the sensor increases its water solubility and biocompatibility. The experimental study revealed that the xylofuranose-based fluorescent probe did not exhibit any specific selectivity towards metal ions in acetonitrile (CH₃CN), whereas the triazole-linked pyrenyl-appended xylofuranose-based sensor exhibited high selectivity and sensitivity towards Au³⁺ ions in CH₃CN-H₂O (1/1, v/v). This observation might be explained by the viscosity and polarity differences between the CH₃CN and CH₃CN-H₂O solvent systems. The formation of the sensor-Au³⁺ complex was also established by high-resolution mass spectrometry (HRMS) data for the complex.

Keywords: triazole, furanose, fluorometric, solvent dependent

Procedia PDF Downloads 112
5141 Functional Nanomaterials for Environmental Applications

Authors: S. A. M. Sabrina, Gouget Lammel, Anne Chantal, Chazalviel, Jean Noël, Ozanam François, Etcheberry Arnaud, Tighlit Fatma Zohra, B. Samia, Gabouze Noureddine

Abstract:

The elaboration and characterization of hybrid nanomaterials give rise to considerable interest due to the new properties that arise. They are considered an important category of new materials with innovative characteristics, combining the specific intrinsic properties of inorganic compounds (semiconductors) with those of grafted organic species. This opens the way to improved properties and spectacular applications in various important fields, especially the environment. In this work, semiconductor-based nanomaterials were elaborated by a chemical route, and the obtained surfaces were grafted with organic functional groups. The functionalization process was optimized in order to confer on the hybrid nanomaterial good stability as well as the properties required for the subsequent applications. Different characterization techniques were used to investigate the resulting nanostructures, such as SEM, UV-Visible, FTIR, contact angle and electrochemical measurements. Finally, applications in the environmental area were envisaged: the elaborated nanostructures were tested for the detection and elimination of pollutants.

Keywords: hybrid materials, porous silicon, peptide, metal detection

Procedia PDF Downloads 495
5140 Digi-Buddy: A Smart Cane with Artificial Intelligence and Real-Time Assistance

Authors: Amaladhithyan Krishnamoorthy, Ruvaitha Banu

Abstract:

Vision is considered the most important sense in humans, without which leading a normal life can often be difficult. There are many existing smart canes for the visually impaired that use an ultrasonic transducer for obstacle detection to help them navigate. Though the basic smart cane increases the safety of its users, it does not help fill the void of visual loss. This paper introduces the concept of Digi-Buddy, an evolved smart cane for the visually impaired. The cane consists of several modules. Apart from the basic obstacle detection features, Digi-Buddy assists the user by capturing video/images with a wide-angled camera and streaming them to a server, which then detects the objects using a deep convolutional neural network. In addition to determining what a particular image/object is, the distance to the object is assessed by the ultrasonic transducer. A sound generation application, modelled with the help of natural language processing, is used to convert the processed images/objects into audio; the detected object is signified by its name, which is transmitted to the user through Bluetooth earphones. The object detection is extended to facial recognition, which matches the faces of the people the user meets against a database of face images and alerts the user about the person. Another crucial function is an automatic intimation alarm, which is triggered when the user is in an emergency: if the user recovers within a set time, a button provisioned on the cane stops the alarm; otherwise, an automatic intimation of the user's whereabouts is sent to friends and family using GPS. In addition to the safety and security provided by existing smart canes, the proposed concept is intended to be implemented as a prototype that helps the visually impaired visualize their surroundings through audio in a more amicable way.
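
A simple sketch of the ultrasonic ranging behind the obstacle detection feature: distance follows from the echo round-trip time at the speed of sound; the echo time and warning threshold below are stand-in values rather than readings from the actual device.

```python
# Sketch of ultrasonic ranging: distance is derived from the echo round-trip
# time (speed of sound ~343 m/s). The echo time is a stand-in value, not a
# real transducer reading.
def distance_from_echo(round_trip_s: float, speed_of_sound: float = 343.0) -> float:
    return round_trip_s * speed_of_sound / 2.0   # metres; halve for the return path

echo_time = 0.0058                # seconds, example measurement
d = distance_from_echo(echo_time)
print(f"obstacle at about {d:.2f} m -> {'warn user' if d < 1.5 else 'clear'}")
```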

Keywords: artificial intelligence, facial recognition, natural language processing, internet of things

Procedia PDF Downloads 349