Search results for: estimation algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3842


92 The Role of Temples Redevelopment for Informal Sector Business Development in India

Authors: Prashant Gupta

Abstract:

Throughout India, temples have served as cultural centers, commerce hubs, art galleries, educational institutions, and social centers in addition to being places of worship for centuries. Across the country, there are over two million temples, which are crucial economic hubs attracting devotees and tourists worldwide. India has 53 temples per 100,000 people. As per the NSSO survey, the temple economy is worth about $40 billion, or 2.32 per cent of GDP, based on a survey of major temples that covers only the formal sector. It could be much larger, as an actual estimation has not yet been done. The informal sector represents 43.1% of India's total economy. Over 10 billion domestic tourists visit new destinations every year within India. Some 20 per cent of the 90 million foreign tourists visited the Madurai and Mahabalipuram temples, which became the most visited tourist spots in 2022. Recently, the central government has started revitalizing ancient Indian civilization by reconstructing and beautifying the major temples of India, e.g., the Kashi Vishwanath Corridor, Mahakaleshwara Temple, Kedarnath, and Ayodhya. The researcher chose Kashi as a case study because it is known as the Spiritual Capital of India and is also the abode for the spread of Hinduism, Buddhism, Jainism, and Sikhism, which are core Sanatan Dharmic practices. An amount of 17,800 million INR has been spent to redevelop the Kashi Vishwanath Corridor since 2019. RESEARCH OBJECTIVES 1. To assess the historical contribution of temples to socio-economic development and the revival of Indic civilization. 2. To examine the role of temple redevelopment for informal sector businesses. 3. To identify the sub-sectors of informal sector businesses. 4. To identify products and services of informal businesses for investigation of marketing strategies and business development. PROPOSED METHODS AND PROCEDURES This study will follow a mixed approach, employing both qualitative and quantitative methods of research. To conduct the study, data will be collected from 500 informal business owners through structured questionnaire and interview instruments. The informal business owners will be selected using a systematic random sampling technique. In addition, documents on the last 10 years of tax collection from government offices will be reviewed to substantiate the study. Descriptive and econometric analysis techniques will be employed. EXPECTED CONTRIBUTION OF THE PROPOSED STUDY By studying the contribution of temple redevelopment to informal business creation and growth, the study will be beneficial to informal business owners and the government. For the government, the study will provide scientific and empirical evidence on the contribution of temple redevelopment to informal business creation and growth, informing infrastructural development and boosting tax collection. For informal businesses, the study will give detailed insight into the nature of their business, its future growth potential, and the alternative products and services they could supply to their customers in the future. Studying informal businesses will help identify the key products and services that are most profitable and have the potential to multiply and grow through correct product marketing strategies and business development.

Keywords: business development, informal sector businesses, services and products marketing, temple economics

Procedia PDF Downloads 81
91 Seafloor and Sea Surface Modelling in the East Coast Region of North America

Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk

Abstract:

Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships are used to emit a hydroacoustic signal from transducers and reproduce the topography of the seabed. This solution provides relevant accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information – National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed is still unidentified, as there are still many gaps to be explored between ship survey tracks. Moreover, such measurements are very expensive and time-consuming. A solution is the raster bathymetric models shared by the General Bathymetric Chart of the Oceans (GEBCO). The offered products are a compilation of different sets of data - raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models. Some forms of seafloor relief (e.g., seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. Based on satellite altimetry data, sea surface height and marine gravity anomalies can be estimated, and based on the anomalies, it is possible to infer the structure of the seabed. The main goal of the work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America – a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms employed, model densification, and the creation of grid models. The data used are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis. Visualization of the obtained results was carried out with Geographic Information System tools. The result is an extension of the state of knowledge of the quality and usefulness of the data used for seabed and sea surface modeling, and of the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.). Its changes, along with knowledge of the topography of the ocean floor, inform us indirectly about the volume of the entire ocean. The true shape of the ocean surface is further varied by such phenomena as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, or phases of ocean circulation. Depending on the location of the point, the greater the depth, the lower the trend of sea level change. Studies show that combining data sets from different sources, with different accuracies, can affect the quality of sea surface and seafloor topography models.
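
As an illustration of the gap-filling step described above, the following sketch interpolates sparse ship soundings onto a regular grid and fills the remaining cells from a raster bathymetric model. All data, grid sizes, and interpolation settings are invented placeholders; in practice the raster would be a GEBCO NetCDF tile (readable with xarray.open_dataset) and the soundings would come from MB-System exports.

```python
# Sketch: fill gaps between survey tracks with a raster bathymetric model.
import numpy as np
from scipy.interpolate import griddata

# Regular target grid (stand-in for the GEBCO raster coordinates and depths).
lon = np.linspace(-70, -60, 121)
lat = np.linspace(30, 40, 121)
grid_lon, grid_lat = np.meshgrid(lon, lat)
raster_depth = -4000 + 500 * np.sin(grid_lon / 3) * np.cos(grid_lat / 3)

# Sparse multibeam soundings along a few survey tracks (stand-in for MB-System output).
rng = np.random.default_rng(0)
track_lon = rng.uniform(-70, -60, 800)
track_lat = 30 + (track_lon + 70) + rng.normal(0, 0.2, 800)          # diagonal tracks
track_depth = (-4000 + 500 * np.sin(track_lon / 3) * np.cos(track_lat / 3)
               + rng.normal(0, 10, 800))

# Interpolate soundings onto the grid; cells outside the tracks stay NaN
# and are filled from the raster model.
survey_grid = griddata((track_lon, track_lat), track_depth,
                       (grid_lon, grid_lat), method="linear")
combined = np.where(np.isnan(survey_grid), raster_depth, survey_grid)
print("grid cells filled from soundings:", np.count_nonzero(~np.isnan(survey_grid)))
```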

Keywords: seafloor, sea surface height, bathymetry, satellite altimetry

Procedia PDF Downloads 81
90 Row Detection and Graph-Based Localization in Tree Nurseries Using a 3D LiDAR

Authors: Ionut Vintu, Stefan Laible, Ruth Schulz

Abstract:

Agricultural robotics has been developing steadily over recent years, with the goal of reducing or even eliminating the pesticides used in crops and of increasing productivity by taking over human labor. The majority of crops are arranged in rows. The first step towards autonomous robots, capable of driving in fields and performing crop-handling tasks, is for robots to robustly detect the rows of plants. Recent work towards autonomous driving between plant rows offers big robotic platforms equipped with various expensive sensors as a solution to this problem. These platforms need to be driven over the rows of plants. This approach lacks flexibility and scalability when it comes to the height of plants or the distance between rows. This paper instead proposes an algorithm that makes use of cheaper sensors and can handle higher variability. The main application is in tree nurseries. Here, plant height can range from a few centimeters to a few meters. Moreover, trees are often removed, leading to gaps within the plant rows. The core idea is to combine row detection algorithms with graph-based localization methods as they are used in SLAM. Nodes in the graph represent the estimated pose of the robot, and the edges embed constraints between these poses or between the robot and certain landmarks. This setup aims to improve individual plant detection and deal with exception handling, like row gaps, which are falsely detected as an end of rows. Four methods were developed for detecting row structures in the fields, all using a point cloud acquired with a 3D LiDAR as input. Comparing the field coverage and number of damaged plants, the method that uses a local map around the robot proved to perform the best, with 68% covered rows and 25% damaged plants. This method is then combined with a graph-based localization algorithm, which uses the local map features to estimate the robot's position within the greater field. Testing the upgraded algorithm in a variety of simulated fields shows that the additional information obtained from localization provides a boost in performance over methods that rely purely on perception to navigate. The final algorithm achieved a row coverage of 80% with 27% damaged plants. Future work would focus on achieving a perfect score of 100% covered rows and 0% damaged plants. The main challenges that the algorithm needs to overcome are fields where the height of the plants is too small for the plants to be detected and fields where it is hard to distinguish between individual plants when they are overlapping. The method was also tested on a real robot in a small field with artificial plants. The tests were performed using a small robot platform equipped with wheel encoders, an IMU, and an FX10 3D LiDAR. Over ten runs, the system achieved 100% coverage and 0% damaged plants. The framework built within the scope of this work can be further used to integrate data from additional sensors, with the goal of achieving even better results.
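
As a minimal illustration of the row-detection step, the sketch below projects a synthetic 3D point cloud onto the ground plane and fits a single dominant row line with RANSAC. The single-row simplification, threshold value, and synthetic data are assumptions; the paper itself evaluates four detection variants and couples them with a graph-based localizer.

```python
# Sketch: fit one crop-row line in a ground-plane projection of a LiDAR cloud.
import numpy as np
from sklearn.linear_model import RANSACRegressor, LinearRegression

def detect_row(points_xyz, max_plant_offset=0.15):
    """Fit y = a*x + b through plant hits projected onto the ground plane."""
    xy = points_xyz[:, :2]                        # drop height for the projection
    ransac = RANSACRegressor(LinearRegression(),
                             residual_threshold=max_plant_offset)
    ransac.fit(xy[:, [0]], xy[:, 1])
    a = ransac.estimator_.coef_[0]
    b = ransac.estimator_.intercept_
    return a, b, ransac.inlier_mask_              # inliers = points on the row

# Synthetic cloud: one noisy row plus clutter, standing in for a real scan.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
row = np.c_[x, 0.3 * x + 1.0 + rng.normal(0, 0.05, 200), rng.uniform(0, 1, 200)]
clutter = rng.uniform([0, -2, 0], [10, 6, 1], (80, 3))
a, b, inliers = detect_row(np.vstack([row, clutter]))
print(f"row direction: y = {a:.2f}x + {b:.2f}, inliers: {inliers.sum()}")
```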

Keywords: 3D LiDAR, agricultural robots, graph-based localization, row detection

Procedia PDF Downloads 139
89 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines

Authors: Alexander Guzman Urbina, Atsushi Aoyama

Abstract:

The sustainability of traditional technologies employed in energy and chemical infrastructure poses a major challenge for our society. When making decisions related to the safety of industrial infrastructure, the values of accidental risk become relevant points for discussion. However, the challenge is the reliability of the models employed to obtain the risk data. Such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome those problems are built using Artificial Intelligence (AI), and more specifically using hybrid systems such as neuro-fuzzy algorithms. Therefore, this paper aims to introduce a hybrid algorithm for risk assessment trained using near-miss accident data. As mentioned above, the sustainability of traditional technologies related to energy and chemical infrastructure constitutes one of the major challenges that today's societies and firms are facing. Besides that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. Regarding this issue, it can be argued that the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by the lack of knowledge about the risks. In addition to the social consequences described above, and considering the industrial sector as critical infrastructure due to its large impact on the economy in case of a failure, the relevance of industrial safety has become a critical issue for current society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in an attempt to accurately evaluate the probabilities of failure of the infrastructure and the consequences associated with those failures. However, estimating accidental risks in critical infrastructure involves substantial effort and costs due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper aims to introduce a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with this complexity and uncertainty. The advantage of deep learning using near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of the near-miss deep learning approach for neuro-fuzzy risk assessment in pipelines is to improve the validity of the risk values by learning from near-miss accidents and by imitating human expertise in scoring risks and setting tolerance levels. In summary, the method of deep learning for neuro-fuzzy risk assessment involves a regression analysis called the group method of data handling (GMDH), which consists of determining the optimal configuration of the risk assessment model and its parameters employing polynomial theory.
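
To make the GMDH step mentioned in the abstract concrete, the following sketch builds one GMDH-style layer: quadratic polynomial neurons are fitted for every pair of inputs and the best performers on a validation split are kept. The toy data, layer width, and stopping point are assumptions; the authors' actual neuro-fuzzy pipeline is more elaborate.

```python
# Sketch: one layer of the group method of data handling (GMDH).
import numpy as np
from itertools import combinations

def fit_quadratic_pair(xi, xj, y):
    """Least-squares fit of y ~ a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi^2 + a5*xj^2."""
    A = np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_pair(coef, xi, xj):
    A = np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])
    return A @ coef

def gmdh_layer(X_train, y_train, X_val, y_val, keep=4):
    """Score all input pairs on the validation split and keep the best `keep` neurons."""
    candidates = []
    for i, j in combinations(range(X_train.shape[1]), 2):
        coef = fit_quadratic_pair(X_train[:, i], X_train[:, j], y_train)
        err = np.mean((predict_pair(coef, X_val[:, i], X_val[:, j]) - y_val) ** 2)
        candidates.append((err, i, j, coef))
    candidates.sort(key=lambda c: c[0])
    best = candidates[:keep]
    new_train = np.column_stack([predict_pair(c, X_train[:, i], X_train[:, j])
                                 for _, i, j, c in best])
    new_val = np.column_stack([predict_pair(c, X_val[:, i], X_val[:, j])
                               for _, i, j, c in best])
    return best, new_train, new_val

# Toy near-miss style data: 5 condition variables mapped to a continuous risk score.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = 0.8 * X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2 + rng.normal(0, 0.1, 300)
best, X2_train, X2_val = gmdh_layer(X[:200], y[:200], X[200:], y[200:])
print("best validation MSE in layer 1:", round(best[0][0], 4))
```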

Keywords: deep learning, risk assessment, neuro fuzzy, pipelines

Procedia PDF Downloads 292
88 Application of MALDI-MS to Differentiate SARS-CoV-2 and Non-SARS-CoV-2 Symptomatic Infections in the Early and Late Phases of the Pandemic

Authors: Dmitriy Babenko, Sergey Yegorov, Ilya Korshukov, Aidana Sultanbekova, Valentina Barkhanskaya, Tatiana Bashirova, Yerzhan Zhunusov, Yevgeniya Li, Viktoriya Parakhina, Svetlana Kolesnichenko, Yeldar Baiken, Aruzhan Pralieva, Zhibek Zhumadilova, Matthew S. Miller, Gonzalo H. Hortelano, Anar Turmuhambetova, Antonella E. Chesca, Irina Kadyrova

Abstract:

Introduction: The rapidly evolving COVID-19 pandemic, along with the re-emergence of pathogens causing acute respiratory infections (ARI), has necessitated the development of novel diagnostic tools to differentiate various causes of ARI. MALDI-MS, due to its wide usage and affordability, has been proposed as a potential instrument for diagnosing SARS-CoV-2 versus non-SARS-CoV-2 ARI. The aim of this study was to investigate the potential of MALDI-MS in conjunction with a machine learning model to accurately distinguish between symptomatic infections caused by SARS-CoV-2 and non-SARS-CoV-2 during both the early and later phases of the pandemic. Furthermore, this study aimed to analyze mass spectrometry (MS) data obtained from nasal swabs of healthy individuals. Methods: We gathered mass spectra from 252 samples, comprising 108 SARS-CoV-2-positive samples obtained in 2020 (Covid 2020), 7 SARS-CoV-2-positive samples obtained in 2023 (Covid 2023), 71 samples from symptomatic individuals without SARS-CoV-2 (Control non-Covid ARVI), and 66 samples from healthy individuals (Control healthy). All the samples were subjected to RT-PCR testing. For data analysis, we employed the caret R package to train and test seven machine-learning algorithms: C5.0, KNN, NB, RF, SVM-L, SVM-R, and XGBoost. We conducted a training process using a five-fold (outer) nested repeated (five times) ten-fold (inner) cross-validation with a randomized stratified splitting approach. Results: In this study, we utilized the Covid 2020 dataset as a case group and the non-Covid ARVI dataset as a control group to train and test various machine learning (ML) models. Among these models, XGBoost and SVM-R demonstrated the highest performance, with accuracy values of 0.97 [0.93; 0.97] and 0.95 [0.95; 0.97], specificity values of 0.86 [0.71; 0.93] and 0.86 [0.79; 0.87], and sensitivity values of 0.984 [0.984; 1.000] and 1.000 [0.968; 1.000], respectively. When examining the Covid 2023 dataset, the Naive Bayes model achieved the highest classification accuracy of 43%, while XGBoost and SVM-R achieved accuracies of 14%. For the healthy control dataset, the accuracy of the models ranged from 0.27 [0.24; 0.32] for k-nearest neighbors to 0.44 [0.41; 0.45] for the Support Vector Machine with a radial basis function kernel. Conclusion: ML models trained on MALDI-MS spectra of nasopharyngeal swabs obtained from patients with Covid during the initial phase of the pandemic, as well as from symptomatic non-Covid individuals, showed excellent classification performance, which aligns with the results of previous studies. However, when applied to swabs from healthy individuals and a limited sample of patients with Covid in the late phase of the pandemic, the ML models exhibited lower classification accuracy.
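
The nested cross-validation scheme described in the Methods can be sketched as follows, using scikit-learn as a stand-in for the caret R package actually used by the authors: an outer five-fold split for performance estimation wraps an inner repeated ten-fold split for hyperparameter tuning. The placeholder feature matrix and the SVM-R parameter grid are assumptions.

```python
# Sketch: nested (outer 5-fold, inner repeated 10-fold) cross-validation for an SVM-R analogue.
import numpy as np
from sklearn.model_selection import (StratifiedKFold, RepeatedStratifiedKFold,
                                     GridSearchCV, cross_val_score)
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(179, 500))              # placeholder for binned MALDI-MS intensities
y = np.r_[np.ones(108), np.zeros(71)]        # Covid 2020 cases vs non-Covid ARVI controls

inner = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=1)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=2)

svm_r = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = GridSearchCV(svm_r,
                    {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]},
                    cv=inner, scoring="accuracy")
scores = cross_val_score(grid, X, y, cv=outer, scoring="accuracy")
print("outer-fold accuracies:", np.round(scores, 3))
```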

Keywords: SARS-CoV-2, MALDI-TOF MS, ML models, nasopharyngeal swabs, classification

Procedia PDF Downloads 108
87 Multicenter Evaluation of the ACCESS Anti-HCV Assay on the DxI 9000 ACCESS Immunoassay Analyzer, for the Detection of Hepatitis C Virus Antibody

Authors: Dan W. Rhodes, Juliane Hey, Magali Karagueuzian, Florianne Martinez, Yael Sandowski, Vanessa Roulet, Mahmoud Badawi, Mohammed-Amine Chakir, Valérie Simon, Jérémie Gautier, Françoise Le Boulaire, Catherine Coignard, Claire Vincent, Sandrine Greaume, Isabelle Voisin

Abstract:

Background: Beckman Coulter, Inc. (BEC) has recently developed a fully automated second-generation anti-HCV test on a new immunoassay platform. The objective of this multicenter study conducted in Europe was to evaluate the performance of the ACCESS anti-HCV assay on the recently CE-marked DxI 9000 ACCESS Immunoassay Analyzer as an aid in the diagnosis of HCV (Hepatitis C Virus) infection and as a screening test for blood and plasma donors. Methods: The clinical specificity of the ACCESS anti-HCV assay was determined using HCV antibody-negative samples from blood donors and hospitalized patients. Sample antibody status was determined by a CE-marked anti-HCV assay (Abbott ARCHITECT™ anti-HCV assay or Abbott PRISM HCV assay) with an additional confirmation method (Immunoblot testing with INNO-LIA™ HCV Score - Fujirebio), if necessary, according to pre-determined testing algorithms. The clinical sensitivity was determined using known HCV antibody-positive samples, identified positive by Immunoblot testing with INNO-LIA™ HCV Score - Fujirebio. HCV RNA PCR or genotyping was available on all Immunoblot-positive samples for further characterization. The false initial reactive rate was determined on fresh samples from blood donors and hospitalized patients. Thirty (30) commercially available seroconversion panels were tested to assess the sensitivity for early detection of HCV infection. The study was conducted from November 2019 to March 2022. Three (3) external sites and one (1) internal site participated. Results: Clinical specificity (95% CI) was 99.7% (99.6 – 99.8%) on 5852 blood donors and 99.0% (98.4 – 99.4%) on 1527 hospitalized patient samples. There were 15 discrepant samples (positive on the ACCESS anti-HCV assay and negative on both ARCHITECT and Immunoblot) observed with hospitalized patient samples, and of note, additional HCV RNA PCR results showed five (5) samples had positive HCV RNA PCR results despite the absence of HCV antibody detection by ARCHITECT and Immunoblot, suggesting a better sensitivity of the ACCESS anti-HCV assay with these five samples compared to the ARCHITECT and Immunoblot anti-HCV assays. Clinical sensitivity (95% CI) on 510 well-characterized, known HCV antibody-positive samples was 100.0% (99.3 – 100.0%), including 353 samples with known HCV genotypes (1 to 6). The overall false initial reactive rate (95% CI) on 6630 patient samples was 0.02% (0.00 – 0.09%). Results obtained on 30 seroconversion panels demonstrated that the ACCESS anti-HCV assay had equivalent sensitivity performance, with an average bleed difference since the first reactive bleed below one (1), compared to the ARCHITECT™ anti-HCV assay. Conclusion: The newly developed ACCESS anti-HCV assay from BEC for use on the DxI 9000 ACCESS Immunoassay Analyzer demonstrated high clinical sensitivity and specificity, equivalent to currently marketed anti-HCV assays, as well as a low false initial reactive rate.

Keywords: DxI 9000 ACCESS Immunoassay Analyzer, HCV, HCV antibody, Hepatitis C virus, immunoassay

Procedia PDF Downloads 100
86 Seismic Perimeter Surveillance System (Virtual Fence) for Threat Detection and Characterization Using Multiple ML Based Trained Models in Weighted Ensemble Voting

Authors: Vivek Mahadev, Manoj Kumar, Neelu Mathur, Brahm Dutt Pandey

Abstract:

Perimeter guarding and protection of critical installations require prompt intrusion detection and assessment to take effective countermeasures. Currently, visual and electronic surveillance are the primary methods used for perimeter guarding. These methods can be costly and complicated, requiring careful planning according to the location and terrain. Moreover, these methods often struggle to detect stealthy and camouflaged insurgents. The objective of the present work is to devise a surveillance technique using seismic sensors that overcomes the limitations of existing systems. The aim is to improve intrusion detection, assessment, and characterization by utilizing seismic sensors. Most similar systems have only two types of intrusion detection capability, viz., human or vehicle. In our work, we could categorize further and identify types of intrusion activity such as walking, running, group walking, fence jumping, tunnel digging, and vehicular movement. A virtual fence of 60 meters at GCNEP, Bahadurgarh, Haryana, India, was created by installing four underground geophones at a distance of 15 meters each. The signals received from these geophones are then processed to find unique seismic signatures called features. Various feature optimization and selection methodologies, such as LightGBM, Boruta, Random Forest, logistic regression, Recursive Feature Elimination, Chi-squared, and the Pearson ratio, were used to identify the best features for training the machine learning models. The trained models were developed using algorithms such as the supervised support vector machine (SVM) classifier, kNN, Decision Tree, Logistic Regression, Naïve Bayes, and Artificial Neural Networks. These models were then used to predict the category of events, employing weighted ensemble voting to analyze and combine their results. The models were trained with 1940 training events, and the results were evaluated with 831 test events. It was observed that using weighted ensemble voting increased the efficiency of predictions. In this study, we successfully developed and deployed the virtual fence using geophones. Since these sensors are passive, do not radiate any energy, and are installed underground, it is impossible for intruders to locate and nullify them. Their flexibility, quick and easy installation, low cost, hidden deployment, and unattended surveillance make such systems especially suitable for critical installations and remote facilities with difficult terrain. This work demonstrates the potential of utilizing seismic sensors for creating better perimeter guarding and protection systems using multiple machine learning models in weighted ensemble voting. In this study, the virtual fence achieved an intruder detection efficiency of over 97%.
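
A minimal sketch of the weighted ensemble voting step is shown below: several of the classifiers named above are combined with per-model weights, here taken proportional to each model's own cross-validated accuracy. The synthetic geophone feature table and the weighting rule are illustrative assumptions.

```python
# Sketch: weighted soft-voting ensemble over several classifiers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder for the seismic feature table: 1940 training + 831 test events,
# 6 event classes (walking, running, group walking, fence jumping, digging, vehicle).
X, y = make_classification(n_samples=2771, n_features=25, n_informative=12,
                           n_classes=6, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=1940, stratify=y,
                                          random_state=0)

members = [("svm", SVC(probability=True)),
           ("knn", KNeighborsClassifier()),
           ("tree", DecisionTreeClassifier()),
           ("logreg", LogisticRegression(max_iter=1000)),
           ("nb", GaussianNB())]

# Weight each member by its own cross-validated accuracy on the training events.
weights = [cross_val_score(m, X_tr, y_tr, cv=5).mean() for _, m in members]

ensemble = VotingClassifier(estimators=members, voting="soft", weights=weights)
ensemble.fit(X_tr, y_tr)
print("ensemble test accuracy:", round(ensemble.score(X_te, y_te), 3))
```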

Keywords: geophone, seismic perimeter surveillance, machine learning, weighted ensemble method

Procedia PDF Downloads 79
85 Cyber-Med: Practical Detection Methodology of Cyber-Attacks Aimed at Medical Devices Eco-Systems

Authors: Nir Nissim, Erez Shalom, Tomer Lancewiki, Yuval Elovici, Yuval Shahar

Abstract:

Background: A Medical Device (MD) is an instrument, machine, implant, or similar device that includes a component intended for the purpose of the diagnosis, cure, treatment, or prevention of disease in humans or animals. Medical devices play increasingly important roles in health services eco-systems, including: (1) patient diagnostics and monitoring; (2) medical treatment and surgery; and (3) patient life support devices and stabilizers. MDs are part of the medical device eco-system and are connected to the network, sending vital information to the internal medical information systems of medical centers that manage these data. Wireless components (e.g. Wi-Fi) are often embedded within medical devices, enabling doctors and technicians to control and configure them remotely. All these functionalities, roles, and uses of MDs make them attractive targets of cyber-attacks launched for many malicious goals; this trend is likely to increase significantly over the next several years, with increased awareness regarding MD vulnerabilities, the enhancement of potential attackers' skills, and the expanded use of medical devices. Significance: We propose to develop and implement Cyber-Med, a unique collaborative project of Ben-Gurion University of the Negev and the Clalit Health Services Health Maintenance Organization. Cyber-Med focuses on the development of a comprehensive detection framework that relies on a critical attack repository that we aim to create. Cyber-Med will allow researchers and companies to better understand the vulnerabilities and attacks associated with medical devices, as well as provide a comprehensive platform for developing detection solutions. Methodology: The Cyber-Med detection framework will consist of two independent, but complementary detection approaches: one for known attacks, and the other for unknown attacks. These modules incorporate novel ideas and algorithms inspired by our team's domains of expertise, including cyber security, biomedical informatics, advanced machine learning, and temporal data mining techniques. The establishment and maintenance of Cyber-Med's up-to-date attack repository will strengthen the capabilities of Cyber-Med's detection framework. Major Findings: Based on our initial survey, we have already found more than 15 types of vulnerabilities and possible attacks aimed at MDs and their eco-system. Many of these attacks target individual patients who use devices such as pacemakers and insulin pumps. In addition, such attacks are also aimed at MDs that are widely used by medical centers, such as MRIs, CTs, and dialysis engines; the information systems that store patient information; protocols such as DICOM; standards such as HL7; and medical information systems such as PACS. However, current detection tools, techniques, and solutions generally fail to detect both the known and unknown attacks launched against MDs. Very little research has been conducted to protect these devices from cyber-attacks, since most of the development and engineering efforts are aimed at the devices' core medical functionality, the contribution to patients' healthcare, and the business aspects associated with the medical device.

Keywords: medical device, cyber security, attack, detection, machine learning

Procedia PDF Downloads 357
84 Seismic Response Control of Multi-Span Bridge Using Magnetorheological Dampers

Authors: B. Neethu, Diptesh Das

Abstract:

The present study investigates the performance of a semi-active controller using magnetorheological (MR) dampers for seismic response reduction of a multi-span bridge. The application of structural control to structures during earthquake excitation involves numerous challenges, such as proper formulation and selection of the control strategy, mathematical modeling of the system, uncertainty in system parameters, and noisy measurements. These problems, however, need to be tackled in order to design and develop controllers which will perform efficiently in such complex systems. A control algorithm which can accommodate uncertainty and imprecision better than many other algorithms, due to its inherent robustness and ability to cope with parameter uncertainties and imprecision, is the sliding mode algorithm. A sliding mode control algorithm is adopted in the present study due to its inherent stability and distinguished robustness to system parameter variation and external disturbances. In general, a semi-active control scheme using an MR damper requires two nested controllers: (i) an overall system controller, which derives the control force required to be applied to the structure, and (ii) an MR damper voltage controller, which determines the voltage required to be supplied to the damper in order to generate the desired control force. In the present study, a sliding mode algorithm is used to determine the desired optimal force. The function of the voltage controller is to command the damper to produce the desired force. The clipped optimal algorithm is used to find the command voltage supplied to the MR damper, which is regulated by a semi-active control law based on the sliding mode algorithm. The main objective of the study is to propose a robust semi-active control which can effectively control the responses of the bridge under real earthquake ground motions. A lumped mass model of the bridge is developed, and time history analysis is carried out by solving the governing equations of motion in state space form. The effectiveness of the MR dampers is studied through analytical simulations, subjecting the bridge to real earthquake records. In this regard, it may also be noted that the performance of controllers depends, to a great extent, on the characteristics of the input ground motions. Therefore, in order to study the robustness of the controller, the performance of the controllers has been investigated for fourteen different earthquake ground motion records. The earthquakes are chosen in such a way that all possible characteristic variations can be accommodated. Out of these fourteen earthquakes, seven are near-field and seven are far-field. Also, these earthquakes are divided into different frequency contents, viz., low-frequency, medium-frequency, and high-frequency earthquakes. The responses of the controlled bridge are compared with the responses of the corresponding uncontrolled bridge (i.e., the bridge without any control devices). The results of the numerical study show that the sliding mode based semi-active control strategy can substantially reduce the seismic responses of the bridge, showing stable and robust performance for all the earthquakes.
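
The clipped-optimal voltage law mentioned above can be sketched as follows; the maximum voltage value and the sign convention are assumptions for illustration, not the authors' exact controller settings.

```python
# Sketch: clipped-optimal command voltage for one MR damper.
def clipped_optimal_voltage(f_desired, f_measured, v_max=10.0):
    """Return the command voltage for one MR damper.

    f_desired  -- optimal control force from the sliding mode controller
    f_measured -- force currently produced by the damper
    """
    # Apply maximum voltage only if the measured force is aligned with the
    # desired force and smaller in magnitude; otherwise switch the voltage off.
    if f_desired * f_measured > 0 and abs(f_measured) < abs(f_desired):
        return v_max
    return 0.0

# Example: controller asks for -48 kN, damper currently delivers -30 kN.
print(clipped_optimal_voltage(-48e3, -30e3))   # -> 10.0 (apply maximum voltage)
```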

Keywords: bridge, semi active control, sliding mode control, MR damper

Procedia PDF Downloads 124
83 ExactData Smart Tool For Marketing Analysis

Authors: Aleksandra Jonas, Aleksandra Gronowska, Maciej Ścigacz, Szymon Jadczak

Abstract:

Exact Data is a smart tool which helps with meaningful marketing content creation. It helps marketers achieve this by analyzing the text of an advertisement before and after its publication on social media sites like Facebook or Instagram. In our research, we focus on four areas of natural language processing (NLP): grammar correction, sentiment analysis, irony detection, and advertisement interpretation. Our research has identified a considerable lack of NLP tools for the Polish language which specifically aid online marketers. In light of this, our research team has set out to create a robust and versatile NLP tool for the Polish language. The primary objective of our research is to develop a tool that can perform a range of language processing tasks in this language, such as sentiment analysis, text classification, text correction, and text interpretation. Our team has been working diligently to create a tool that is accurate, reliable, and adaptable to the specific linguistic features of Polish, and that can provide valuable insights for a wide range of marketers' needs. In addition to the Polish language version, we are also developing an English version of the tool, which will enable us to expand the reach and impact of our research to a wider audience. Another area of focus in our research involves tackling the challenge of the limited availability of linguistically diverse corpora for non-English languages, which presents a significant barrier in the development of NLP applications. One approach we have been pursuing is the translation of existing English corpora, which would enable us to use the wealth of linguistic resources available in English for other languages. Furthermore, we are looking into other methods, such as gathering language samples from social media platforms. By analyzing the language used in social media posts, we can collect a wide range of data that reflects the unique linguistic characteristics of specific regions and communities, which can then be used to enhance the accuracy and performance of NLP algorithms for non-English languages. In doing so, we hope to broaden the scope and capabilities of NLP applications. Our research focuses on several key NLP techniques, including sentiment analysis, text classification, text interpretation, and text correction. To ensure that we can achieve the best possible performance for these techniques, we are evaluating and comparing different approaches and strategies for implementing them. We are exploring a range of different methods, including transformers and convolutional neural networks (CNNs), to determine which ones are most effective for different types of NLP tasks. By analyzing the strengths and weaknesses of each approach, we can identify the most effective techniques for specific use cases and further enhance the performance of our tool. Our research aims to create a tool which can provide a comprehensive analysis of advertising effectiveness, allowing marketers to identify areas for improvement and optimize their advertising strategies. The results of this study suggest that a smart tool for advertisement analysis can provide valuable insights for businesses seeking to create effective advertising campaigns.
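
As a simple, generic baseline for one of the tasks named above (sentiment analysis of Polish ad copy), the sketch below trains a TF-IDF plus logistic regression classifier. This is not ExactData's implementation, which evaluates transformer and CNN approaches, and the tiny Polish training set is invented for the example.

```python
# Sketch: TF-IDF + logistic regression baseline for sentiment classification of ad copy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["Świetna promocja, polecam!",          # positive
               "Rewelacyjny produkt, kupuję znowu",    # positive
               "Fatalna obsługa, nie polecam",         # negative
               "Strata pieniędzy, bardzo słabe"]       # negative
train_labels = ["pos", "pos", "neg", "neg"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1),
                    LogisticRegression())
clf.fit(train_texts, train_labels)

print(clf.predict(["Super oferta, na pewno skorzystam"]))  # expected: ['pos']
```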

Keywords: NLP, AI, IT, language, marketing, analysis

Procedia PDF Downloads 86
82 Shear Strength Characterization of Coal Mine Spoil in Very-High Dumps with Large Scale Direct Shear Testing

Authors: Leonie Bradfield, Stephen Fityus, John Simmons

Abstract:

The shearing behavior of current and planned coal mine spoil dumps up to 400 m in height is studied using large-sample, high-stress direct shear tests performed on a range of spoils common to the coalfields of Eastern Australia. The motivation for the study is to address industry concerns that some constructed spoil dump heights (>350 m) are exceeding the scale (≤120 m) for which reliable design information exists, and because modern geotechnical laboratories are not equipped to test representative spoil specimens at field-scale stresses. For more than two decades, shear strength estimation for spoil dumps has been based either on infrequent, very small-scale tests where oversize particles are scalped to comply with device specimen size capacity, such that the influence of prototype-sized particles on shear strength is not captured, or on published guidelines that provide linear shear strength envelopes derived from small-scale test data and verified in practice by the slope performance of dumps up to 120 m in height. To date, these published guidelines appear to have been reliable. However, in the field of rockfill dam design there is broad acceptance of a curvilinear shear strength envelope, and if this is applicable to coal mine spoils, then these industry-accepted guidelines may overestimate the strength and stability of dumps at higher stress levels. The pressing need to rationally define the shearing behavior of more representative spoil specimens at field-scale stresses led to the successful design, construction, and operation of a large direct shear machine (LDSM) and its subsequent application to provide reliable design information for current and planned very-high dumps. The LDSM can test at a much larger scale, in terms of combined specimen size (720 mm x 720 mm x 600 mm) and stress (σn up to 4.6 MPa), than has ever previously been achieved using a direct shear machine for geotechnical testing of rockfill. The results of an extensive LDSM testing program on a wide range of coal-mine spoils are compared to a published framework that is widely accepted by the Australian coal mining industry as the standard for shear strength characterization of mine spoil. A critical outcome is that the LDSM data highlights several non-compliant spoils, and stress-dependent shearing behavior, for which the correct application of the published framework will not provide reliable shear strength parameters for design. Shear strength envelopes developed from the LDSM data are also compared with dam engineering knowledge, where failure envelopes of rockfills are curved in a concave-down manner. The LDSM data indicate that shear strength envelopes for coal-mine spoils abundant with rock fragments are not in fact curved and that the shape of the failure envelope is ultimately determined by the strength of the rock fragments. Curvilinear failure envelopes were found to be appropriate for soil-like spoils containing minor or no rock fragments, or hard-soil aggregates.
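
To illustrate the contrast between a linear Mohr-Coulomb envelope and the curvilinear (concave-down) envelopes used in rockfill dam practice, the sketch below fits both forms to a set of invented peak shear strength points; in the study the envelopes are fitted to LDSM results for each spoil category.

```python
# Sketch: linear vs. power-law (curvilinear) shear strength envelopes.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

sigma_n = np.array([0.2, 0.5, 1.0, 2.0, 3.0, 4.6])        # normal stress, MPa (invented)
tau     = np.array([0.19, 0.42, 0.75, 1.30, 1.75, 2.45])  # peak shear stress, MPa (invented)

# Linear Mohr-Coulomb: tau = c + sigma_n * tan(phi)
lin = linregress(sigma_n, tau)
phi = np.degrees(np.arctan(lin.slope))

# Curvilinear power law: tau = A * sigma_n**b  (b < 1 gives a concave-down envelope)
(A, b), _ = curve_fit(lambda s, A, b: A * s**b, sigma_n, tau, p0=(1.0, 0.9))

print(f"Mohr-Coulomb: c = {lin.intercept:.2f} MPa, phi = {phi:.1f} deg")
print(f"Power law:    tau = {A:.2f} * sigma_n^{b:.2f}")
```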

Keywords: coal mine, direct shear test, high dump, large scale, mine spoil, shear strength, spoil dump

Procedia PDF Downloads 161
81 Detection of Patient Roll-Over Using High-Sensitivity Pressure Sensors

Authors: Keita Nishio, Takashi Kaburagi, Yosuke Kurihara

Abstract:

Recent advances in medical technology have served to enhance average life expectancy. However, the total time for which patients are prescribed complete bed rest has also increased. With patients being required to maintain a constant lying posture, which can lead to pressure ulcers (bedsores), the development of a system to detect patient roll-over becomes imperative. For this purpose, extant studies have proposed the use of cameras, and favorable results have been reported. Continuous on-camera monitoring, however, tends to violate patient privacy. We have previously proposed an unconstrained bio-signal measurement system that can detect body motion during sleep and does not violate the patient's privacy. Therefore, in this study, we propose a roll-over detection method using the data obtained from the bio-signal measurement system. Signals recorded by the sensor were assumed to comprise respiration, pulse, body-motion, and noise components. Compared with the respiration and pulse components, the body-motion component generates large vibrations during roll-over. Thus, analysis of the body-motion component facilitates detection of the roll-over tendency. The large vibration associated with the roll-over motion has a great effect on the Root Mean Square (RMS) value of the time series of the body-motion component calculated over short 10 s segments. After calculation, the RMS value of each segment was compared to a threshold value set in advance. If the RMS value in any segment exceeded the threshold, the corresponding data were considered to indicate the occurrence of a roll-over. In order to validate the proposed method, we conducted an experiment. A bi-directional microphone was adopted as a high-sensitivity pressure sensor and was placed between the mattress and bed frame. Recorded signals passed through an analog band-pass filter (BPF) operating over the 0.16-16 Hz bandwidth. The BPF allowed the respiration, pulse, and body-motion components to pass whilst removing the noise component. The output from the BPF was A/D converted at a sampling frequency of 100 Hz, and the measurement time was 480 seconds. The numbers of subjects and recordings were 5 and 10, respectively. Subjects lay on a mattress in the supine position. During data measurement, subjects, upon the investigator's instruction, were asked to roll over into four different positions: supine to left lateral, left lateral to prone, prone to right lateral, and right lateral to supine. Recorded data were divided into 48 segments of 10 s each, and the corresponding RMS value for each segment was calculated. The system was evaluated by the agreement between the investigator's instruction and the detected segment. As a result, an accuracy of 100% was achieved. While reviewing the time series of recorded data, segments indicating roll-over tendencies were observed to demonstrate a large amplitude. However, clear differences between decubitus posture and the roll-over motion could not be confirmed. Extant research has a disadvantage in terms of patient privacy. The proposed study, however, demonstrates more precise detection of patient roll-over tendencies without violating their privacy. As a future prospect, decubitus estimation before and after roll-over could be attempted. Since in this paper we could not confirm clear differences between decubitus posture and the roll-over motion, future studies could be based on utilization of the respiration and pulse components.
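
The detection pipeline can be sketched as follows: band-pass filter the pressure-sensor signal over 0.16-16 Hz, compute the RMS over consecutive 10 s segments, and flag segments whose RMS exceeds a preset threshold. The threshold value and the synthetic recording are assumptions.

```python
# Sketch: RMS-threshold roll-over detection on a band-pass-filtered signal.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100            # sampling frequency, Hz
SEG = 10 * FS       # 10 s segment length in samples
THRESHOLD = 0.5     # RMS threshold (units of the recorded signal), assumed

def detect_rollover(signal):
    # Digital stand-in for the analog 0.16-16 Hz band-pass filter.
    b, a = butter(2, [0.16, 16], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, signal)
    n_seg = len(filtered) // SEG
    rms = np.sqrt(np.mean(filtered[:n_seg * SEG].reshape(n_seg, SEG) ** 2, axis=1))
    return np.flatnonzero(rms > THRESHOLD), rms

# 480 s synthetic recording: low-level respiration-like signal plus one burst of
# large-amplitude body motion around t = 200 s, standing in for a roll-over.
np.random.seed(0)
t = np.arange(480 * FS) / FS
sig = 0.1 * np.sin(2 * np.pi * 0.3 * t) + 0.02 * np.random.randn(t.size)
sig[200 * FS:205 * FS] += 2.0 * np.random.randn(5 * FS)

events, rms = detect_rollover(sig)
print("segments flagged as roll-over:", events)   # expect the segment covering t = 200 s
```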

Keywords: bedsore, high-sensitivity pressure sensor, roll-over, unconstrained bio-signal measurement

Procedia PDF Downloads 121
80 Effect of Land Use and Abandonment on Soil Carbon and Nitrogen Depletion by Runoff in Shallow Soils under Semi-Arid Mediterranean Climate

Authors: Mohamed Emran, Giovanni Pardini, Maria Gispert, Mohamed Rashad

Abstract:

Land use and abandonment in semi-arid degraded ecosystems may cause regressive dynamics in vegetation cover, affecting organic matter contents, soil nutrients, and structural stability, thus reducing soil resistance to erosion. Mediterranean areas are generally subjected to climatic fluctuations, which modify soil conditions and hydrological processes, such as runoff and water infiltration within the upper soil horizons. Low erosion rates occur in very fragile and shallow soils with minor clay content and progressively decrease organic carbon (C) and nitrogen (N) pools in the upper soil horizons. Seven soils were selected representing different contexts of land use and abandonment at the Cap de Creus Peninsula, Catalonia, NE Spain, from recently cultivated vines and olive groves, through mid-abandoned forests standing under cork and pine trees and pasture, to late-abandoned Cistus and Erica scrubs. The aim of this work was to study the effect of changes in land use and abandonment on the depletion of soil organic carbon and nitrogen transported by runoff water in shallow soils after natural rainfall events during two years with different rainfall patterns (1st year with low rainfall and 2nd year with high rainfall) by i) monitoring the most significant soil erosion parameters at recorded rainfall events, ii) studying the most relevant soil physical and chemical characteristics on a seasonal basis, and iii) analysing the seasonal trends of depleted carbon and nitrogen and their interaction with soil surface compaction parameters. Significant seasonal variability was observed in the relevant soil physical and chemical parameters and in the soil erosion parameters in all soils, establishing their evolution under land use and abandonment during two years of different rainfall patterns (214 and 487 mm per year) and giving important indications of soil response to rainfall impacts. Erosion rates decreased significantly with increasing soil C and N under both low and high rainfall. In cultivated soils, C and N depletion increased by 144% and 115%, respectively, with a 13% increase in erosion rates during the 1st year with respect to the 2nd year. Depleted C and N were proportionally higher in soils under vines and olive, with vulnerable soil structure and low soil resilience leading to degradation, altering nutrient cycles, and causing adverse impacts on environmental quality. Statistical analysis underlined that, during the 1st year, the soil surface was less effective in preserving stocks of organic resources, leading to higher susceptibility to erosion with consequent C and N depletion. During the 2nd year, higher organic reserves and water storage occurred despite the increase in C and N loss, with an effective contribution from soil surface compaction parameters. The overall estimation during the two years indicated clear differences among soils under vines, olive, cork, and pines, suggesting, on the one hand, that current cultivation practices are inappropriate and that reforestation with pines may delay the achievement of better soil conditions. On the other hand, the natural succession of vegetation under Cistus, pasture, and Erica suggests the recovery of good soil conditions.

Keywords: land abandonment, land use, nutrient's depletion, soil erosion

Procedia PDF Downloads 347
79 Enhancing Residential Architecture through Generative Design: Balancing Aesthetics, Legal Constraints, and Environmental Considerations

Authors: Milena Nanova, Radul Shishkov, Martin Georgiev, Damyan Damov

Abstract:

This research paper presents an in-depth exploration of the use of generative design in urban residential architecture, with a dual focus on aligning aesthetic values with legal and environmental constraints. The study aims to demonstrate how generative design methodologies can innovate residential building designs that are not only legally compliant and environmentally conscious but also aesthetically compelling. At the core of our research is a specially developed generative design framework tailored for urban residential settings. This framework employs computational algorithms to produce diverse design solutions, meticulously balancing aesthetic appeal with practical considerations. By integrating site-specific features, urban legal restrictions, and environmental factors, our approach generates designs that resonate with the unique character of urban landscapes while adhering to regulatory frameworks. The paper explores how modern digital tools, particularly computational design, and algorithmic modelling, can optimize the early stages of residential building design. By creating a basic parametric model of a residential district, the paper investigates how automated design tools can explore multiple design variants based on predefined parameters (e.g., building cost, dimensions, orientation) and constraints. The paper aims to demonstrate how these tools can rapidly generate and refine architectural solutions that meet the required criteria for quality of life, cost efficiency, and functionality. The study utilizes computational design for database processing and algorithmic modelling within the fields of applied geodesy and architecture. It focuses on optimizing the forms of residential development by adjusting specific parameters and constraints. The results of multiple iterations are analysed, refined, and selected based on their alignment with predefined quality and cost criteria. The findings of this research will contribute to a modern, complex approach to residential area design. The paper demonstrates the potential for integrating BIM models into the design process and their application in virtual 3D Geographic Information Systems (GIS) environments. The study also examines the transformation of BIM models into suitable 3D GIS file formats, such as CityGML, to facilitate the visualization and evaluation of urban planning solutions. In conclusion, our research demonstrates that a generative parametric approach based on real geodesic data and collaborative decision-making could be introduced in the early phases of the design process. This gives the designers powerful tools to explore diverse design possibilities, significantly improving the qualities of the investment during its entire lifecycle.
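
A toy sketch of the parametric variant-generation idea described above is given below: building variants are enumerated from a few parameters, infeasible ones are rejected against simple constraints, and the remainder are ranked. All parameter ranges, unit costs, and constraint values are invented placeholders and do not reflect the authors' framework or any real regulation.

```python
# Sketch: enumerate parametric building variants and filter by simple constraints.
from dataclasses import dataclass
from itertools import product

@dataclass
class Variant:
    width: float      # m
    depth: float      # m
    floors: int
    orientation: int  # degrees from north (would feed e.g. a solar analysis)

    @property
    def footprint(self) -> float:
        return self.width * self.depth

    @property
    def floor_area(self) -> float:
        return self.footprint * self.floors

    @property
    def cost(self) -> float:
        # Assumed cost model: fixed site works plus a rate per built square metre.
        return 250_000 + self.floor_area * 1_200

SITE_AREA = 900.0          # m^2, assumed plot
MAX_COVERAGE = 0.4         # max fraction of the site that may be built over
MAX_FLOORS = 6             # assumed height limit
BUDGET = 3_000_000.0       # assumed budget

variants = [Variant(w, d, f, o)
            for w, d, f, o in product([12, 15, 18], [10, 12], [3, 4, 5, 6, 7], [0, 90])]

feasible = [v for v in variants
            if v.footprint <= MAX_COVERAGE * SITE_AREA
            and v.floors <= MAX_FLOORS
            and v.cost <= BUDGET]

# Rank by usable floor area within the constraints (one possible quality criterion).
best = max(feasible, key=lambda v: v.floor_area)
print(f"{len(feasible)} feasible variants out of {len(variants)}; best: {best}")
```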

Keywords: architectural design, residential buildings, urban development, geodesic data, generative design, parametric models, workflow optimization

Procedia PDF Downloads 12
78 Unleashing the Power of Cerebrospinal System for a Better Computer Architecture

Authors: Lakshmi N. Reddi, Akanksha Varma Sagi

Abstract:

Studies on biomimetics are largely developed, deriving inspiration from natural processes in our objective world to develop novel technologies. Recent studies are diverse in nature, making their categorization quite challenging. Based on an exhaustive survey, we developed categorizations based either on the essential elements of nature - air, water, land, fire, and space - or on form/shape, functionality, and process. Such diverse studies as aircraft wings inspired by bird wings, a self-cleaning coating inspired by a lotus petal, wetsuits inspired by beaver fur, and search algorithms inspired by arboreal ant path networks lend themselves to these categorizations. Our categorizations of biomimetic studies allowed us to define a different dimension of biomimetics. This new dimension is not restricted to inspiration from the objective world. It is based on the premise that the biological processes observed in the objective world find their reflections in our human bodies in a variety of ways. For example, the lungs provide the most efficient example of liquid-gas phase exchange, the heart exemplifies a very efficient pumping and circulatory system, and the kidneys epitomize the most effective cleaning system. The main focus of this paper is to bring out the magnificence of the cerebrospinal system (CSS) insofar as it relates to our current computer architecture. In particular, the paper uses four key measures to analyze the differences between the CSS and human-engineered computational systems. These are adaptability, sustainability, energy efficiency, and resilience. We found that the cerebrospinal system reveals some important challenges in the development and evolution of our current computer architectures. In particular, the myriad ways in which the CSS is integrated with other systems and processes (circulatory, respiratory, etc.) offer useful insights on how human-engineered computational systems could be made more sustainable, energy-efficient, resilient, and adaptable. In our paper, we highlight the energy consumption differences between the CSS and our current computational designs. Apart from the obvious differences in materials used between the two, the systemic nature of how the CSS functions provides clues to enhance the life-cycles of our current computational systems. The rapid formation of and changes in the physiology of dendritic spines and their synaptic plasticity causing memory changes (e.g., long-term potentiation and long-term depression) allowed us to formulate differences in the adaptability and resilience of the CSS. In addition, the CSS is sustained by the integrative functions of various organs, and its robustness comes from its interdependence with the circulatory system. The paper documents and analyzes quantifiable differences between the two in terms of the four measures. Our analyses point out possibilities in the development of computational systems that are more adaptable, sustainable, energy-efficient, and resilient. It concludes with potential approaches for technological advancement through the creation of more interconnected and interdependent systems to replicate the effective operation of the cerebrospinal system.

Keywords: cerebrospinal system, computer architecture, adaptability, sustainability, resilience, energy efficiency

Procedia PDF Downloads 99
77 Enhancing Financial Security: Real-Time Anomaly Detection in Financial Transactions Using Machine Learning

Authors: Ali Kazemi

Abstract:

The digital evolution of financial services, while offering unprecedented convenience and accessibility, has also escalated the vulnerabilities to fraudulent activities. In this study, we introduce a distinct approach to real-time anomaly detection in financial transactions, aiming to fortify the defenses of banking and financial institutions against such threats. Utilizing unsupervised machine learning algorithms, specifically autoencoders and isolation forests, our research focuses on identifying irregular patterns indicative of fraud within transactional data, thus enabling immediate action to prevent financial loss. The data used in this study included the monetary value of each transaction, a crucial feature since fraudulent transactions may follow different amount distributions than legitimate ones; timestamps indicating when transactions occurred, since analyzing temporal patterns can reveal anomalies (e.g., unusual activity in the middle of the night); the sector or category of the merchant where the transaction occurred (retail, groceries, online services, etc.), since specific categories may be more prone to fraud; and the type of payment used (e.g., credit, debit, online payment systems), since different payment methods have varying risk levels associated with fraud. This dataset, anonymized to ensure privacy, reflects a wide array of transactions typical of a global banking institution, ranging from small-scale retail purchases to large wire transfers, embodying the diverse nature of potentially fraudulent activities. By engineering features that capture the essence of transactions, including normalized amounts and encoded categorical variables, we tailor our data to enhance model sensitivity to anomalies. The autoencoder model leverages its reconstruction error mechanism to flag transactions that deviate significantly from the learned normal pattern, while the isolation forest identifies anomalies based on their susceptibility to isolation from the dataset's majority. Our experimental results, validated through techniques such as k-fold cross-validation, are evaluated using precision, recall, and the F1 score alongside the area under the receiver operating characteristic (ROC) curve. Our models achieved an F1 score of 0.85 and a ROC AUC of 0.93, indicating high accuracy in detecting fraudulent transactions without excessive false positives. This study contributes to the academic discourse on financial fraud detection and provides a practical framework for banking institutions seeking to implement real-time anomaly detection systems. By demonstrating the effectiveness of unsupervised learning techniques in a real-world context, our research offers a pathway to significantly reduce the incidence of financial fraud, thereby enhancing the security and trustworthiness of digital financial services.
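
The two unsupervised detectors described above can be sketched on a synthetic transaction table as follows; the autoencoder is approximated here by a small bottleneck MLP trained to reconstruct its input, and all features, thresholds, and data are placeholders rather than the study's actual configuration.

```python
# Sketch: isolation forest and reconstruction-error detector on synthetic transactions.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder features: amount, hour-of-day, merchant-category code, payment type.
normal = rng.normal(loc=[50, 14, 3, 1], scale=[20, 4, 1, 0.5], size=(2000, 4))
fraud  = rng.normal(loc=[900, 3, 7, 2], scale=[200, 1, 1, 0.5], size=(20, 4))
X = StandardScaler().fit_transform(np.vstack([normal, fraud]))

# Isolation forest: anomalies are easier to isolate, so they receive label -1.
iso = IsolationForest(contamination=0.01, random_state=0).fit(X)
iso_flags = iso.predict(X) == -1

# Autoencoder-style detector: bottleneck MLP trained to reproduce X; transactions
# with the largest reconstruction error are flagged.
ae = MLPRegressor(hidden_layer_sizes=(8, 2, 8), max_iter=2000, random_state=0).fit(X, X)
recon_error = np.mean((ae.predict(X) - X) ** 2, axis=1)
ae_flags = recon_error > np.quantile(recon_error, 0.99)   # top 1% as anomalies

print("isolation forest flagged:", iso_flags.sum(), "| autoencoder flagged:", ae_flags.sum())
```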

Keywords: anomaly detection, financial fraud, machine learning, autoencoders, isolation forest, transactional data analysis

Procedia PDF Downloads 57
76 Mapping of Urban Micro-Climate in Lyon (France) by Integrating Complementary Predictors at Different Scales into Multiple Linear Regression Models

Authors: Lucille Alonso, Florent Renard

Abstract:

The characterization of the urban heat island (UHI) and its interactions with climate change and urban climates is a major research and public health issue, due to the increasing urbanization of the population. Addressing it requires better knowledge of the UHI and the micro-climate in urban areas, combining measurements and modelling. This study contributes to this topic by evaluating microclimatic conditions in dense urban areas of the Lyon Metropolitan Area (France) using a combination of traditionally used data, such as topography, together with LiDAR (Light Detection And Ranging) data, Landsat 8 and Sentinel satellite observations, and ground measurements by bike. These bicycle-based weather data collections are used to build the database of the variable to be modelled, the air temperature, over Lyon's hyper-center. This study aims to model the air temperature, measured during 6 mobile campaigns in Lyon in clear weather, using multiple linear regressions based on 33 explanatory variables. They are of various categories, such as meteorological parameters from remote sensing, topographic variables, vegetation indices, the presence of water, humidity, bare soil, buildings, radiation, urban morphology, or proximity and density with respect to various land uses (water surfaces, vegetation, bare soil, etc.). The acquisition sources are multiple and come from the Landsat 8 and Sentinel satellites, LiDAR points, and cartographic products downloaded from an open data platform in Greater Lyon. Regarding the presence of low, medium, and high vegetation, buildings, and ground, several buffers around these factors were tested (5, 10, 20, 25, 50, 100, 200 and 500 m). The buffers with the best linear correlations with air temperature are 5 m around the measurement points for ground and for low and medium vegetation, 50 m for buildings, and 100 m for high vegetation. The explanatory model of the dependent variable is obtained by multiple linear regression of the remaining explanatory variables (retaining variables with pairwise Pearson correlations |r| < 0.7 and VIF < 5) combined with a stepwise selection algorithm. Moreover, holdout cross-validation is performed (80% training, 20% testing), due to its ability to detect over-fitting of multiple regression, although multiple regression provides internal validation and randomization. Multiple linear regression explained, on average, 72% of the variance for the study days, with an average RMSE of only 0.20°C. Surface temperature is the most important variable in the model for estimating air temperature. Other recurrent variables include distance to subway stations, distance to water areas, NDVI, the digital elevation model, the sky view factor, average vegetation density, and building density. Changing urban morphology influences the city's thermal patterns. The thermal atmosphere in dense urban areas can only be analysed at the microscale in order to consider the local impact of trees, streets, and buildings. There is currently no network of fixed weather stations sufficiently deployed in central Lyon or in most major urban areas. Therefore, it is necessary to use mobile measurements, followed by modelling, to characterize the city's multiple thermal environments.
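
The modelling chain described above can be sketched as follows: collinear predictors are filtered by VIF, a multiple linear regression of air temperature on the remaining variables is fitted, and performance is checked on a 20% holdout. Predictor names and the random data are placeholders for the 33 variables used in the study.

```python
# Sketch: VIF filtering, multiple linear regression, and holdout validation.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "surface_temp": rng.normal(30, 4, n),      # e.g. from Landsat 8 (placeholder)
    "ndvi": rng.uniform(0.0, 0.8, n),
    "building_density": rng.uniform(0.0, 1.0, n),
    "sky_view_factor": rng.uniform(0.2, 1.0, n),
})
df["air_temp"] = (0.6 * df.surface_temp - 2.0 * df.ndvi
                  + 1.5 * df.building_density + rng.normal(0, 0.2, n))

def vif_series(X: pd.DataFrame) -> pd.Series:
    """VIF of each predictor, computed with an intercept included."""
    Xc = sm.add_constant(X)
    return pd.Series([variance_inflation_factor(Xc.values, i + 1)
                      for i in range(X.shape[1])], index=X.columns)

# Iteratively drop the worst predictor until all VIFs are below 5.
X = df.drop(columns="air_temp")
while (v := vif_series(X)).max() >= 5:
    X = X.drop(columns=v.idxmax())

X_tr, X_te, y_tr, y_te = train_test_split(X, df.air_temp, test_size=0.2, random_state=1)
model = sm.OLS(y_tr, sm.add_constant(X_tr)).fit()
pred = model.predict(sm.add_constant(X_te))
rmse = float(np.sqrt(np.mean((pred - y_te) ** 2)))
print(f"train R2 = {model.rsquared:.2f}, holdout RMSE = {rmse:.2f} °C")
```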

Keywords: air temperature, LIDAR, multiple linear regression, surface temperature, urban heat island

Procedia PDF Downloads 137
75 Integrative Omics-Portrayal Disentangles Molecular Heterogeneity and Progression Mechanisms of Cancer

Authors: Binder Hans

Abstract:

Cancer is no longer seen as solely a genetic disease where genetic defects such as mutations and copy number variations affect gene regulation and eventually lead to aberrant cell functioning, which can be monitored by transcriptome analysis. It has become obvious that epigenetic alterations represent a further important layer of (de-)regulation of gene activity. For example, aberrant DNA methylation is a hallmark of many cancer types, and methylation patterns have successfully been used to subtype cancer heterogeneity. Hence, unraveling the interplay between different omics levels such as genome, transcriptome and epigenome is indispensable for a mechanistic understanding of the molecular deregulation causing complex diseases such as cancer. This objective requires powerful downstream integrative bioinformatics methods as an essential prerequisite to discover the whole-genome mutational, transcriptome and epigenome landscapes of cancer specimens and to uncover cancer genesis, progression and heterogeneity. Basic challenges and tasks arise ‘beyond sequencing’ because of the large size of the data, their complexity, the need to search for hidden structures in the data, the need for knowledge mining to discover biological function, and the need for systems biology conceptual models to deduce developmental interrelations between different cancer states. These tasks are tightly related to cancer biology as an (epi-)genetic disease giving rise to aberrant genomic regulation under micro-environmental control and clonal evolution, which leads to heterogeneous cellular states. Machine learning algorithms such as self-organizing maps (SOM) represent one interesting option to tackle these bioinformatics tasks. The SOM method enables recognizing complex patterns in large-scale data generated by high-throughput omics technologies. It portrays molecular phenotypes by generating individualized, easy-to-interpret images of the data landscape in combination with comprehensive analysis options. Our image-based, reductionist machine learning methods provide one interesting perspective on how to deal with massive data in the study of complex diseases such as gliomas, melanomas and colon cancer at the molecular level. As an important new challenge, we address the combined portrayal of different omics data such as genome-wide genomic, transcriptomic and methylomic data. The integrative-omics portrayal approach is based on joint training of the data, and it provides separate personalized data portraits for each patient and data type, which can be analyzed by visual inspection as one option. The new method enables an integrative genome-wide view of the omics data types and the underlying regulatory modes. It is applied to high- and low-grade gliomas and to melanomas, where it disentangles transversal and longitudinal molecular heterogeneity in terms of distinct molecular subtypes and progression paths with prognostic impact.
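As an illustration of the portrayal idea, the following is a minimal sketch using the MiniSom package; it is not the authors' pipeline, and the grid size and synthetic data dimensions are assumptions for demonstration only.

```python
# Minimal sketch of SOM-based omics portrayal (assumes the MiniSom package; not the
# authors' implementation). Features (genes/probes) are clustered into map nodes, and
# each sample's "portrait" is one slice of the trained weight matrix.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
data = rng.normal(size=(5000, 40))                     # 5000 features x 40 samples (synthetic)
features = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)

grid = 30                                              # 30 x 30 map of "metagenes"
som = MiniSom(grid, grid, input_len=features.shape[1], sigma=3.0, learning_rate=0.5,
              random_seed=42)
som.random_weights_init(features)
som.train_random(features, num_iteration=20000)

weights = som.get_weights()                            # shape (grid, grid, n_samples)
portraits = np.moveaxis(weights, -1, 0)                # one 30x30 image per patient/sample
# each portrait can now be rendered (e.g. with matplotlib) and compared across subtypes
```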

Keywords: integrative bioinformatics, machine learning, molecular mechanisms of cancer, gliomas and melanomas

Procedia PDF Downloads 148
74 Development and Implementation of An "Electric Island" Monitoring Infrastructure for Promoting Energy Efficiency in Schools

Authors: Vladislav Grigorovitch, Marina Grigorovitch, David Pearlmutter, Erez Gal

Abstract:

The concept of an “electric island” refers to achieving a balance between the self-generation capability of each educational institution and its energy consumption demand. A photovoltaic (PV) solar system installed on the roofs of educational buildings is a common way to harvest the available solar energy and generate electricity for self-consumption, and even for feeding back to the grid. The main objective of this research is to develop and implement an “electric island” monitoring infrastructure for promoting energy efficiency in educational buildings. A microscale monitoring methodology will be developed to provide a platform that estimates energy consumption performance classified by rooms and subspaces, rather than the more common macroscale monitoring of the whole building. The monitoring platform will be established at the experimental sites, enabling estimation and further analysis of a variety of environmental and physical conditions. For each building, a separate measurement configuration will be applied, taking into account the specific requirements, restrictions, location and infrastructure issues. The direct results of the measurements will be analyzed to provide a deeper understanding of the impact of environmental conditions and sustainable construction standards, not only on the energy demand of public buildings, but also on the energy consumption habits of the children who study in those schools and of the educational and administrative staff responsible for providing thermal comfort conditions and a healthy studying atmosphere for the children. The monitoring methodology being developed in this research provides online access to real-time data of Interferential Therapy (IFTs) from any mobile phone or computer by simply browsing the dedicated website, offering powerful tools to policy makers for better decision making while developing PV production infrastructure to achieve “electric islands” in educational buildings. A detailed measurement configuration was technically designed based on the specific conditions and restrictions of each of the pilot buildings. The monitoring and analysis methodology includes a large variety of environmental parameters inside and outside the schools, to investigate the impact of environmental conditions both on the energy performance of the school and on the educational abilities of the children. Indoor measurements are mandatory to acquire energy consumption data, temperature, humidity, carbon dioxide and other air quality conditions in different parts of the building. In addition, we aim to study the users' awareness of energy considerations and thus the impact on their energy consumption habits. The monitoring of outdoor conditions is vital for the proper design of the off-grid energy supply system and validation of its sufficient capacity. The suggested outcomes of this research include: 1. designing both experimental sites to have PV production and storage capabilities; 2. developing an online information feedback platform that will provide consumer-dedicated information to academic researchers, municipality officials, educational staff and students; 3. designing an environmental work path for educational staff regarding optimal conditions and efficient hours for operating air conditioning, natural ventilation, closing of blinds, etc.

Keywords: sustainability, electric island, IOT, smart building

Procedia PDF Downloads 179
73 ESRA: An End-to-End System for Re-identification and Anonymization of Swiss Court Decisions

Authors: Joel Niklaus, Matthias Sturmer

Abstract:

The publication of judicial proceedings is a cornerstone of many democracies. It enables the court system to be held accountable by ensuring that justice is administered in accordance with the law. Equally important is privacy, as a fundamental human right (Article 12 of the Universal Declaration of Human Rights). Therefore, it is important that the parties (especially minors, victims, or witnesses) involved in these court decisions be anonymized securely. Today, the anonymization of court decisions in Switzerland is performed either manually or semi-automatically using primitive software. While much research has been conducted on anonymization for tabular data, the literature on anonymization for unstructured text documents is thin and virtually non-existent for court decisions. In 2019, it was shown that manual anonymization is not secure enough: in 21 of 25 attempted Swiss federal court decisions related to pharmaceutical companies, the pharmaceuticals and legal parties involved could be manually re-identified. This was achieved by linking the decisions with external databases using regular expressions. An automated re-identification system serves as an automated test for the safety of existing anonymizations and thus promotes the right to privacy. Manual anonymization is very expensive (recurring annual costs of over CHF 20M in Switzerland alone, according to one estimate). Consequently, many Swiss courts only publish a fraction of their decisions. An automated anonymization system reduces these costs substantially, freeing capacity to publish court decisions much more comprehensively. For the re-identification system, topic modeling with Latent Dirichlet Allocation (LDA) is used to cluster over 500K Swiss court decisions into meaningful related categories. A comprehensive knowledge base with publicly available data (such as social media, newspapers, government documents, geographical information systems, business registers, online address books, obituary portals, web archives, etc.) is constructed to serve as an information hub for re-identifications. For the actual re-identification, a general-purpose language model is fine-tuned on the respective part of the knowledge base for each category of court decisions separately. The input to the model is the court decision to be re-identified, and the output is a probability distribution over named entities constituting possible re-identifications. For the anonymization system, named entity recognition (NER) is used to recognize the tokens that need to be anonymized. Since the focus lies on Swiss court decisions in German, a corpus of Swiss legal texts will be built for training the NER model. The recognized named entities are replaced by the category determined by the NER model and an identifier to preserve context. This work is part of an ongoing research project conducted by an interdisciplinary research consortium. Both a legal analysis and the implementation of the proposed system design ESRA will be performed within the next three years. This study introduces the system design of ESRA, an end-to-end system for re-identification and anonymization of Swiss court decisions. Firstly, the re-identification system tests the safety of existing anonymizations and thus promotes privacy. Secondly, the anonymization system substantially reduces the costs of manual anonymization of court decisions and thus enables a more comprehensive publication practice.
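As an illustration of the first step of the re-identification system, the following is a minimal topic-modeling sketch with scikit-learn; it is not the ESRA codebase, and the toy corpus, topic count, and vectorizer settings are assumptions.

```python
# Minimal sketch (not the ESRA codebase): clustering court decisions into topic-based
# categories with Latent Dirichlet Allocation before per-category fine-tuning.
# The toy corpus, topic count, and vectorizer settings are illustrative assumptions.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

decisions = [
    "urteil betreffend arzneimittel zulassung ...",    # placeholder decision texts;
    "entscheid betreffend mietrecht kündigung ...",    # in practice ~500K documents
]

vectorizer = CountVectorizer(max_features=50_000)
counts = vectorizer.fit_transform(decisions)

lda = LatentDirichletAllocation(n_components=20, learning_method="online",
                                batch_size=4096, random_state=0)
doc_topics = lda.fit_transform(counts)                 # per-decision topic distribution
categories = doc_topics.argmax(axis=1)                 # dominant topic = category label

# inspect the top words of each topic to give the categories human-readable labels
terms = vectorizer.get_feature_names_out()
for k, component in enumerate(lda.components_):
    top_terms = [terms[i] for i in component.argsort()[-10:][::-1]]
    print(f"topic {k}: {top_terms}")
```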

Keywords: artificial intelligence, courts, legal tech, named entity recognition, natural language processing, privacy, topic modeling

Procedia PDF Downloads 148
72 Study of the Diaphragm Flexibility Effect on the Inelastic Seismic Response of Thin Wall Reinforced Concrete Buildings (TWRCB): A Purpose to Reduce the Uncertainty in the Vulnerability Estimation

Authors: A. Zapata, Orlando Arroyo, R. Bonett

Abstract:

Over the last two decades, the growing demand for housing in Latin American countries has led to the development of construction projects based on low- and medium-rise buildings with thin reinforced concrete walls. This system, known as Thin Wall Reinforced Concrete Buildings (TWRCB), uses walls with thicknesses from 100 to 150 millimetres, with flexural reinforcement formed by welded wire mesh (WWM) with diameters between 5 and 7 millimetres, arranged in one or two layers. These walls often have irregular structural configurations, including combinations of rectangular shapes. Experimental and numerical research conducted in regions where this structural system is commonplace indicates inherent weaknesses, such as limited ductility due to the WWM reinforcement and thin element dimensions. Because of its complexity, numerical analyses have relied on two-dimensional models that do not explicitly account for the floor system, even though it plays a crucial role in distributing seismic forces among the resisting elements; instead, these analyses assume a rigid diaphragm. For this purpose, two case-study buildings, representative of low-rise and mid-rise TWRCB in Colombia, were selected. The buildings were analyzed in OpenSees using the MVLEM-3D element for the walls and shell elements for the slabs, so that the effect of the coupling diaphragm is included in the nonlinear behaviour. Three cases are considered: a) models without a slab, b) models with rigid slabs, and c) models with flexible slabs. Incremental static (pushover) and nonlinear dynamic analyses were carried out using the set of 44 far-field ground motions from FEMA P-695, scaled by factors of 1.0 and 1.5 to assess the probability of collapse at the design basis earthquake (DBE) and the maximum considered earthquake (MCE), according to the sites and hazard zone of the archetypes in the Colombian NSR-10 code. Base shear capacity, maximum roof displacement, individual wall base shear demands, and probabilities of collapse were calculated to evaluate the effect of the absent, rigid, and flexible slab assumptions on the nonlinear behaviour of the archetype buildings. The pushover results show that the buildings exhibit an overstrength between 1.1 and 2 when the slab is modelled explicitly, depending on the plan configuration of the structural walls; additionally, the nonlinear behaviour without a slab is more conservative than when the slab is represented. Including the flexible slab in the analysis highlights the importance of considering the slab's contribution to the distribution of shear forces among structural elements according to their design strength and stiffness. The dynamic analyses revealed that including the slab reduces the collapse probability of this system, owing to lower displacements and deformations, enhancing the seismic performance and the safety of residents. Modelling the slab explicitly is therefore important to capture the real effect of coupling on the distribution of shear forces among the walls, to estimate the correct nonlinear behaviour of this system, and to assign adequate strength and stiffness to the elements in design so as to reduce the possibility of damage during an earthquake.
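As a simple post-processing illustration (independent of the authors' OpenSees scripts), the collapse probabilities at the two intensity levels can be estimated as the fraction of the 44 records whose peak response exceeds a collapse criterion; the 2.5% drift limit and the synthetic drift values below are assumptions, not results from the study.

```python
# Minimal sketch: estimating P(collapse) at DBE (scale 1.0) and MCE (scale 1.5) as the
# fraction of the 44 FEMA P-695 far-field records exceeding an assumed collapse drift.
# The 2.5% limit and the synthetic drift arrays are placeholders, not values from the study.
import numpy as np

COLLAPSE_DRIFT = 0.025                                  # assumed collapse drift ratio

def collapse_probability(peak_drifts) -> float:
    """Fraction of ground motions whose peak inter-storey drift reaches the collapse limit."""
    return float(np.mean(np.asarray(peak_drifts) >= COLLAPSE_DRIFT))

# in practice these arrays come from the 44 nonlinear dynamic analyses of each archetype
rng = np.random.default_rng(1)
peak_drift_dbe = rng.lognormal(mean=np.log(0.012), sigma=0.45, size=44)   # scale factor 1.0
peak_drift_mce = rng.lognormal(mean=np.log(0.019), sigma=0.45, size=44)   # scale factor 1.5

print(f"P(collapse | DBE) = {collapse_probability(peak_drift_dbe):.2f}")
print(f"P(collapse | MCE) = {collapse_probability(peak_drift_mce):.2f}")
```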

Keywords: thin wall reinforced concrete buildings, coupling slab, rigid diaphragm, flexible diaphragm

Procedia PDF Downloads 75
71 Selected Macrophyte Populations Promotes Coupled Nitrification and Denitrification Function in Eutrophic Urban Wetland Ecosystem

Authors: Rupak Kumar Sarma, Ratul Saikia

Abstract:

Macrophytes constitute a major functional group in eutrophic wetland ecosystems. As a key functional element of freshwater lakes, they play a crucial role in regulating various wetland biogeochemical cycles, as well as in maintaining biodiversity at the ecosystem level. The large, carbon-rich underground biomass of macrophyte populations may harbour diverse microbial communities with significant potential for maintaining different biogeochemical cycles. The present investigation was designed to study the macrophyte-microbe interaction in coupled nitrification and denitrification, considering Deepor Beel Lake (a Ramsar conservation site) of North East India as a model eutrophic system. Highly eutrophic sites of Deepor Beel were selected based on sediment oxygen demand and inorganic phosphorus and nitrogen (P&N) concentrations. Sediment redox potential and lake depth were chosen as the benchmarks for collecting the plant and sediment samples. The average maximum depth in winter (January 2016) and summer (July 2016) was recorded as 20 ft (6.096 m) and 35 ft (10.668 m), respectively. Both sampling depth and sampling season had a distinct effect on the variation in macrophyte community composition. Overall, the dominant macrophyte populations in the lake were Nymphaea alba, Hydrilla verticillata, Utricularia flexuosa, Vallisneria spiralis, Najas indica, Monochoria hastaefolia, Trapa bispinosa, Ipomea fistulosa, Hygrorhiza aristata, Polygonum hydropiper, Eichhornia crassipes and Euryale ferox. The variation in major sediment physicochemical parameters correlated distinctly with changes in macrophyte community composition. Quantitative estimation revealed an almost even accumulation of nitrate and nitrite in the sediment samples dominated by the plant species Eichhornia crassipes, Nymphaea alba, Hydrilla verticillata, Vallisneria spiralis, Euryale ferox and Monochoria hastaefolia, which might signify a stable nitrification and denitrification process at the sites dominated by these aquatic plants. This was further examined by a systematic analysis of microbial populations through culture-dependent and culture-independent approaches. The culture-dependent bacterial community study revealed higher populations of nitrifiers and denitrifiers in the sediment samples dominated by the six macrophyte species. However, the culture-independent study, based on bacterial 16S rDNA V3-V4 metagenome sequencing, revealed overall similar bacterial phyla in all the sediment samples collected during the study. Thus, the nitrifying and denitrifying molecular markers may be unevenly distributed among the sediment samples collected during the investigation. The diversity and abundance of these markers in the sediment samples are under investigation. Thus, this initial investigation provides a basis for further screening the role of different aquatic plant functional types in microorganism-mediated nitrogen cycle coupling.

Keywords: denitrification, macrophyte, metagenome, microorganism, nitrification

Procedia PDF Downloads 174
70 Influence of Ride Control Systems on the Motions Response and Passenger Comfort of High-Speed Catamarans in Irregular Waves

Authors: Ehsan Javanmardemamgheisi, Javad Mehr, Jason Ali-Lavroff, Damien Holloway, Michael Davis

Abstract:

During the last decades, a growing interest in faster and more efficient waterborne transportation has led to the development of high-speed vessels for both commercial and military applications. To satisfy this global demand, a wide variety of arrangements of high-speed craft have been proposed by designers. Among them, high-speed catamarans have proven themselves to be a suitable Roll-on/Roll-off configuration for carrying passengers and cargo due to their widely spaced demi-hulls, wide deck zone, and high ratio of deadweight to displacement. To improve passenger comfort and crew workability and enhance the operability and performance of high-speed catamarans, mitigating the severity of motions and structural loads using Ride Control Systems (RCS) is essential. In this paper, a set of towing tank tests was conducted on a 2.5 m scaled model of a 112 m Incat Tasmania high-speed catamaran in irregular head seas to investigate the effect of different ride control algorithms, including linear and nonlinear versions of heave control, pitch control, and local control, on the motion responses and passenger comfort of the full-scale ship. The RCS included a centre bow-fitted T-Foil and two transom-mounted stern tabs. All the experiments were conducted at the Australian Maritime College (AMC) towing tank at a model speed of 2.89 m/s (37 knots full scale), a modal period of 1.5 sec (10 sec full scale) and two significant wave heights of 60 mm and 90 mm, representing full-scale wave heights of 2.7 m and 4 m, respectively. Spectral analyses were performed using Welch’s power spectral density method on the vertical motion time records of the catamaran model to calculate heave and pitch Response Amplitude Operators (RAOs). Then, noting that passenger discomfort arises from vertical accelerations and that the vertical accelerations vary at different longitudinal locations within the passenger cabin due to the variations in amplitude and relative phase of the pitch and heave motions, the vertical accelerations were calculated at three longitudinal locations (LCG, T-Foil, and stern tabs). Finally, frequency-weighted Root Mean Square (RMS) vertical accelerations were calculated to estimate the Motion Sickness Dose Value (MSDV) of the ship based on ISO 2631 recommendations. It was demonstrated that in small seas, implementing a nonlinear pitch control algorithm reduces the peak pitch motions by 41%, the vertical accelerations at the forward location by 46%, and motion sickness at the forward position by around 20%, which provides great potential for further improvement in passenger comfort, crew workability, and operability of high-speed catamarans.
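A minimal sketch of this style of post-processing, using SciPy's Welch estimator and the ISO 2631 definition MSDV = a_rms·√T, is shown below; the sampling rate, the synthetic heave record, and the use of the raw (unweighted) vertical acceleration are simplifying assumptions, not details from the experiments.

```python
# Minimal sketch (not the AMC processing code): Welch PSD of a model-scale heave record and
# a motion sickness dose value MSDV = a_rms * sqrt(T). The sampling rate, the synthetic
# signal, and the use of the raw vertical acceleration in place of the ISO 2631
# frequency-weighted acceleration are illustrative simplifications.
import numpy as np
from scipy.signal import welch

fs = 200.0                                              # sampling frequency [Hz], assumed
t = np.arange(0.0, 300.0, 1.0 / fs)                     # 300 s record
heave = 0.02 * np.sin(2 * np.pi * t / 1.5) + 0.008 * np.sin(2 * np.pi * t / 3.0)  # [m]

freq, psd = welch(heave, fs=fs, nperseg=4096)           # heave spectrum for RAO estimation
peak_freq = freq[np.argmax(psd)]                        # dominant encounter frequency [Hz]

accel = np.gradient(np.gradient(heave, 1.0 / fs), 1.0 / fs)   # vertical acceleration [m/s^2]
a_rms = np.sqrt(np.mean(accel ** 2))                    # would be frequency-weighted in practice
msdv = a_rms * np.sqrt(t[-1])                           # ISO 2631 motion sickness dose value
print(f"peak frequency: {peak_freq:.2f} Hz, RMS accel: {a_rms:.3f} m/s^2, MSDV: {msdv:.2f}")
```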

Keywords: high-speed catamarans, ride control system, response amplitude operators, vertical accelerations, motion sickness, irregular waves, towing tank tests

Procedia PDF Downloads 84
69 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction

Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal

Abstract:

Traditionally, monsoon forecasts have encountered many difficulties that stem from numerous issues such as lack of adequate upper air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and each of which carries a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, time, and across different variables from different models. This is the basic concept behind the multi-model superensemble, which comprises a training and a forecast phase. The training phase learns from the recent past performances of the models and is used to determine statistical weights from least-squares minimization via a simple multiple regression. These weights are then used in the forecast phase. The superensemble forecasts carry the highest skill compared to the simple ensemble mean, the bias-corrected ensemble mean, and the best of the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters, reducing direct model output errors. Although it can be applied successfully to continuous parameters such as temperature, humidity, wind speed, and mean sea level pressure, in this paper the approach is applied to rainfall, a parameter that is difficult to handle with standard post-processing methods due to its high temporal and spatial variability. The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., European Centre for Medium Range Weather Forecasts (Europe), National Center for Environmental Prediction (USA), China Meteorological Administration (China), Canadian Meteorological Centre (Canada) and U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), which is one of the most complete data sets available. The novel approaches include a dynamical model selection approach, in which the superior models are selected from the participating member models at each grid point and for each forecast step in the training period. A multi-model superensemble trained on similar conditions is also discussed in the present study, based on the assumption that training with similar conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods available in the literature that incorporate a 'neighborhood' around each grid point, to allow for spatial error or uncertainty, have also been tested in combination with the above-mentioned approaches. The comparison of these schemes with respect to the observations verifies that the newly developed approaches provide a more unified and skillful prediction of the summer monsoon (viz. June to September) rainfall compared to the conventional multi-model approach and the member models.
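The training-phase regression can be illustrated with a minimal sketch of a Krishnamurti-style superensemble at a single grid point; the synthetic data, the 50/10-day split, and the anomaly formulation shown here are assumptions for demonstration only.

```python
# Minimal sketch of the superensemble training/forecast phases at one grid point
# (array shapes and values are illustrative; the study uses 5 TIGGE GCMs).
import numpy as np

def train_weights(forecasts_train: np.ndarray, obs_train: np.ndarray):
    """Least-squares weights for member-model anomalies over the training period."""
    model_means = forecasts_train.mean(axis=0)             # per-model training climatology
    obs_mean = obs_train.mean()
    anomalies = forecasts_train - model_means               # shape (n_days, n_models)
    weights, *_ = np.linalg.lstsq(anomalies, obs_train - obs_mean, rcond=None)
    return weights, model_means, obs_mean

def superensemble_forecast(forecasts_new, weights, model_means, obs_mean):
    """Combine member forecasts with the trained weights (forecast phase)."""
    return obs_mean + (forecasts_new - model_means) @ weights

# toy example: 60 days, 5 member models, last 10 days held out as the forecast phase
rng = np.random.default_rng(3)
truth = rng.gamma(2.0, 5.0, size=60)                        # "observed" rainfall [mm/day]
members = truth[:, None] + rng.normal(0, 3, size=(60, 5)) + np.array([2, -1, 0.5, 3, -2])
w, mm, om = train_weights(members[:50], truth[:50])
pred = superensemble_forecast(members[50:], w, mm, om)
print("superensemble RMSE:", np.sqrt(np.mean((pred - truth[50:]) ** 2)))
```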

Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction

Procedia PDF Downloads 139
68 Towards an Effective Approach for Modelling near Surface Air Temperature Combining Weather and Satellite Data

Authors: Nicola Colaninno, Eugenio Morello

Abstract:

The urban environment affects local-to-global climate and, in turn, suffers global warming phenomena, with worrying impacts on human well-being, health, and social and economic activities. Physical and morphological features of the built-up space affect urban air temperature locally, causing the urban environment to be warmer than the surrounding rural areas. This occurrence, typically known as the Urban Heat Island (UHI), is normally assessed by means of air temperature from fixed weather stations and/or traverse observations, or based on remotely sensed Land Surface Temperatures (LST). The information provided by ground weather stations is key for assessing local air temperature. However, the spatial coverage is normally limited due to the low density and uneven distribution of the stations. Although different interpolation techniques such as Inverse Distance Weighting (IDW), Ordinary Kriging (OK), or Multiple Linear Regression (MLR) are used to estimate air temperature from observed points, such an approach may not effectively reflect the real climatic conditions at an interpolated point. Quantifying the local UHI over extensive areas based on weather station observations only is not practicable. Alternatively, the use of thermal remote sensing has been widely investigated based on LST. Data from Landsat, ASTER, or MODIS have been extensively used. Indeed, LST has an indirect but significant influence on air temperatures. However, high-resolution near-surface air temperature (NSAT) is currently difficult to retrieve. Here we have experimented with Geographically Weighted Regression (GWR) as an effective approach to enable NSAT estimation by accounting for the spatial non-stationarity of the phenomenon. The model combines on-site measurements of air temperature from fixed weather stations and satellite-derived LST. The approach is structured in two main steps. First, a GWR model has been set up to estimate NSAT at low resolution, by combining air temperature from discrete observations retrieved by weather stations (dependent variable) and the LST from satellite observations (predictor). At this step, MODIS data from the Terra satellite, at 1 km spatial resolution, have been employed. Two time periods are considered according to the satellite overpass times, i.e., 10:30 am and 9:30 pm. Afterwards, the results have been downscaled to 30 m spatial resolution by setting up a GWR model between the previously retrieved near-surface air temperature (dependent variable) and, as predictors, the albedo derived from Landsat multispectral data and the Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 m. The area under investigation is the Metropolitan City of Milan, which covers an area of approximately 1,575 km² and encompasses a population of over 3 million inhabitants. Both models, low- (1 km) and high-resolution (30 m), have been validated by cross-validation relying on indicators such as R², Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). All the employed indicators give evidence of highly efficient models. In addition, an alternative network of weather stations, available for the City of Milano only, has been employed for testing the accuracy of the predicted temperatures, giving an RMSE of 0.6 and 0.7 for daytime and night-time, respectively.
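A minimal, pure-NumPy sketch of the locally weighted regression underlying GWR is given below; the Gaussian kernel, the fixed bandwidth value, and the array names are assumptions, and the authors' actual implementation and bandwidth selection may differ.

```python
# Minimal GWR sketch (pure NumPy, not the authors' implementation): locally weighted
# regression of station air temperature on satellite LST with a Gaussian distance kernel.
# The bandwidth value and array names are illustrative assumptions.
import numpy as np

def gwr_predict(coords_obs, y_obs, X_obs, coords_pred, X_pred, bandwidth=5000.0):
    """Fit a weighted least-squares model at each prediction point and return predictions."""
    X_obs = np.column_stack([np.ones(len(X_obs)), X_obs])        # add intercept
    X_pred = np.column_stack([np.ones(len(X_pred)), X_pred])
    preds = np.empty(len(coords_pred))
    for i, (c, x) in enumerate(zip(coords_pred, X_pred)):
        d = np.linalg.norm(coords_obs - c, axis=1)                # distances [m]
        w = np.exp(-0.5 * (d / bandwidth) ** 2)                   # Gaussian kernel weights
        Xw = X_obs * w[:, None]
        beta = np.linalg.solve(Xw.T @ X_obs, Xw.T @ y_obs)        # local coefficients
        preds[i] = x @ beta                                       # local air-temperature estimate
    return preds

# usage sketch with hypothetical arrays: station coordinates/temperatures, LST sampled at
# the stations, and the target grid (coords_grid, lst_grid):
# t_air_grid = gwr_predict(coords_stations, t_air_stations, lst_stations, coords_grid, lst_grid)
```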

Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing

Procedia PDF Downloads 195
67 Is Materiality Determination the Key to Integrating Corporate Sustainability and Maximising Value?

Authors: Ruth Hegarty, Noel Connaughton

Abstract:

Sustainability reporting has become a priority for many global multinational companies. This is associated with ever-increasing expectations from key stakeholders for companies to be transparent about their strategies, activities and management with regard to sustainability issues. The Global Reporting Initiative (GRI) encourages reporters to provide information only on the issues that are really critical in order to achieve the organisation’s goals for sustainability and manage its impact on the environment and society. A key challenge for most reporting organisations is how to identify relevant issues for sustainability reporting and prioritise those material issues in accordance with company and stakeholder needs. A recent study indicates that most of the largest companies listed on the world’s stock exchanges are failing to provide data on key sustainability indicators such as employee turnover, energy, greenhouse gas emissions (GHGs), injury rate, pay equity, waste and water. This paper takes an in-depth look at the approaches used by a select number of international sustainability-leading corporates to identify key sustainability issues. The research methodology involves performing a detailed analysis of the sustainability report content of up to 50 companies listed on the 2014 Dow Jones Sustainability Indices (DJSI). The most recent sustainability report content found on the GRI Sustainability Disclosure Database is then compared with 91 GRI Specific Standard Disclosures and a small number of GRI Standard Disclosures. Preliminary research indicates significant gaps in the information disclosed in corporate sustainability reports versus the indicator content specified in the GRI Content Index. The following outlines some of the key findings to date. Most companies made a partial disclosure with regard to the Economic indicators of climate change risks and infrastructure investments, but did not focus on the associated negative impacts. The top Environmental indicators disclosed were energy consumption and reductions, GHG emissions, water withdrawals, waste and compliance. The lowest rates of indicator disclosure included biodiversity, water discharge, mitigation of environmental impacts of products and services, transport, environmental investments, screening of new suppliers and supply chain impacts. The top Social indicators disclosed were new employee hires, rates of injury, freedom of association in operations, child labour and forced labour. Lower disclosure rates were reported for employee training, composition of governance bodies and employees, political contributions, corruption and fines for non-compliance. The reporting on most other Social indicators was found to be poor. In addition, most companies give only a brief explanation of how material issues are defined, identified and ranked. Data on the identification of key stakeholders and on the degree and nature of engagement for determining issues and their weightings is also lacking. Generally, little to no data is provided on the algorithms used to score an issue. Research indicates that most companies lack a rigorous and thorough methodology to systematically determine the material issues of sustainability reporting in accordance with company and stakeholder needs.

Keywords: identification of key stakeholders, material issues, sustainability reporting, transparency

Procedia PDF Downloads 307
66 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads

Authors: Gaurav Kumar Sinha

Abstract:

In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem. The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance. The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures. The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times. Furthermore, the study examines strategies for efficient data processing across multi-cloud environments. It acknowledges that big data processing requires distributed and parallel computing capabilities that span across cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows. Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy. The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance. Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market.

Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies

Procedia PDF Downloads 68
65 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations

Authors: Zhao Gao, Eran Edirisinghe

Abstract:

The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task for most criminal investigations. The criminal investigation system employs specifically trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. With the advancement of Deep Learning, Recurrent Neural Networks (RNN) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GAN) have also proven to be very effective in image generation. In this study, a trained GAN conditioned on textual features, such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map the features encoded from the verbal description onto corresponding facial features. With this, it becomes possible to generate many reasonably accurate alternatives which the witness can use to identify a suspect. This reduces subjectivity in decision making both by the eyewitness and the artist, while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus improving response times and mitigating potentially malicious human intervention. With the publicly available 'CelebFaces Attributes Dataset' (CelebA), additionally supplemented with verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture in order to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images. Rather than the grid search method, a metaheuristic search based on genetic algorithms is applied to evolve the network with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute force approach. In addition to the CelebA training database, further novel test cases are supplied to the network for evaluation. Witness reports detailing criminals, from Interpol or other law enforcement agencies, are sampled on the network. Using the descriptions provided, samples are generated and compared with the ground truth images of a criminal in order to calculate the similarities. Two factors are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). High scores on these performance metrics would demonstrate the accuracy of the approach, supporting its use as an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the ratio of criminal cases that can ultimately be resolved using eyewitness information gathering.
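A minimal PyTorch sketch of the text-conditioned generator idea is given below; the vocabulary size, layer dimensions, and 64x64 output resolution are assumptions, and the actual architecture, training procedure, and genetic hyperparameter search are not reproduced here.

```python
# Minimal sketch of a text-conditioned generator (not the authors' network): an RNN encodes
# the verbal description, and a GAN generator conditioned on that encoding synthesizes a
# 64x64 face. Vocabulary size, dimensions, and layer counts are illustrative assumptions.
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, tokens):                      # tokens: (batch, seq_len) int64
        _, h = self.rnn(self.embed(tokens))
        return h[-1]                                # (batch, hidden_dim) description encoding

class Generator(nn.Module):
    def __init__(self, noise_dim=100, text_dim=256):
        super().__init__()
        self.fc = nn.Linear(noise_dim + text_dim, 512 * 4 * 4)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),          # 64x64 RGB face
        )

    def forward(self, z, text_code):
        x = self.fc(torch.cat([z, text_code], dim=1)).view(-1, 512, 4, 4)
        return self.net(x)

# usage: encode one tokenized description and sample several candidate composites
encoder, gen = TextEncoder(), Generator()
tokens = torch.randint(0, 5000, (1, 20))            # placeholder token ids for one description
z = torch.randn(8, 100)                              # 8 alternative composites
faces = gen(z, encoder(tokens).expand(8, -1))        # (8, 3, 64, 64)
```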

Keywords: RNN, GAN, NLP, facial composition, criminal investigation

Procedia PDF Downloads 162
64 Design and Construction of a Home-Based, Patient-Led, Therapeutic, Post-Stroke Recovery System Using Iterative Learning Control

Authors: Marco Frieslaar, Bing Chu, Eric Rogers

Abstract:

Stroke is a devastating illness that is the second biggest cause of death in the world (after heart disease). Where it does not kill, it leaves survivors with debilitating sensory and physical impairments that not only seriously harm their quality of life, but also cause a high incidence of severe depression. It is widely accepted that early intervention is essential for recovery, but current rehabilitation techniques largely favor hospital-based therapies, which have restricted access, require expensive specialist equipment, and tend to side-step the emotional challenges. In addition, there is insufficient funding available to provide the long-term assistance that is required. As a consequence, recovery rates are poor. The relatively unexplored solution is to develop therapies that can be harnessed in the home and are formulated from technologies that already exist in everyday life. This would empower individuals to take control of their own improvement and provide choice in terms of when and where they feel best able to undertake their own healing. This research seeks to identify how effective post-stroke rehabilitation therapy can be applied to upper limb mobility within the physical context of a home rather than a hospital. This is being achieved through the design and construction of an automation scheme, based on iterative learning control and the Riener muscle model, that has the ability to adapt to the user, react to their level of fatigue, and provide tangible physical recovery. It utilizes a SMART phone and laptop to construct an iterative learning control (ILC) system that monitors upper arm movement in three dimensions as a series of exercises is undertaken. The equipment generates functional electrical stimulation to assist in muscle activation and thus improve directional accuracy. In addition, it monitors speed, accuracy, areas of motion weakness and similar parameters to create a performance index that can be compared over time and extrapolated to establish an independent and objective assessment scheme, plus an approximate prediction of the final outcome. To further extend its assessment capabilities, nerve conduction velocity readings are taken by the software between the shoulder and hand muscles. These are utilized to measure the speed of neural signal transfer along the arm, and over time an online indication of regeneration levels can be obtained. This will establish whether or not sufficient training intensity is being achieved even before perceivable movement dexterity is observed. The device also provides the option to connect to other users, via the internet, so that the patient can avoid feelings of isolation and can undertake movement exercises together with others in a similar position. This should not only encourage participation in rehabilitation, but also provide the potential for an emotional support network. It is intended that this approach will extend the availability of stroke recovery options, enable ease of access at a low cost, reduce susceptibility to depression and, through these endeavors, enhance the overall recovery success rate.
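The core iterative learning control update can be illustrated with a minimal sketch; the first-order surrogate plant (standing in for the Riener muscle model), the learning gain, and the single-axis simplification are assumptions for demonstration only.

```python
# Minimal sketch of the iterative learning control idea behind the system (not the device
# firmware): the stimulation profile for a repeated reach exercise is updated after each
# repetition from the tracking error, u_{k+1}(t) = u_k(t) + L * e_k(t+1).
import numpy as np

T, dt = 200, 0.01
t = np.arange(T) * dt
reference = np.sin(np.pi * t / t[-1])                # desired arm-angle trajectory (one reach)

def arm_response(u):
    """Crude first-order surrogate for FES-driven arm dynamics (stand-in for the Riener model)."""
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = y[k - 1] + dt * (-2.0 * y[k - 1] + 1.5 * u[k - 1])
    return y

u = np.zeros(T)                                      # initial stimulation profile
learning_gain = 15.0                                 # assumed ILC gain L
for trial in range(20):                              # one ILC update per exercise repetition
    y = arm_response(u)
    error = reference - y
    u = u + learning_gain * np.append(error[1:], 0.0)   # shift error by one sample
    print(f"trial {trial:2d}: RMS tracking error = {np.sqrt(np.mean(error ** 2)):.4f}")
```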

Keywords: home-based therapy, iterative learning control, Riener muscle model, SMART phone, stroke rehabilitation

Procedia PDF Downloads 264
63 Ordered Mesoporous Carbons of Different Morphology for Loading and Controlled Release of Active Pharmaceutical Ingredients

Authors: Aleksander Ejsmont, Aleksandra Galarda, Joanna Goscianska

Abstract:

Smart porous carriers with a defined structure and physicochemical properties are required for releasing a therapeutic drug with precise control of delivery time and location in the body. Due to their non-toxicity, ordered structure, and chemical and thermal stability, mesoporous carbons can be considered modern carriers for active pharmaceutical ingredients (APIs) whose effectiveness requires frequent dosing regimens. Such an API-carrier system, if programmed precisely, may stabilize the pharmaceutical and increase its dissolution, leading to enhanced bioavailability. The substance conjugated with the material, through its prior adsorption, can later be successfully applied internally to the organism, as well as externally if the API release is feasible under these conditions. In the present study, ordered mesoporous carbons of different morphologies and structures, prepared by the hard template method, were applied as carriers in the adsorption and controlled release of active pharmaceutical ingredients. In the first stage, the carbon materials were synthesized and functionalized with carboxylic groups, by chemical oxidation using ammonium persulfate solution, and with amine groups. The materials obtained were thoroughly characterized with respect to morphology (scanning electron microscopy), structure (X-ray diffraction, transmission electron microscopy), characteristic functional groups (FT-IR spectroscopy), acid-base nature of surface groups (Boehm titration), parameters of the porous structure (low-temperature nitrogen adsorption) and thermal stability (TG analysis). This was followed by a series of adsorption and release tests with paracetamol, benzocaine, and losartan potassium. Drug release experiments were performed in simulated gastric fluid of pH 1.2 and in phosphate buffer of pH 7.2 or 6.8 at 37.0 °C. The XRD patterns in the small-angle range and TEM images revealed that functionalization of mesoporous carbons with carboxylic or amine groups leads to decreased ordering of their structure. Moreover, the modification caused a considerable reduction of the specific surface area and pore volume of the carbons, but it simultaneously changed their acid-base properties. Mesoporous carbon materials exhibit different morphologies, which affect the host-guest interactions during the adsorption of active pharmaceutical ingredients. All mesoporous carbons show high adsorption capacity towards the drugs. The sorption capacity of the materials is mainly affected by the BET surface area and the structure/size matching between adsorbent and adsorbate. The selected APIs are linked to the surface of the carbon materials mainly by hydrogen bonds, van der Waals forces, and electrostatic interactions. The release behavior of an API is highly dependent on the physicochemical properties of the mesoporous carbons. The release rate of the APIs could be regulated by the introduction of functional groups and by changing the pH of the receptor medium. Acknowledgments: This research was supported by the National Science Centre, Poland (project SONATA-12 no: 2016/23/D/NZ7/01347).

Keywords: ordered mesoporous carbons, sorption capacity, drug delivery, carbon nanocarriers

Procedia PDF Downloads 176