Search results for: multivariate time series data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 37840

35620 Observing Upin and Ipin Animation Roles in Early Childhood Education

Authors: Juhanita Jiman

Abstract:

Malaysia is a unique country with a multifaceted society, rich in beautiful cultural values. It has taken a long process of assimilation for Malaysia's national identity to emerge. The Malaysian government has been working hard for centuries to keep its people together in harmony. Cultural identity has been identified as the ‘container’ that brings Malaysians together. The uniqueness of Malaysian cultures can actually be exploited for the benefit of the nation. However, this unique culture is somehow being threatened by imported foreign values. If not closely monitored, these foreign influences can do more harm than good. This paper aims to study elements in the Upin and Ipin animation series and investigate how this series could help to educate local children with good morals and behaviour without being too serious and sententious. Upin and Ipin was chosen as a case study to investigate the effectiveness of animation as a medium of communication to promote positive values amongst pre-school children. A purposive sampling method was employed to determine the sample; hence, pre-school children from the Putrajaya Presint 9(2) school were chosen to take part in this study. The findings of this study offer positive suggestions on how animation programmes shown on TV can play significant roles in children's social development and inculcate good moral behaviour as well as social skills among children and the people around them.

Keywords: animation characters, children informal education, foreign influences, moral values

Procedia PDF Downloads 162
35619 The Use of Electrical Resistivity Measurement, Cracking Test and Ansys Simulation to Predict Concrete Hydration Behavior and Crack Tendency

Authors: Samaila Bawa Muazu

Abstract:

The hydration process, crack potential and setting time of concrete grades C30, C40 and C50 were separately monitored using a non-contact electrical resistivity apparatus, a novel plastic ring mould and the penetration resistance method, respectively. The results show that C30 had the highest resistivity at the beginning, until reaching the acceleration point, when C50 accelerated and overtook the others; this period corresponds to its final setting time range. From the resistivity derivative curve, the hydration process can be divided into dissolution, induction, acceleration and deceleration periods. The restrained shrinkage crack and setting time tests demonstrated the earliest cracking and setting time for C50; therefore, this method conveniently and rapidly determines the concrete’s crack potential. The inflection time (ti) and the final setting time (tf) were obtained and used, together with the cracking time, to develop mathematical models for the prediction of the concrete’s cracking age for the range being considered. Finally, ANSYS numerical simulations support the experimental findings in terms of the earliest crack age of C50 and the crack location: the highest stress concentration is always beneath the artificially introduced expansion joint of C50.

Keywords: concrete hydration, electrical resistivity, restrained shrinkage crack, setting time, simulation

Procedia PDF Downloads 189
35618 Optimal Data Selection in Non-Ergodic Systems: A Tradeoff between Estimator Convergence and Representativeness Errors

Authors: Jakob Krause

Abstract:

The past financial crisis has shown that contemporary risk management models provide an unjustified sense of security and fail miserably in situations in which they are needed the most. In this paper, we start from the assumption that risk is a notion that changes over time and therefore past data points only have limited explanatory power for the current situation. Our objective is to derive the optimal amount of representative information by optimizing between the two adverse forces of estimator convergence, incentivizing us to use as much data as possible, and the aforementioned non-representativeness doing the opposite. In this endeavor, the cornerstone assumption of having access to identically distributed random variables is weakened and substituted by the assumption that the law of the data generating process changes over time. Hence, in this paper, we give a quantitative theory on how to perform statistical analysis in non-ergodic systems. As an application, we discuss the impact of a paragraph in the last iteration of proposals by the Basel Committee on Banking Regulation. We start from the premise that the severity of assumptions should correspond to the robustness of the system they describe. Hence, in the formal description of physical systems, the level of assumptions can be much higher. It follows that every concept that is carried over from the natural sciences to economics must be checked for its plausibility in the new surroundings. Most of probability theory has been developed for the analysis of physical systems and is based on the independent and identically distributed (i.i.d.) assumption. In economics, both parts of the i.i.d. assumption are inappropriate. However, only dependence has, so far, been weakened to a sufficient degree. In this paper, an appropriate class of non-stationary processes is used, and their law is tied to a formal object measuring representativeness. Subsequently, that data set is identified which, on average, minimizes the estimation error stemming from both insufficient and non-representative data. Applications are far reaching in a variety of fields. In the paper itself, we apply the results in order to analyze a paragraph in the Basel 3 framework on banking regulation with severe implications for financial stability. Beyond the realm of finance, other potential applications include the reproducibility crisis in the social sciences (but not in the natural sciences) and modeling limited understanding and learning behavior in economics.
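The tradeoff at the core of the abstract can be summarized schematically as follows (an illustrative formulation with assumed symbols, not the authors' exact objective): the optimal number n* of past observations balances the estimator convergence error, which decreases as more data are used, against the representativeness error, which grows as older observations drawn from a different law enter the sample:

\[
n^{*} = \arg\min_{n}\; \Big( \varepsilon_{\mathrm{conv}}(n) + \varepsilon_{\mathrm{rep}}(n) \Big),
\qquad \varepsilon_{\mathrm{conv}}(n) \text{ decreasing in } n,\quad \varepsilon_{\mathrm{rep}}(n) \text{ increasing in } n.
\]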

Keywords: banking regulation, non-ergodicity, risk management, semimartingale modeling

Procedia PDF Downloads 128
35617 A Procedure for Post-Earthquake Damage Estimation Based on Detection of High-Frequency Transients

Authors: Aleksandar Zhelyazkov, Daniele Zonta, Helmut Wenzel, Peter Furtner

Abstract:

In the current research, structural health monitoring is considered for addressing the critical issue of post-earthquake damage detection. A non-standard approach for damage detection via acoustic emission is presented: acoustic emissions are monitored in the low frequency range (up to 120 Hz). Such emissions are termed high-frequency transients. Further, a damage indicator defined as the Time-Ratio Damage Indicator is introduced. The indicator relies on time-instance measurements of damage initiation and deformation peaks. Based on the time-instance measurements, a procedure for estimation of the maximum drift ratio is proposed. Monitoring data are used from a shaking-table test of a full-scale reinforced concrete bridge pier. Damage of the experimental column is successfully detected and the proposed damage indicator is calculated.

Keywords: acoustic emission, damage detection, shaking table test, structural health monitoring

Procedia PDF Downloads 212
35616 Analysis of Commercial Cow and Camel Milk by Nuclear Magnetic Resonance

Authors: Lucia Pappalardo, Sara Abdul Majid Azzam

Abstract:

Camel milk is widely consumed by people living in arid areas of the world, where it is also known for its potential therapeutic and medical properties. Indeed, it has been used as a treatment for several diseases such as tuberculosis, dropsy, asthma, jaundice and leishmaniasis in India, Sudan and some parts of Russia. A wealth of references is available in the literature for the composition of milk from different dairy animals such as cows, goats and sheep. Camel milk, instead, has not been extensively studied, despite its nutritional value. In this study, commercial cow and camel milk samples, bought from the local market, were analyzed by 1D 1H-NMR and multivariate statistics in order to identify the different composition of the low-molecular-weight compounds in the milk mixtures. The samples were analyzed in their native conditions without any pre-treatment. Our preliminary study shows that the two types of milk samples differ in the content of metabolites such as orotate and fats, among others.
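The abstract does not name the multivariate method used; purely as an illustration, a common choice for untargeted NMR-based metabolomics is principal component analysis of the binned spectra. A minimal sketch is given below, assuming a hypothetical matrix of binned intensities and sample labels (file name and layout are placeholders, not the study's data):

```python
import numpy as np
from sklearn.decomposition import PCA

# hypothetical matrix: rows = milk samples, columns = binned 1H-NMR intensities
spectra = np.loadtxt("milk_nmr_bins.csv", delimiter=",")   # placeholder file
labels = ["cow"] * 10 + ["camel"] * 10                     # assumed sample order

scores = PCA(n_components=2).fit_transform(spectra)
for label, (pc1, pc2) in zip(labels, scores):
    print(f"{label:5s}  PC1={pc1:9.2f}  PC2={pc2:9.2f}")   # check separation of the two groups
```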

Keywords: camel, cow, milk, Nuclear Magnetic Resonance (NMR)

Procedia PDF Downloads 532
35615 Considerations for the Use of High Intensity Interval Training in Secondary Physical Education

Authors: Amy Stringer, Resa Chandler

Abstract:

High Intensity Interval Training (HIIT) involves a 3-10-minute circuit of various exercises and is a viable alternative to a traditional cardiovascular and strength training regimen. Research suggests that measures of health-related fitness can either be maintained or actually improve with the use of this training method. After conducting a 6-week HIIT research study with 10-14 year old children, considerations for using a daily HIIT workout are presented. Is the use of HIIT with children a reasonable consideration for physical education programs? The benefits and challenges of this type of intervention are identified. This study is significant in that achieving fitness gains in a small amount of daily class time is an attractive concept – especially for physical education teachers who often do not have the time necessary to accomplish all of their curricular goals in the amount of class time assigned. Basic methodologies include students participating in a circuit of exercises for 7-10 minutes at 80-95% of maximum heart rate as measured by heart rate monitors. Student pre- and post-fitness-test data were collected for cardiovascular endurance, muscular endurance, and body composition. Research notes as well as commentary by the teachers and researchers who participated in the HIIT study contributed to the understanding of the cost-benefit analysis. Major findings of the study are that HIIT has limited effectiveness but is a good choice for limited class times. Student efficacy in their ability to complete the exercises and visible heart rate data were considered to be significant factors in the success of the HIIT study. The effective use of technology promoting a positive audience effect during the display of heart rate data was more important at the beginning of the study than at the end. Student ‘buy-in’ and motivation, teacher motivation and ‘buy-in’, the variety of activities in the circuit and the fitness level of the student at the beginning of the study were also findings influencing the fitness outcomes of the study. Concluding Statement: High intensity interval training can be used effectively in a secondary physical education program. It is not a ‘magic bullet’ to produce health-related fitness outcomes in every student, but it is an effective tool to enhance student fitness in a limited time and contribute to the goals of the program.

Keywords: cardiovascular fitness, children, high intensity interval training, physical education

Procedia PDF Downloads 102
35614 Simulations to Predict Solar Energy Potential by ERA5 Application at North Africa

Authors: U. Ali Rahoma, Nabil Esawy, Fawzia Ibrahim Moursy, A. H. Hassan, Samy A. Khalil, Ashraf S. Khamees

Abstract:

The design of any solar energy conversion system requires knowledge of solar radiation data obtained over a long period. Satellite data have been widely used to estimate solar energy where no ground observation of solar radiation is available, yet there are limitations on the temporal coverage of satellite data. Reanalysis is a “retrospective analysis” of atmospheric parameters generated by assimilating observation data from various sources, including ground observations, satellites, ships, and aircraft observations, with the output of NWP (Numerical Weather Prediction) models, to develop an exhaustive record of weather and climate parameters. The performance of the reanalysis dataset (ERA-5) for North Africa was evaluated against high-quality surface-measured data using statistical analysis. Global solar radiation (GSR) was estimated over six selected locations in North Africa during the ten-year period from 2011 to 2020. The root mean square error (RMSE), mean bias error (MBE) and mean absolute error (MAE) of the reanalysis solar radiation data range from 0.079 to 0.222, 0.0145 to 0.198, and 0.055 to 0.178, respectively. A seasonal statistical analysis was performed to study the seasonal variation in the performance of the dataset, which reveals a significant variation of errors across seasons. The performance of the dataset also changes with the temporal resolution of the data used for comparison: the monthly mean values show better performance, but the accuracy of the data is compromised. The ERA-5 solar radiation data are used for preliminary solar resource assessment and power estimation. The correlation coefficient (R2) varies from 0.93 to 0.99 for the different selected sites in North Africa in the present research. The goal of this research is to give a good representation of global solar radiation to help solar energy applications in all fields, which can be done by using gridded data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and producing a new model that gives good results.
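For reference, the reported error metrics (RMSE, MBE, MAE) and the coefficient of determination follow standard definitions; a minimal sketch of how they could be computed for one site is shown below (the example values are placeholders, not the study's data):

```python
import numpy as np

def validation_metrics(measured, reanalysis):
    """Standard agreement statistics between surface-measured and ERA-5 GSR."""
    measured, reanalysis = np.asarray(measured, float), np.asarray(reanalysis, float)
    diff = reanalysis - measured
    rmse = np.sqrt(np.mean(diff ** 2))                      # root mean square error
    mbe = np.mean(diff)                                     # mean bias error
    mae = np.mean(np.abs(diff))                             # mean absolute error
    r2 = 1.0 - np.sum(diff ** 2) / np.sum((measured - measured.mean()) ** 2)
    return rmse, mbe, mae, r2

# hypothetical daily GSR values (kWh/m^2) at one North African site
measured = [5.1, 6.3, 7.0, 6.8, 5.9]
reanalysis = [5.0, 6.5, 6.9, 6.6, 6.1]
print(validation_metrics(measured, reanalysis))
```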

Keywords: solar energy, solar radiation, ERA-5, potential energy

Procedia PDF Downloads 193
35613 Rapid Monitoring of Earthquake Damages Using Optical and SAR Data

Authors: Saeid Gharechelou, Ryutaro Tateishi

Abstract:

An earthquake is an inevitable catastrophic natural disaster. Damage to buildings and man-made structures, where most human activities occur, is the major cause of casualties from earthquakes. A comparison of optical and SAR data is presented for the case of the Kathmandu valley, which was severely shaken by the 2015 Nepal earthquake. Although many existing studies have conducted optical-data-based estimation or suggested the combined use of optical and SAR data for improved accuracy, finding cloud-free optical images when they are urgently needed is not assured. Therefore, this research focuses on developing a SAR-based technique with the target of rapid and accurate geospatial reporting. Considering the limited time available in a post-disaster situation, the approach offers quick computation based exclusively on two pairs of pre-seismic and co-seismic single look complex (SLC) images. Pre-seismic, co-seismic and post-seismic InSAR coherence was used to detect change in the damaged area. In addition, ground truth data from the field were applied to the optical data through random forest classification for detection of the damaged area. The ground truth data collected in the field were used to assess the accuracy of the supervised classification approach. A higher accuracy was obtained from the optical data than from the optical-SAR integration; however, since the availability of cloud-free images when urgently needed for an earthquake event is not assured, further research on improving SAR-based damage detection is suggested. It is expected that quick reporting of the post-disaster damage situation, quantified by rapid earthquake assessment, will assist in channelling rescue and emergency operations and in informing the public about the scale of damage.
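The coherence-based change detection step can be illustrated with a minimal sketch (array inputs and thresholds are assumptions, not the study's processing chain): damage is flagged where interferometric coherence, high for the pre-seismic pair, drops sharply in the co-seismic pair.

```python
import numpy as np

def coherence_change_map(coh_pre, coh_co, drop_threshold=0.3, min_pre=0.5):
    """Flag pixels whose InSAR coherence drops sharply from the pre-seismic
    to the co-seismic pair -- a simple proxy for earthquake-induced change."""
    coh_pre, coh_co = np.asarray(coh_pre), np.asarray(coh_co)
    drop = coh_pre - coh_co
    # require decent pre-event coherence so that noisy/vegetated pixels are not flagged
    return (coh_pre >= min_pre) & (drop >= drop_threshold)

# toy 2x3 coherence grids (values in [0, 1])
pre = np.array([[0.80, 0.70, 0.20], [0.90, 0.60, 0.75]])
co = np.array([[0.40, 0.65, 0.10], [0.30, 0.55, 0.70]])
print(coherence_change_map(pre, co).astype(int))            # 1 = likely damaged
```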

Keywords: Sentinel-1A data, Landsat-8, earthquake damage, InSAR, rapid damage monitoring, 2015-Nepal earthquake

Procedia PDF Downloads 152
35612 Effectiveness of Group Therapy Based on Acceptance and Commitment on Self-Criticism and Coping Mechanism in People with Addiction

Authors: Mohamad Reza Khodabakhsh

Abstract:

Drug use and addiction are major biological, psychological, and social problems. In drug abuse treatment, it is important to pay attention to the personality problems and coping methods of patients. Today, the third-wave treatments in psychotherapy emphasize people's awareness and acceptance of feelings and emotions, cognitions, and behaviors instead of challenging cognitions. For this reason, this research was conducted with the aim of investigating the effectiveness of group therapy based on acceptance and commitment on the self-criticism and coping strategies of people with drug use disorder. This research was quasi-experimental (a pre-test-post-test design with a non-equivalent control group), and the statistical population included all men with drug use disorder in Mashhad (174 in total); of the 75 people eligible for this research, 30 were selected by convenience sampling and randomly assigned to the experimental and control groups. In this research, Gilbert's self-criticism scale was used to measure self-criticism, and Andler and Barker's coping strategies questionnaire was used to measure coping strategies. The therapeutic intervention (treatment based on acceptance and commitment) was delivered to the experimental group over eight 90-minute sessions, after which post-tests were taken from both groups, and multivariate analysis of covariance (MANCOVA) was used to analyze the data. The results showed that treatment based on acceptance and commitment significantly reduced self-criticism and improved the coping strategies used by patients with drug use disorder (p < 0.01). Therefore, treatment based on acceptance and commitment has been effective in reducing self-criticism and improving the coping strategies of patients with drug use disorder by teaching clients to accept thoughts and conditions.
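As an illustration of the analysis step only, a MANCOVA of this kind can be specified in Python with statsmodels by entering the post-test scores as dependent variables and the pre-test scores as covariates; the data frame below is entirely hypothetical and the column names are assumptions:

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# hypothetical data: group 0 = control, 1 = ACT group therapy;
# sc_* = self-criticism scores, cope_* = coping-strategy scores
df = pd.DataFrame({
    "group":     [0, 0, 0, 0, 1, 1, 1, 1],
    "sc_pre":    [42, 38, 45, 41, 44, 40, 46, 43],
    "sc_post":   [41, 37, 44, 40, 33, 30, 35, 32],
    "cope_pre":  [55, 60, 52, 57, 54, 58, 50, 56],
    "cope_post": [56, 61, 53, 58, 66, 70, 64, 68],
})

# post-test scores as dependent variables, pre-test scores as covariates
mancova = MANOVA.from_formula("sc_post + cope_post ~ group + sc_pre + cope_pre", data=df)
print(mancova.mv_test())
```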

Keywords: treatment based on acceptance and commitment, self-criticism, coping strategies, addiction

Procedia PDF Downloads 69
35611 Towards a More Inclusive Society: A Study on the Assimilation and Integration of the Migrant Children in Kerala

Authors: Arun Perumbilavil Anand

Abstract:

For the past few years, the state of Kerala has been witnessing a large inflow of migrant workers from other states of the country, which emerged as a result of demographic transition and Gulf emigration. The in-migration patterns in Kerala have changed over time, with migrants having a longer residence history bringing their families to the state, thereby making the process more complicated and divergent in its approach. These developments have led to an increase in the young migrant population, at least in some parts of the state, which has opened up doubts and questions related to their future in the host society. At this juncture, the study examines the factors that are associated with the assimilation and wellbeing of migrant children in the society of Kerala. As one of its objectives, the study also analyzed the influence and role played by educational institutions (both public and private) in meeting the needs and aspirations of both the children and their parents. The study gains significance as it tries to identify various impediments that hinder the cognitive skill formation and behaviour patterns of migrant children in the host society. Data and Methodology: The study is based on primary data collected through a series of interviews and interactions held with parents, children, and teachers of different educational institutions, both public and private. The primary survey also made use of research techniques such as observation, in-depth interviews, and the case study method. The study was conducted in schools in the Kanjikode area of the Palakkad district in Kerala. The findings of the study are based on a survey of 40 migrant children across four schools. Findings: The study found that the majority of the children have fully integrated and assimilated into the host society. The influence of the peer group was quite visible in stimulating the assimilation process. Most of the children do not have any emotional or cultural sentiments attached to their state of origin, and they consider Kerala their ‘home state’ and the local language (Malayalam) their ‘mother tongue'. The study also found that the existing education system in the host society fails to meet the needs and aspirations of migrants as well as those of their children. On a comparative scale, private schools have to some extent succeeded in fulfilling the special requirements of the migrant children. An interesting point that the study pinpoints is that the children of migrants show better health conditions and wellbeing than the natives, which is usually described as an epidemiologic paradox. As a concluding remark, the study recommends incorporating the concept of inclusive education into the education system of the state, giving due emphasis to those who are at higher risk of being excluded or marginalized, along with fostering increased interaction between diverse groups.

Keywords: assimilation, Kerala, migrant children, well-being

Procedia PDF Downloads 153
35610 Reduced Power Consumption by Randomization for DSI3

Authors: David Levy

Abstract:

The newly released Distributed System Interface 3 (DSI3) Bus Standard specification defines 3 modulation levels from which 16 valid symbols are coded. This structure creates data-dependent power consumption variations of a factor of more than 2 between minimum and maximum. The power generation unit therefore has to be designed for the worst-case maximum consumption at all times. This paper proposes a method to reduce both the average current consumption and the worst-case current consumption. The transmitter randomizes the data using several pseudo-random sequences. It then estimates the energy consumption of the generated frames and selects for transmission the one which consumes the least. The transmitter also prepends the index of the pseudo-random sequence, which is not randomized, to allow the receiver to recover the original data using the correct sequence. We show that, in the case that the frame occupies most of the DSI3 synchronization period, the average power consumption is reduced by up to 13% and the worst-case power consumption by 17.7%.
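A minimal sketch of the selection logic described above is given below; the frame layout, symbol energies, sequence count and the modular scrambling rule are illustrative assumptions, not values taken from the DSI3 specification or the paper:

```python
import random

N_SEQUENCES = 4                              # candidate pseudo-random sequences (assumed)
SYMBOL_ENERGY = {0: 1.0, 1: 1.6, 2: 2.3}     # relative energy per modulation level (assumed)

def prng_sequence(index, length, seed=0x0D51):
    """Deterministic pseudo-random symbol sequence known to both transmitter and receiver."""
    rng = random.Random((seed << 4) | index)
    return [rng.randrange(3) for _ in range(length)]

def frame_energy(symbols):
    return sum(SYMBOL_ENERGY[s] for s in symbols)

def encode(data_symbols):
    """Scramble the data with each candidate sequence and transmit the cheapest frame."""
    candidates = []
    for idx in range(N_SEQUENCES):
        seq = prng_sequence(idx, len(data_symbols))
        scrambled = [(d + s) % 3 for d, s in zip(data_symbols, seq)]
        candidates.append((frame_energy(scrambled), idx, scrambled))
    _, idx, scrambled = min(candidates)
    return [idx] + scrambled                 # the sequence index itself is not randomized

def decode(frame):
    idx, scrambled = frame[0], frame[1:]
    seq = prng_sequence(idx, len(scrambled))
    return [(c - s) % 3 for c, s in zip(scrambled, seq)]

data = [2, 2, 1, 2, 0, 2, 2, 1]
frame = encode(data)
assert decode(frame) == data
```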

Keywords: DSI3, energy, power consumption, randomization

Procedia PDF Downloads 516
35609 An Attempt to Measure Afro-Polychronism Empirically

Authors: Aïda C. Terblanché-Greeff

Abstract:

Afro-polychronism is a unique amalgamated cultural value of social self-construal and time orientation. As such, the construct of Afro-polychronism is conceptually analysed by focusing on the aspects of Ubuntu as collectivism and African time as polychronism. It is argued that these cultural values have a reciprocal and thus inseparable relationship. As it is general practice to measure cultural values empirically, the author conducted empirically engaged philosophy and aimed to develop a scale to measure Afro-polychronism based on its two dimensions of Ubuntu as social self-construal and African time as time orientation. From the scale’s psychometric properties, it was determined that the scale was, in fact, neither reliable nor valid. It was found that the correlation between the Ubuntu dimension and the African time dimension is moderate (albeit statistically significant). In conclusion, the author reasons abductively about why this cultural value cannot be empirically measured based on its theoretical definition and indicates which different path would be more promising.

Keywords: African time, Afro-polychronism, empirically engaged African philosophy, Ubuntu

Procedia PDF Downloads 127
35608 Conductivity-Depth Inversion of Large Loop Transient Electromagnetic Sounding Data over Layered Earth Models

Authors: Ravi Ande, Mousumi Hazari

Abstract:

One of the common geophysical techniques for mapping subsurface geo-electrical structures, extensive hydro-geological research, and engineering and environmental geophysics applications is the use of time domain electromagnetic (TDEM)/transient electromagnetic (TEM) soundings. A large loop TEM system consists of a large transmitter loop for energising the ground and a small receiver loop or magnetometer for recording the transient voltage or magnetic field in the air or on the surface of the earth, with the receiver at the center of the loop or at any point inside or outside the source loop. In general, one can acquire data using one of the following configurations with a large loop source: with the receiver at the center point of the loop (central loop method), at an arbitrary in-loop point (in-loop method), coincident with the transmitter loop (coincident-loop method), or at an arbitrary offset point (offset-loop method). Because of the mathematical simplicity of the expressions for the EM fields, as compared to the in-loop and offset-loop systems, the central loop system (for ground surveys) and the coincident loop system (for ground as well as airborne surveys) have been developed and used extensively for the exploration of mineral and geothermal resources, and for mapping contaminated groundwater caused by hazardous waste and the thickness of the permafrost layer. Because a proper analytical expression for the TEM response over a layered earth model for the large loop TEM system does not exist, the forward problem used in this inversion scheme is first formulated in the frequency domain and then transformed into the time domain using Fourier cosine or sine transforms. The forward computation is initially carried out in the frequency domain using the EMLCLLER algorithm; accordingly, the forward calculation scheme in NLSTCI was modified to compute frequency-domain responses before converting them to the time domain using Fourier cosine and/or sine transforms.
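As an illustration of the frequency-to-time conversion step, one standard identity for a causal, real time-domain response h(t) with spectrum H(ω) is h(t) = (2/π) ∫₀^∞ Re[H(ω)] cos(ωt) dω. A minimal numerical sketch of this Fourier cosine transform (not the EMLCLLER/NLSTCI implementation) is shown below:

```python
import numpy as np

def freq_to_time_cosine(omega, H_real, t):
    """Evaluate h(t) = (2/pi) * int_0^inf Re[H(w)] cos(w t) dw by trapezoidal quadrature."""
    integrand = H_real * np.cos(np.outer(t, omega))         # shape (n_times, n_freqs)
    return (2.0 / np.pi) * np.trapz(integrand, omega, axis=1)

# check against a known pair: H(w) = 1/(1 + i w tau)  ->  h(t) = exp(-t/tau)/tau for t > 0
tau = 1e-3
omega = np.linspace(1e-2, 1e6, 200_000)                     # rad/s, dense enough for the cosines
H_real = 1.0 / (1.0 + (omega * tau) ** 2)                   # Re[H(w)]
t = np.array([1e-4, 5e-4, 1e-3])
print(freq_to_time_cosine(omega, H_real, t))                # ~ exp(-t/tau)/tau
```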

Keywords: time domain electromagnetic (TDEM), TEM system, geoelectrical sounding structure, Fourier cosine transform

Procedia PDF Downloads 74
35607 An Automated Approach to the Nozzle Configuration of Polycrystalline Diamond Compact Drill Bits for Effective Cuttings Removal

Authors: R. Suresh, Pavan Kumar Nimmagadda, Ming Zo Tan, Shane Hart, Sharp Ugwuocha

Abstract:

Polycrystalline diamond compact (PDC) drill bits are extensively used in the oil and gas industry as well as the mining industry. Industry engineers continually improve upon PDC drill bit designs and hydraulic conditions. Optimized injection nozzles play a key role in improving the drilling performance and efficiency of these ever-changing PDC drill bits. In the first part of this study, computational fluid dynamics (CFD) modelling is performed to investigate the hydrodynamic characteristics of drilling fluid flow around the PDC drill bit. The open-source CFD software OpenFOAM simulates the flow around the drill bit based on the field input data. A specifically developed console application integrates the entire CFD process, including domain extraction, meshing, solving the governing equations, and post-processing. The results from the OpenFOAM solver are then compared with those of the ANSYS Fluent software, and the data from both software programs agree. The second part of the paper describes the parametric study of the PDC drill bit nozzle to determine the effect of parameters such as the number of nozzles, nozzle velocity, nozzle radial position and orientation on the flow field characteristics and bit washing patterns. After analyzing a series of nozzle configurations, the best configuration is identified and recommendations are made for modifying the PDC bit design.
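The in-house console application described above is not public; purely as an illustration of how such a chain of OpenFOAM utilities could be scripted per nozzle configuration, a short sketch is given below (case names and the choice of solver/utilities are assumptions):

```python
import subprocess
from pathlib import Path

def run(cmd, case):
    """Run one OpenFOAM utility inside the given case directory and keep a log file."""
    log = Path(case) / f"log.{cmd[0]}"
    with open(log, "w") as fh:
        subprocess.run(cmd, cwd=case, stdout=fh, stderr=subprocess.STDOUT, check=True)

def run_nozzle_case(case):
    run(["blockMesh"], case)                    # background mesh
    run(["snappyHexMesh", "-overwrite"], case)  # snap the mesh around the bit geometry
    run(["simpleFoam"], case)                   # steady incompressible RANS solver
    run(["postProcess", "-latestTime"], case)   # extract fields for post-processing

# one case directory per nozzle configuration, e.g. generated from a common template
for case in ["nozzle_3x12mm", "nozzle_4x10mm"]:
    run_nozzle_case(case)
```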

Keywords: ANSYS Fluent, computational fluid dynamics, nozzle configuration, OpenFOAM, PDC drill bit

Procedia PDF Downloads 401
35606 Data-Driven Surrogate Models for Damage Prediction of Steel Liquid Storage Tanks under Seismic Hazard

Authors: Laura Micheli, Majd Hijazi, Mahmoud Faytarouni

Abstract:

The damage reported by oil and gas industrial facilities has revealed the utmost vulnerability of steel liquid storage tanks to seismic events. The failure of steel storage tanks may yield devastating and long-lasting consequences for built and natural environments, including the release of hazardous substances, uncontrolled fires, and soil contamination with hazardous materials. It is, therefore, fundamental to reliably predict the damage that steel liquid storage tanks will likely experience under future seismic hazard events. The seismic performance of steel liquid storage tanks is usually assessed using vulnerability curves obtained from the numerical simulation of a tank under different hazard scenarios. However, the computational demand of high-fidelity numerical simulation models, such as finite element models, makes the vulnerability assessment of liquid storage tanks time-consuming and often impractical. As a solution, this paper presents a surrogate model-based strategy for predicting seismic-induced damage in steel liquid storage tanks. In the proposed strategy, the surrogate model is leveraged to reduce the computational demand of time-consuming numerical simulations. To create the data set for training the surrogate model, field damage data from past earthquake reconnaissance surveys and reports are collected. Features representative of steel liquid storage tank characteristics (e.g., diameter, height, liquid level, yield stress) and seismic excitation parameters (e.g., peak ground acceleration, magnitude) are extracted from the field damage data. The collected data are then utilized to train a data-driven surrogate model that maps the relationship between tank characteristics, seismic hazard parameters, and seismic-induced damage. Different types of surrogate algorithms, including naïve Bayes, k-nearest neighbors, decision tree, and random forest, are investigated, and results in terms of accuracy are reported. The model that yields the most accurate predictions is employed to predict future damage as a function of tank characteristics and seismic hazard intensity level. Results show that the proposed approach can be used to estimate the extent of damage in steel liquid storage tanks, where the use of data-driven surrogates represents a viable alternative to computationally expensive numerical simulation models.
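A minimal sketch of the classifier comparison described above is given below, using scikit-learn and cross-validated accuracy; the file name, feature columns and damage-state coding are placeholders, not the collected reconnaissance data set:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# hypothetical reconnaissance records: tank geometry, fill level, demand, observed damage state
df = pd.read_csv("tank_damage_survey.csv")                  # placeholder file name
X = df[["diameter_m", "height_m", "liquid_level_ratio", "yield_stress_mpa", "pga_g"]]
y = df["damage_state"]                                      # e.g. 0 = none ... 3 = severe

models = {
    "naive Bayes":   GaussianNB(),
    "k-NN":          KNeighborsClassifier(n_neighbors=5),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:15s} mean CV accuracy = {acc:.3f}")
```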

Keywords: damage prediction, data-driven model, seismic performance, steel liquid storage tanks, surrogate model

Procedia PDF Downloads 130
35605 Synthesis of a Library of Substituted Isoquinolines Based on a Triazolization Strategy, and Their Anti-HIV and C-X-C Chemokine Receptor Type 4 Antagonist Activity

Authors: Mastaneh Safarnejad Shad, Wim Dehaen, Steven De Jonghe

Abstract:

Since CXCR4 is the main coreceptor of HIV-1 and plays an important role in human immunodeficiency virus (HIV) entry, numerous efforts have been directed towards the discovery of new classes of small molecules that act as CXCR4 antagonists. In addition, CXCR4 antagonists are potentially useful in the treatment of several other disorders, such as cancer cell metastasis, leukemia cell proliferation, rheumatoid arthritis, and pulmonary fibrosis. Since AMD3100 (plerixafor) is the only CXCR4 antagonist that has obtained approval from the Food and Drug Administration (FDA), we were motivated to investigate a new category of molecules as CXCR4 antagonists. Most of the scaffolds that have been studied so far as CXCR4 antagonists are based on the tetrahydroquinoline (THQ) moiety, among which AMD11070 (mavorixafor), GSK-812394, and TIQ15 displayed the most potent CXCR4 antagonism. Due to the high potency of these scaffolds, two different series of compounds were prepared in this work. In the first set, the THQ moiety is coupled to an amine chain and various isoquinoline derivatives (prepared by an in-house developed triazolization strategy), in which the upper part of the molecules is identical to that of AMD11070 and TIQ15. In the second category of compounds, the THQ moiety was simplified by the synthesis of a substituted pyridine moiety. In order to investigate whether CXCR4 antagonism requires the presence of an isoquinoline moiety, the corresponding pyridine analogues were also prepared. In both series of compounds, potent CXCR4 antagonism was observed.

Keywords: CXCR4 coreceptor, CXCR4 antagonists, HIV inhibitor, tetrahydroquinoline

Procedia PDF Downloads 176
35604 The Utility of Sonographic Features of Lymph Nodes during EBUS-TBNA for Predicting Malignancy

Authors: Atefeh Abedini, Fatemeh Razavi, Mihan Pourabdollah Toutkaboni, Hossein Mehravaran, Arda Kiani

Abstract:

In countries with the highest prevalence of tuberculosis, such as Iran, the differentiation of malignant tumors from non-malignant ones is very important. In this study, which was conducted for the first time among the Iranian population, the ultrasonographic morphological characteristics of lymph nodes in patients undergoing EBUS were used to distinguish non-malignant from malignant lymph nodes. The morphological characteristics of lymph nodes, which consist of size, shape, vascular pattern, echogenicity, margin, coagulation necrosis sign, calcification, and central hilar structure, were obtained during Endobronchial Ultrasound-Guided Trans-Bronchial Needle Aspiration and were compared with the final pathology results. During the study period, a total of 253 lymph nodes from 93 cases were evaluated. Round shape, non-hilar vascular pattern, heterogeneous echogenicity, hyperechogenicity, distinct margin, and the presence of the necrosis sign were significantly more frequent in malignant nodes. On the other hand, the presence of calcification and of central hilar structure was significantly more frequent in the benign nodes (p-value ˂ 0.05). Multivariate logistic regression showed that size > 1 cm, heterogeneous echogenicity, hyperechogenicity, the presence of the necrosis sign and the absence of central hilar structure are independent predictive factors for malignancy. The accuracy of each of the aforementioned factors is 42.29%, 71.54%, 71.90%, 73.51%, and 65.61%, respectively. Of the 74 malignant lymph nodes, 100% had at least one of these independent factors. According to our results, the morphological characteristics of lymph nodes based on Endobronchial Ultrasound-Guided Trans-Bronchial Needle Aspiration can play a role in the prediction of malignancy.
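As an illustration of the multivariate logistic regression step, a statsmodels sketch is shown below; the file name and the binary feature coding are assumptions, not the study's data set:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# hypothetical per-node records from EBUS-TBNA (binary features, 1 = present)
df = pd.read_csv("ebus_lymph_nodes.csv")                    # placeholder file name
predictors = ["size_gt_1cm", "heterogeneous_echo", "hyperechogenicity",
              "necrosis_sign", "absent_central_hilar_structure"]
X = sm.add_constant(df[predictors])
y = df["malignant"]                                         # 1 = malignant on final pathology

model = sm.Logit(y, X).fit()
print(model.summary())                                      # coefficients and p-values
print(np.exp(model.params))                                 # odds ratios for each predictor
```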

Keywords: EBUS-TBNA, malignancy, nodal characteristics, pathology

Procedia PDF Downloads 117
35603 Obstacle Classification Method Based on 2D LIDAR Database

Authors: Moohyun Lee, Soojung Hur, Yongwan Park

Abstract:

In this paper, a method is proposed that uses only a LIDAR system to classify an obstacle and determine its type by establishing a database for classifying obstacles based on LIDAR. The existing LIDAR system has an advantage, in recognizing obstructions for an autonomous vehicle, in terms of accuracy and shorter recognition time. However, it was difficult to determine the type of obstacle, and therefore accurate path planning based on the type of obstacle was not possible. In order to overcome this problem, a method of classifying the obstacle type based on existing LIDAR and using the width of obstacle materials was proposed. However, width measurement alone was not sufficient to improve accuracy. In this research, the width data were used for a first classification; a database of LIDAR intensity data for four major obstacle materials found on the road was created; the LIDAR intensity data of actual obstacle materials were compared against this database; and the obstacle type was determined by finding the material with the highest similarity value. An experiment using an actual autonomous vehicle in a real environment shows that, although the data quality declined in comparison to 3D LIDAR, it was possible to classify obstacle materials using 2D LIDAR.
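A minimal sketch of the similarity-matching step is given below; the reference intensity profiles and the similarity measure (inverse mean absolute difference) are illustrative assumptions, not the database built in the paper:

```python
import numpy as np

# reference 2D-LIDAR intensity profiles for four road obstacle materials (illustrative values)
INTENSITY_DB = {
    "metal":    np.array([210, 215, 220, 218, 212], float),
    "concrete": np.array([140, 150, 155, 148, 142], float),
    "plastic":  np.array([ 90,  95, 100,  97,  92], float),
    "wood":     np.array([ 60,  65,  70,  68,  63], float),
}

def classify_material(scan_intensity):
    """Width-filtered scan -> material whose stored intensity profile is most similar."""
    scan = np.asarray(scan_intensity, float)
    scores = {m: 1.0 / (1.0 + np.mean(np.abs(scan - ref)))  # higher = more similar
              for m, ref in INTENSITY_DB.items()}
    return max(scores, key=scores.get), scores

material, scores = classify_material([205, 212, 219, 214, 210])
print(material)                                             # -> 'metal'
```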

Keywords: obstacle, classification, database, LIDAR, segmentation, intensity

Procedia PDF Downloads 322
35602 Crowdsensing Project in the Brazilian Municipality of Florianópolis for the Number of Visitors Measurement

Authors: Carlos Roberto De Rolt, Julio da Silva Dias, Rafael Tezza, Luca Foschini, Matteo Mura

Abstract:

The seasonal population fluctuation presents a challenge to touristic cities, since the number of inhabitants can double according to the season. The aim of this work is to develop a model that correlates the waste collected with the population of the city and also allows cooperation between the inhabitants and the local government. The model allows public managers to evaluate the impact of the seasonal population fluctuation on waste generation and also to improve the planning of resource utilization throughout the year. The study uses data from the company that collects the garbage in Florianópolis, a Brazilian city that has the profile of a city that attracts tourists due to its numerous beaches and warm weather. The fluctuations are caused by the number of people who come to the city throughout the year for holidays, summer vacations or business events. Crowdsensing will be accomplished through smartphones with access to an app for data collection, with voluntary participation of the population. Crowdsensing participants can access the information collected in waves through this portal. Crowdsensing represents an innovative and participatory approach which involves the population in gathering information to improve the quality of life. The management of crowdsensing solutions plays an essential role given the complexity of fostering collaboration, establishing available sensors, and collecting and processing the gathered data. Practical implications of the tool described in this paper refer, for example, to the management of seasonal tourism in a large municipality whose public services are impacted by the floating population. Crowdsensing and big data support managers in predicting the arrival, permanence, and movement of people in a given urban area. Also, by linking crowdsourced data to databases from other public service providers - e.g., water, garbage collection, electricity, public transport, telecommunications - it is possible to estimate the floating population of an urban area affected by seasonal tourism. This approach supports the municipality in increasing the effectiveness of resource allocation while, at the same time, increasing the quality of the service as perceived by citizens and tourists.
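The waste-to-population correlation can be illustrated with a minimal least-squares sketch (the monthly figures below are invented placeholders, not Florianópolis data):

```python
import numpy as np

# hypothetical monthly records: tonnes of waste collected and estimated population (thousands)
waste_t = np.array([9800, 9400, 8100, 7600, 7900, 8300, 9000, 9200, 8800, 8500, 9600, 11800])
pop_k = np.array([900, 860, 620, 560, 580, 610, 700, 720, 680, 650, 880, 1150])

# least-squares fit: waste = a * population + b
a, b = np.polyfit(pop_k, waste_t, 1)

def estimate_population(waste):
    """Invert the fitted relation to estimate the floating population (thousands)
    present in a month from the waste collected in that month."""
    return (waste - b) / a

print(f"a = {a:.2f} t per 1000 inhabitants, b = {b:.1f} t")
print(f"estimated population for 10500 t of waste: {estimate_population(10500):.0f} thousand")
```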

Keywords: big data, dashboards, floating population, smart city, urban management solutions

Procedia PDF Downloads 270
35601 Ontological Modeling Approach for Statistical Databases Publication in Linked Open Data

Authors: Bourama Mane, Ibrahima Fall, Mamadou Samba Camara, Alassane Bah

Abstract:

At the level of the National Statistical Institutes, there is a large volume of data which is generally in a format that conditions the method of publication of the information it contains. Each household or business data collection project includes a dissemination platform for its implementation. Thus, the dissemination methods previously used do not promote rapid access to information and, especially, do not offer the option of linking data for in-depth processing. In this paper, we present an approach to modeling these data in order to publish them in a format intended for the Semantic Web. Our objective is to be able to publish all these data on a single platform and offer the option of linking them with other external data sources. An application of the approach will be made to data from major national surveys, such as those on employment, poverty and child labor, and the general census of the population of Senegal.
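As an illustration of the target format, statistical observations are commonly published on the Semantic Web with the RDF Data Cube vocabulary; the sketch below uses rdflib, with hypothetical URIs and an invented employment-rate figure (not the Senegalese survey data):

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

QB = Namespace("http://purl.org/linked-data/cube#")
SDMX = Namespace("http://purl.org/linked-data/sdmx/2009/dimension#")
EX = Namespace("http://example.org/stats/")                 # hypothetical base URI

g = Graph()
g.bind("qb", QB)

obs = EX["obs/employment-2020-dakar"]                       # hypothetical observation
g.add((obs, RDF.type, QB.Observation))
g.add((obs, QB.dataSet, EX["dataset/employment"]))
g.add((obs, SDMX.refArea, EX["area/Dakar"]))
g.add((obs, SDMX.refPeriod, Literal("2020", datatype=XSD.gYear)))
g.add((obs, EX.employmentRate, Literal("42.5", datatype=XSD.decimal)))

print(g.serialize(format="turtle"))                         # ready to publish as linked open data
```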

Keywords: Semantic Web, linked open data, database, statistics

Procedia PDF Downloads 162
35600 An Evaluation of Neuropsychiatric Manifestations in Systemic Lupus Erythematosus Patients in Saudi Arabia and Their Associated Factors

Authors: Yousef M. Alammari, Mahmoud A. Gaddoury, Reem A. Almohaini, Sara A. Alharbi, Lena S. Alsaleem, Lujain H. Allowaihiq, Maha H. Alrashid, Abdullah H. Alghamdi, Abdullah A. Alaryni

Abstract:

Objective: The goal of this study was to establish the prevalence of neuropsychiatric manifestations in systemic lupus erythematosus (NPSLE) patients in Saudi Arabia and the variables that are linked to them. Methods: This cross-sectional study was carried out among SLE patients in Saudi Arabia during June 2021. The Saudi Rheumatism Association used social media platforms to distribute a self-administered online questionnaire to SLE patients. All data analyses were performed using the Statistical Packages for Social Sciences (SPSS) version 26. Results: Two hundred and five SLE patients participated in the study (91.3% females vs. 8.7% males). In addition, 13.5% of patients had a family history of SLE, and 26% had had SLE for one to three years. Alteration or loss of sensation (53.4%), fear (52.4%), and headache (48.1%) were the most prevalent neuropsychiatric manifestations in NPSLE patients. The prevalence of patients with NPSLE was 40%. In a multivariate regression model, fear, altered sensations, cerebrovascular illness, sleep disruption, and diminished interest in routine activities were identified as independent risk variables for NPSLE. Conclusion: Nearly half of SLE patients demonstrated NP manifestations, with significant symptoms including fear, alteration of sensation, cerebrovascular disease, sleep disturbance, and reduced interest in normal activities. To understand the pathophysiology of NPSLE, it is necessary to examine the relationship between neuropsychiatric morbidity and other relevant rheumatic disorders in the SLE population.

Keywords: neuropsychiatric, systemic lupus erythematosus, NPSLE, prevalence, SLE patients

Procedia PDF Downloads 58
35599 Comparison of Mean Monthly Soil Temperature at (5 and 30 cm) Depths at Compton Experimental Site, West Midlands (UK), between 1976-2008

Authors: Aminu Mansur

Abstract:

A comparison of soil temperatures at 5 and 30 cm depths at a research site over the period 1976-2008 was carried out. Based on the statistical analysis of a database of 12,045 days of individual soil temperature measurements in the sandy-loam (Salwick series) soils, the mean soil temperature at 5 cm depth revealed a statistically significant increase of about -1.1 to 10.9°C in 2008 compared to 1976. Similarly, soil temperature at 30 cm depth increased by -0.1 to 2.1°C in 2008 compared to 1976. Although a rapid increase in soil temperature at all depths was observed during that period, a thorough assessment of these conditions suggested that the soil temperature at 5 cm depth is progressively increasing over time. A typical example of the impact of these increases in soil temperature was provided for agriculture, where the Miscanthus (elephant grass) plant that grows within the study area is adversely affected by the mean soil temperature increase. The study concluded that these observations contribute to the growing mass of evidence of global warming and to knowledge of secular trends. Therefore, there was a statistically significant increase in soil temperature at the Compton Experimental Site between 1976 and 2008.

Keywords: soil temperature, warming trend, environmental science, climate and atmospheric sciences

Procedia PDF Downloads 283
35598 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data

Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone

Abstract:

The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability of mean signals extracted from ICA components corresponding to 15 well-known networks to distinguish between controls and patients. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT toolbox using the Infomax approach with the number of components set to 21. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in the network), with the R language. The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (rfe) for the SVM, to obtain a ranking of the most predictive variables. Thus, we built two new classifiers only on the most important features, and we evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and rfe-SVM was performed, the most important variable was the sensori-motor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest value of lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best discriminant network between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
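The study used the R language; purely as an illustration of the same pipeline (75/25 split, RF with Gini-based importances, SVM with recursive feature elimination), a scikit-learn sketch is shown below with placeholder data. Note that scikit-learn's RFE needs a linear estimator to rank features, so the ranking step here uses a linear SVM while the final classifier is RBF - a simplification, not the authors' exact procedure:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC, LinearSVC

# placeholders: 37 subjects x 15 network mean signals, 0 = control, 1 = early MS
rng = np.random.default_rng(0)
X = rng.normal(size=(37, 15))
y = np.array([0] * 19 + [1] * 18)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75, stratify=y, random_state=0)

# random forest: Gini-based importances give an intrinsic feature ranking
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
rf_rank = np.argsort(rf.feature_importances_)[::-1]

# recursive feature elimination with a linear SVM gives the SVM-side ranking
rfe = RFE(LinearSVC(dual=False), n_features_to_select=1).fit(X_tr, y_tr)
svm_rank = np.argsort(rfe.ranking_)

# retrain both classifiers on the single top-ranked network and score the test set
top = [rf_rank[0]]                                          # e.g. the sensori-motor I network
rf_top = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr[:, top], y_tr)
svm_top = SVC(kernel="rbf").fit(X_tr[:, top], y_tr)
print(rf_top.score(X_te[:, top], y_te), svm_top.score(X_te[:, top], y_te))
```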

Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine

Procedia PDF Downloads 225
35597 Modular Data and Calculation Framework for a Technology-based Mapping of the Manufacturing Process According to the Value Stream Management Approach

Authors: Tim Wollert, Fabian Behrendt

Abstract:

Value Stream Management (VSM) is a widely used methodology in the context of Lean Management for improving end-to-end material and information flows from a supplier to a customer from a company’s perspective. Whereas the design principles, e.g. pull, value-adding and customer orientation, are still valid against the background of an increasingly digitalized and dynamic environment, the methodology itself for mapping a value stream is characterized as time- and resource-intensive due to the high degree of manual activities. The digitalization of processes in the context of Industry 4.0 enables new opportunities to reduce these manual efforts and make the VSM approach more agile. The paper at hand aims at providing a modular data and calculation framework, utilizing the available business data provided by information and communication technologies, for automating the value stream mapping process with a focus on the manufacturing process.

Keywords: lean management 4.0, value stream management (VSM) 4.0, dynamic value stream mapping, enterprise resource planning (ERP)

Procedia PDF Downloads 128
35596 The Use of Classifiers in Image Analysis of Oil Wells Profiling Process and the Automatic Identification of Events

Authors: Jaqueline Maria Ribeiro Vieira

Abstract:

Different strategies and tools are available in the oil and gas industry for detecting and analyzing tension and possible fractures in borehole walls. Most of these techniques are based on manual observation of the captured borehole images. While this strategy may be possible and convenient with small images and few data, it may become difficult and prone to errors when large databases of images must be processed. Moreover, the patterns may differ across the image area, depending on many characteristics (drilling strategy, rock components, rock strength, etc.). Previously, we developed and proposed a novel strategy capable of detecting patterns in borehole images that may point to regions that have tension and breakout characteristics, based on segmented images. In this work, we propose the inclusion of data-mining classification strategies in order to create a knowledge database of the segmented curves. These classifiers allow that, after a period of use in which parts of borehole images corresponding to tension regions and breakout areas are manually marked, the system automatically indicates and suggests new candidate regions with higher accuracy. We suggest the use of different classifier methods in order to achieve different knowledge data set configurations.

Keywords: image segmentation, oil well visualization, classifiers, data-mining, visual computer

Procedia PDF Downloads 279
35595 Hydrological Analysis for Urban Water Management

Authors: Ranjit Kumar Sahu, Ramakar Jha

Abstract:

Urban water management is the practice of managing freshwater, waste water, and storm water as components of a basin-wide management plan. It builds on existing water supply and sanitation considerations within an urban settlement by incorporating urban water management within the scope of the entire river basin. The pervasive problems generated by urban development have prompted the present work to study the spatial extent of urbanization in the Golden Triangle of Odisha connecting the cities of Bhubaneswar (20.2700° N, 85.8400° E), Puri (19.8106° N, 85.8314° E) and Konark (19.9000° N, 86.1200° E), and the patterns of periodic changes in urban development (systematic/random), in order to develop future plans for (i) urbanization promotion areas and (ii) urbanization control areas. Using remote sensing with USGS (U.S. Geological Survey) Landsat 8 maps, supervised classification of the urban sprawl has been carried out for the period 1980-2014, specifically after 2000. This work presents the following: (i) time series analysis of hydrological data (ground water and rainfall), (ii) application of SWMM (Storm Water Management Model) and other soft computing techniques for urban water management, and (iii) uncertainty analysis of model parameters (urban sprawl and correlation analysis). The outcome of the study shows drastic growth in urbanization and depletion of ground water levels in the area, which is discussed briefly. Other related outcomes, such as the declining trend of rainfall and the rise of sand mining in the local vicinity, are also discussed. Research of this kind will (i) improve water supply and consumption efficiency, (ii) upgrade drinking water quality and waste water treatment, (iii) increase the economic efficiency of services to sustain operations and investments for water, waste water, and storm water management, and (iv) engage communities to reflect their needs and knowledge for water management.

Keywords: Storm Water Management Model (SWMM), uncertainty analysis, urban sprawl, land use change

Procedia PDF Downloads 410
35594 The Role of Data Protection Officer in Managing Individual Data: Issues and Challenges

Authors: Nazura Abdul Manap, Siti Nur Farah Atiqah Salleh

Abstract:

For decades, the misuse of personal data has been a critical issue. Malaysia has accepted responsibility by implementing the Malaysian Personal Data Protection Act 2010 (PDPA 2010) to secure personal data. After more than a decade, this legislation is set to be revised by the current PDPA 2023 Amendment Bill to align with the world's key personal data protection regulations, such as the European Union General Data Protection Regulation (GDPR). Among the suggested adjustments is the Data User's appointment of a Data Protection Officer (DPO) to ensure the commercial entity's compliance with the PDPA 2010 criteria. The change is expected to be enacted in parliament fairly soon; nevertheless, based on the experience of the Personal Data Protection Department (PDPD) in implementing the Act, it is projected that there will be a slew of additional concerns associated with the DPO mandate. Consequently, the goal of this article is to highlight the issues that the DPO will encounter and how the Personal Data Protection Department should respond to them. The study result was produced using a qualitative technique based on an examination of the current literature. This research reveals that there are probable obstacles experienced by the DPO, and thus there should be a definite, clear guideline in place to aid DPOs in executing their tasks. It is argued that appointing a DPO is a wise measure in ensuring that the legal data security requirements are met.

Keywords: guideline, law, data protection officer, personal data

Procedia PDF Downloads 60
35593 Concept, Design and Implementation of Power System Component Simulator Based on Thyristor Controlled Transformer and Power Converter

Authors: B. Kędra, R. Małkowski

Abstract:

This paper presents information on the Power System Component Simulator – a device designed for the LINTE^2 laboratory owned by Gdansk University of Technology in Poland. We first provide introductory information on the Power System Component Simulator and its capabilities. Then, the concept of the unit is presented. Requirements for the unit are described, and the proposed and introduced functions are listed. Implementation details are given. The hardware structure is presented and described. Information about the communication interface used, the data maintenance and storage solution, as well as the Simulink Real-Time features used, is presented. A list and description of all measurements is provided. The potential for laboratory setup modifications is evaluated. Lastly, the results of experiments performed using the Power System Component Simulator are presented. These include simulation of under-frequency load shedding, frequency- and voltage-dependent characteristics of groups of load units, and time characteristics of groups of different load units in a chosen area.

Keywords: power converter, Simulink Real-Time, Matlab, load, tap controller

Procedia PDF Downloads 227
35592 Feasibility Study of MongoDB and Radio Frequency Identification Technology in Asset Tracking System

Authors: Mohd Noah A. Rahman, Afzaal H. Seyal, Sharul T. Tajuddin, Hartiny Md Azmi

Abstract:

Taking real-world conditions into consideration, higher academic institutions, small, medium and large companies, the public and private sectors, and the remaining sectors experience inventory or asset shrinkage due to theft, loss or even inventory tracking errors. This happens because of absent or poor security systems and measures being implemented in these organizations. Hence, implementing Radio Frequency Identification (RFID) technology in any manual or existing web-based system or web application can deter, and eventually solve, certain major issues and serve better data retrieval and data access. Moreover, such a manual or existing system can be enhanced into a mobile-based system or application. In addition, the availability of internet connections can support better system services. The involvement of these various technologies results in various benefits to individuals or organizations in terms of accessibility, availability, mobility, efficiency, effectiveness, real-time information and security. This paper looks deeper into the integration of mobile devices with RFID technologies for the purpose of asset tracking and control. This is followed by the development and utilization of MongoDB as the main database to store data and its association with RFID technology. Finally, a web-based system that can be viewed in a mobile format is developed with the aid of Hypertext Preprocessor (PHP), MongoDB, Hyper-Text Markup Language 5 (HTML5), Android, JavaScript and AJAX.
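Purely as an illustration of the document model (the paper's stack is PHP/HTML5/JavaScript), a pymongo sketch for storing RFID tag reads is shown below; the connection string, database, collection and tag values are assumptions:

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")           # assumed local MongoDB instance
assets = client["asset_tracking"]["assets"]                 # illustrative database/collection names

# register an asset once, keyed by its RFID tag ID
assets.insert_one({
    "tag_id": "E2000017221101441890B0C5",                   # hypothetical EPC value
    "name": "Projector - Lab 3",
    "location": "Block A / Level 2",
    "reads": [],
})

# append a read event whenever a fixed or mobile reader sees the tag
assets.update_one(
    {"tag_id": "E2000017221101441890B0C5"},
    {"$push": {"reads": {"reader": "gate-01", "seen_at": datetime.now(timezone.utc)}},
     "$set": {"location": "Block A / Main Gate"}},
)

print(assets.find_one({"tag_id": "E2000017221101441890B0C5"}, {"_id": 0}))
```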

Keywords: RFID, asset tracking system, MongoDB, NoSQL

Procedia PDF Downloads 284
35591 Volatility Switching between Two Regimes

Authors: Josip Visković, Josip Arnerić, Ante Rozga

Abstract:

Based on the fact that volatility is time-varying in high frequency data and that periods of high volatility tend to cluster, the most successful and popular models for modelling time-varying volatility are GARCH-type models. When financial returns exhibit sudden jumps that are due to structural breaks, standard GARCH models show high volatility persistence, i.e. integrated behaviour of the conditional variance. In such situations, models in which the parameters are allowed to change over time are more appropriate. This paper compares different GARCH models in terms of their ability to describe structural changes in returns caused by the financial crisis in the stock markets of six selected central and east European countries. The empirical analysis demonstrates that the Markov regime-switching GARCH model resolves the problem of excessive persistence and outperforms uni-regime GARCH models in forecasting volatility when sudden switching occurs in response to the financial crisis.
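For reference, a standard two-regime Markov switching GARCH(1,1) specification (a textbook formulation, not necessarily the exact parameterization estimated in the paper) lets all conditional variance parameters depend on a latent regime s_t governed by a transition matrix:

\[
\sigma_t^{2} = \omega_{s_t} + \alpha_{s_t}\,\varepsilon_{t-1}^{2} + \beta_{s_t}\,\sigma_{t-1}^{2},
\qquad s_t \in \{1,2\},
\qquad \Pr(s_t = j \mid s_{t-1} = i) = p_{ij}.
\]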

Keywords: central and east European countries, financial crisis, Markov switching GARCH model, transition probabilities

Procedia PDF Downloads 210