Search results for: information seeking models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17019


15039 A PROMETHEE-BELIEF Approach for Multi-Criteria Decision Making Problems with Incomplete Information

Authors: H. Moalla, A. Frikha

Abstract:

Multi-criteria decision aid methods consider decision problems where numerous alternatives are evaluated on several criteria. These methods traditionally assume perfect information. In practice, however, this requirement is too strict: the imperfect data provided by more or less reliable decision makers usually affect decision results, since any decision is closely linked to the quality and availability of information. In this paper, a PROMETHEE-BELIEF approach is proposed to support multi-criteria decisions based on incomplete information. This approach solves problems with an incomplete decision matrix and unknown weights within the PROMETHEE method. On the basis of belief function theory, our approach first determines the distributions of belief masses based on PROMETHEE's net flows and then calculates weights. Subsequently, it aggregates the mass distributions associated with each criterion using Murphy's modified combination rule in order to infer a global belief structure. The final action ranking is obtained via the pignistic probability transformation. A real-world case study concerning the location of a treatment center for healthcare waste with infectious risk in central Tunisia illustrates the detailed process of the PROMETHEE-BELIEF approach.
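As a rough illustration of the PROMETHEE building block the approach starts from, the sketch below computes PROMETHEE II net flows for a small siting problem. The decision matrix, weights, and the "usual" preference function are hypothetical stand-ins, not the paper's case-study data, and the belief-mass and pignistic steps are omitted.

```python
# Minimal PROMETHEE II sketch on invented data (three candidate sites,
# three criteria). Not the authors' waste-treatment case study.

def usual_preference(d):
    """Usual criterion: strict preference whenever the difference is positive."""
    return 1.0 if d > 0 else 0.0

def net_flows(matrix, weights):
    """Return the PROMETHEE II net flow of each alternative."""
    n = len(matrix)
    phi = [0.0] * n
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            # Weighted aggregated preference of a over b, and of b over a.
            pi_ab = sum(w * usual_preference(matrix[a][j] - matrix[b][j])
                        for j, w in enumerate(weights))
            pi_ba = sum(w * usual_preference(matrix[b][j] - matrix[a][j])
                        for j, w in enumerate(weights))
            phi[a] += (pi_ab - pi_ba) / (n - 1)
    return phi

# Scores of three sites on three criteria (higher is better) and weights.
scores = [[7, 5, 9],
          [6, 8, 4],
          [8, 6, 7]]
weights = [0.5, 0.3, 0.2]
flows = net_flows(scores, weights)
ranking = sorted(range(len(flows)), key=lambda i: -flows[i])
```

In the full approach, these net flows would then feed the belief-mass construction rather than being used directly for ranking.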

Keywords: belief function theory, incomplete information, multiple criteria analysis, PROMETHEE method

Procedia PDF Downloads 167
15038 Multimodal Characterization of Emotion within Multimedia Space

Authors: Dayo Samuel Banjo, Connice Trimmingham, Niloofar Yousefi, Nitin Agarwal

Abstract:

Technological advancement and its omnipresent connectivity have pushed humans past the boundaries and limitations of a computer screen, physical state, or geographical location. It has provided a depth of avenues that facilitate human-computer interaction that were once inconceivable, such as audio and body language detection. Given the complex, multimodal nature of emotions, it becomes vital to study human-computer interaction, as this is the commencement of a thorough understanding of the emotional state of users and, in the context of social networks, the producers of multimodal information. This study first confirms the higher classification accuracy found in multimodal emotion detection systems compared to unimodal solutions. Second, it explores the characterization of multimedia content based on its emotions and the coherence of emotion across different modalities, utilizing deep learning models to classify emotion in each modality.
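One simple way unimodal classifiers can be combined, in the spirit of the multimodal systems discussed above, is late fusion by averaging per-modality class probabilities. The probabilities, modality outputs, and emotion set below are invented for illustration and do not come from the study.

```python
# Hypothetical late-fusion sketch: average the class probabilities
# produced by separate text, audio, and video emotion classifiers.

EMOTIONS = ["anger", "joy", "sadness"]

def fuse(prob_sets):
    """Average the class-probability vectors produced by each modality."""
    n = len(prob_sets)
    return [sum(p[i] for p in prob_sets) / n for i in range(len(EMOTIONS))]

text_probs  = [0.1, 0.7, 0.2]   # e.g. from a text classifier (made up)
audio_probs = [0.2, 0.5, 0.3]   # e.g. from a speech classifier (made up)
video_probs = [0.1, 0.8, 0.1]   # e.g. from a facial-expression model (made up)

fused = fuse([text_probs, audio_probs, video_probs])
predicted = EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]
```

Real systems often learn the fusion weights or fuse at the feature level instead; equal-weight averaging is only the simplest baseline.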

Keywords: affective computing, deep learning, emotion recognition, multimodal

Procedia PDF Downloads 158
15037 Single Cell RNA Sequencing Operating from Benchside to Bedside: An Interesting Entry into Translational Genomics

Authors: Leo Nnamdi Ozurumba-Dwight

Abstract:

Single-cell genomic analytical systems have proved to be a platform for isolating bulk cells into selected single cells for genomic, proteomic, and related metabolomic studies. This is enabling systematic investigations of the level of heterogeneity in diverse and wide pools of cell populations. Single-cell technologies, embracing techniques such as high-parameter flow cytometry, single-cell sequencing, and high-resolution imaging, are playing vital roles in investigations of messenger ribonucleic acid (mRNA) molecules and related gene expression for tracking the nature and course of disease conditions. This entails targeted molecular investigations on unit cells that help us understand cell behaviour and expression, which can be examined for their implications for the health state of patients. One of the vital strengths of single-cell RNA sequencing (scRNA-seq) is its capacity to detect deranged or abnormal cell populations present within homogeneously perceived pooled cells, populations that would otherwise evade cursory screening of the pooled cell populations of biological samples obtained as part of diagnostic procedures. Although each run profiles single-cell transcriptomes one cell at a time, scRNA-seq permits comparison of the transcriptomes of individual cells, which can be evaluated for gene expression patterns that reveal areas of heterogeneity with pharmaceutical drug discovery and clinical treatment applications. It is vital to work strictly through the tools of investigation, from the wet lab to bioinformatics and computational analyses. Among the steps of scRNA-seq, it is critical to perform thorough and effective isolation of viable single cells from the tissues of interest using dependable techniques (such as FACS) before proceeding to lysis, as this enhances the picking of quality mRNA molecules for subsequent sequencing (e.g., following amplification by polymerase chain reaction). Interestingly, scRNA-seq can be deployed to analyze various types of biological samples, such as embryos, nervous systems, tumour cells, stem cells, lymphocytes, and haematopoietic cells. In haematopoietic cells, it can be used to stratify acute myeloid leukemia patterns in patients, sorting them into cohorts that enable re-modeling of treatment regimens based on stratified presentations. In immunotherapy, it can furnish specialist clinician-immunologists with tools to re-model treatment for each patient, an attribute of precision medicine. Finally, the good predictive attribute of scRNA-seq can help reduce the cost of treatment, thus attracting patients who would otherwise be discouraged from seeking quality clinical consultation due to perceived high cost. This is a positive paradigm shift in patients' attitudes toward seeking treatment.
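A minimal computational sketch of a standard early scRNA-seq processing step, offered as an illustration only: depth-normalizing a synthetic cells-by-genes count matrix to counts per 10,000 and log-transforming it, so transcriptomes of cells sequenced at different depths become comparable. The matrix and constants are assumptions, not the article's data.

```python
import numpy as np

# Toy count matrix: 5 cells x 100 genes, Poisson-distributed stand-ins
# for sequencing counts (real data would come from the sequencer).
rng = np.random.default_rng(4)
counts = rng.poisson(2.0, size=(5, 100)).astype(float)

depth = counts.sum(axis=1, keepdims=True)   # total counts per cell
cp10k = counts / depth * 1e4                # counts-per-10,000 normalization
log_norm = np.log1p(cp10k)                  # variance-stabilizing log transform
```

Dedicated toolkits wrap this step together with quality filtering and feature selection; the arithmetic itself is as simple as shown.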

Keywords: immunotherapy, transcriptome, re-modeling, mRNA, scRNA-seq

Procedia PDF Downloads 176
15036 A Multi-Release Software Reliability Growth Model Incorporating Imperfect Debugging and Change-Point under the Simulated Testing Environment and Software Release Time

Authors: Sujit Kumar Pradhan, Anil Kumar, Vijay Kumar

Abstract:

The testing process during software development is a crucial step, as it makes the software more efficient and dependable. To estimate software reliability through the mean value function, many software reliability growth models (SRGMs) were developed under the assumption that operating and testing environments are the same. In practice, this is not true, because when the software works in its natural field environment, its reliability differs. This article discusses an SRGM comprising change-point and imperfect debugging in a simulated testing environment, and then extends it in a multi-release direction. Initially, software is released to the market with few features; according to market demand, the software company upgrades the current version by adding new features as time passes. We therefore propose a generalized multi-release SRGM in which change-point and imperfect debugging concepts are addressed in a simulated testing environment. The failure-increasing-rate concept is adopted to determine the change point for each software release. Based on nine goodness-of-fit criteria, the proposed model is validated on two real datasets; the results demonstrate that the proposed model fits the datasets better. We also discuss the optimal release time of the software through a cost model, assuming that the testing and debugging costs are time-dependent.
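To make the mean value function idea concrete, the sketch below evaluates an NHPP mean value function with a single change-point in the fault-detection rate, in the spirit of the Goel-Okumoto family the SRGM literature builds on. The parameters a, b1, b2, and tau are illustrative assumptions, not the paper's fitted values.

```python
import math

# Expected cumulative faults m(t) for an NHPP model whose detection
# rate switches from b1 to b2 at the change-point tau. All parameter
# values are invented for illustration.

def mean_value(t, a=100.0, b1=0.1, b2=0.2, tau=10.0):
    """Expected cumulative faults detected by time t (a = total faults)."""
    if t <= tau:
        return a * (1.0 - math.exp(-b1 * t))
    # Continue from the change-point value with the new rate b2,
    # so m(t) is continuous at t = tau.
    m_tau = a * (1.0 - math.exp(-b1 * tau))
    return m_tau + (a - m_tau) * (1.0 - math.exp(-b2 * (t - tau)))

m10 = mean_value(10.0)   # at the change-point
m20 = mean_value(20.0)   # after the rate increase
```

Fitting a, b1, b2, and tau to failure data (e.g., by least squares or maximum likelihood) and comparing goodness-of-fit criteria is where the paper's actual contribution lies.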

Keywords: software reliability growth models, non-homogeneous Poisson process, multi-release software, mean value function, change-point, environmental factors

Procedia PDF Downloads 74
15035 Financial Information Transparency on Investor Behavior in the Private Company in Dusit Area

Authors: Yosapon Kidsuntad

Abstract:

The purpose of this dissertation was to explore the relationship between financial transparency and investor behavior. A questionnaire was used as the data collection tool. Statistics utilized in this research included frequency, percentage, mean, standard deviation, and multiple regression analysis. The results revealed significant differences in investor perceptions of the different dimensions of financial information transparency. These differences correspond to demographic variables, with the exception of educational level. It was also found that there are relationships between investor perceptions of the dimensions of financial information transparency and investor behavior in private companies in the Dusit area. Finally, the researcher found differences in investor behavior corresponding to different categories of investor experience.

Keywords: financial information transparency, investor behavior, private company, Dusit Area

Procedia PDF Downloads 331
15034 A Vehicle Monitoring System Based on the LoRa Technique

Authors: Chao-Linag Hsieh, Zheng-Wei Ye, Chen-Kang Huang, Yeun-Chung Lee, Chih-Hong Sun, Tzai-Hung Wen, Jehn-Yih Juang, Joe-Air Jiang

Abstract:

Air pollution and climate warming are becoming more and more intense in many areas, especially urban areas. Environmental parameters are critical information for air pollution and weather monitoring; thus, it is necessary to develop a suitable air pollution and weather monitoring system for urban areas. In this study, a vehicle monitoring system (VMS) based on the IoT technique is developed. Cars are selected as the research tool because they can reach a greater number of streets to collect data. The VMS can monitor different environmental parameters, including ambient temperature and humidity, and air quality parameters, including PM2.5, NO2, CO, and O3. It also provides other information, including GPS position and vibration data, collected while the car is driven on the street. Different sensor modules measure the parameters, and the measured data are collected and transmitted to a cloud server through the LoRa protocol. A user interface shows the sensing data stored at the cloud server. To examine the performance of the system, a researcher drove a Nissan X-Trail 1998 to the area close to the Da'an District office in Taipei to collect monitoring data. The collected data are instantly shown on the user interface, which provides four kinds of information: GPS positions, weather parameters, vehicle information, and air quality information. With the VMS, users can obtain information on air quality and weather conditions when they drive their cars in an urban area. Government agencies can also make traffic-planning decisions based on the information provided by the proposed VMS.
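As an illustration of the kind of compact telemetry a LoRa link favors, the sketch below packs one hypothetical VMS sample (GPS fix plus a few sensor readings) into a 14-byte binary payload. The field layout and resolutions are assumptions, not the system's actual frame format.

```python
import struct

# Hypothetical VMS frame: lat/lon as 32-bit floats, then scaled
# integers for temperature, humidity, and PM2.5 (0.1-unit resolution).

def pack_sample(lat, lon, temp_c, humidity, pm25):
    """Pack a GPS fix plus sensor readings into a 14-byte payload."""
    return struct.pack(
        "<ffhHH",
        lat, lon,
        int(round(temp_c * 10)),    # 0.1 deg C resolution, signed
        int(round(humidity * 10)),  # 0.1 %RH resolution
        int(round(pm25 * 10)),      # 0.1 ug/m3 resolution
    )

def unpack_sample(payload):
    """Decode the payload back into engineering units."""
    lat, lon, t, h, p = struct.unpack("<ffhHH", payload)
    return lat, lon, t / 10.0, h / 10.0, p / 10.0

payload = pack_sample(25.026, 121.543, 28.4, 65.2, 17.8)
decoded = unpack_sample(payload)
```

Keeping frames this small matters because LoRa duty-cycle and airtime limits penalize verbose encodings such as JSON.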

Keywords: LoRa, monitoring system, smart city, vehicle

Procedia PDF Downloads 418
15033 Non-Linear Assessment of Chromatographic Lipophilicity and Model Ranking of Newly Synthesized Steroid Derivatives

Authors: Milica Karadzic, Lidija Jevric, Sanja Podunavac-Kuzmanovic, Strahinja Kovacevic, Anamarija Mandic, Katarina Penov Gasi, Marija Sakac, Aleksandar Okljesa, Andrea Nikolic

Abstract:

The present paper deals with chromatographic lipophilicity prediction of newly synthesized steroid derivatives. The prediction was achieved using in silico generated molecular descriptors and quantitative structure-retention relationship (QSRR) methodology with an artificial neural network (ANN) approach. Chromatographic lipophilicity of the investigated compounds was expressed as the retention factor value log k. For QSRR modeling, a feedforward back-propagation ANN with a gradient descent learning algorithm was applied. The generated ANN models were then ranked using the novel sum of ranking differences (SRD) method. The aim was to identify the most consistent QSRR model and to observe similarity or dissimilarity between the models. In this study, SRD was performed with average log k values as reference values. An excellent correlation between the experimentally observed and ANN-predicted log k values was obtained, with a correlation coefficient higher than 0.9890. The statistical results show that the established ANN models can be applied for the required purpose. This article is based upon work from COST Action TD1305, supported by COST (European Cooperation in Science and Technology).
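A minimal sketch of the QSRR idea, under stated assumptions: a one-hidden-layer feedforward network trained with plain gradient descent to map molecular descriptors to a retention factor log k. The descriptor matrix and log k targets are synthetic stand-ins, not the steroid dataset, and the tiny network is far simpler than a tuned production ANN.

```python
import numpy as np

# Synthetic stand-in data: 40 "compounds" with 3 descriptors each, and
# log k targets generated from an invented nonlinear relationship.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
true_w = np.array([0.8, -0.5, 0.3])
y = np.tanh(X @ true_w) + 0.05 * rng.normal(size=40)

# Network: 3 inputs -> 5 tanh hidden units -> 1 linear output.
W1 = rng.normal(scale=0.5, size=(3, 5))
b1 = np.zeros(5)
W2 = rng.normal(scale=0.5, size=5)
b2 = 0.0

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

lr = 0.05
_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)          # MSE before training
for _ in range(500):
    H, pred = forward(X)
    err = pred - y
    grad_W2 = H.T @ err * (2 / len(y))     # gradient of MSE w.r.t. W2
    grad_b2 = 2 * err.mean()
    dH = np.outer(err, W2) * (1 - H ** 2)  # back-propagate through tanh
    grad_W1 = X.T @ dH * (2 / len(y))
    grad_b1 = dH.mean(axis=0) * 2
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

_, pred = forward(X)
loss = np.mean((pred - y) ** 2)            # MSE after training
```

The paper's pipeline additionally ranks many such trained models by SRD; here only the descriptor-to-log k regression step is shown.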

Keywords: artificial neural networks, liquid chromatography, molecular descriptors, steroids, sum of ranking differences

Procedia PDF Downloads 319
15032 Machine Learning Techniques in Seismic Risk Assessment of Structures

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IM) given source characteristics, source-to-site distance, and local site condition for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The results indicate the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data is available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data is available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.) and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined damage limit states, and therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intense numerical response-history analysis.
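For context, the conventional baseline the study compares against can be sketched as a linear regression ground-motion model with a pre-defined functional form, ln(PGA) = c0 + c1*M + c2*ln(R), fitted by least squares. The coefficients and magnitude/distance data below are synthetic illustrations, not a published GMM.

```python
import numpy as np

# Synthetic catalogue: 200 records with magnitude M and distance R,
# and ln(PGA) generated from an invented "true" model plus scatter.
rng = np.random.default_rng(1)
M = rng.uniform(4.0, 7.5, size=200)      # moment magnitude
R = rng.uniform(5.0, 200.0, size=200)    # source-to-site distance, km
true_c = np.array([-2.0, 1.1, -1.3])
ln_pga = (true_c[0] + true_c[1] * M + true_c[2] * np.log(R)
          + 0.3 * rng.normal(size=200))  # aleatory scatter

# Design matrix encodes the pre-defined functional form; the ML
# alternatives in the study learn this shape from data instead.
A = np.column_stack([np.ones_like(M), M, np.log(R)])
coef, *_ = np.linalg.lstsq(A, ln_pga, rcond=None)
resid_std = float(np.std(ln_pga - A @ coef))
```

The residual standard deviation recovered here plays the role of the model's aleatory variability term, a key quantity in probabilistic seismic hazard analysis.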

Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine

Procedia PDF Downloads 106
15031 Neural Style Transfer Using Deep Learning

Authors: Shaik Jilani Basha, Inavolu Avinash, Alla Venu Sai Reddy, Bitragunta Taraka Ramu

Abstract:

Neural style transfer is a technique for building a picture with the same "content" as a starting image but the "style" of another picture we have chosen: it merges the style of one image into another while retaining the original content, changing only how the image is rendered to give it an additional artistic sense. The content image supplies the plan or drawing, while the style image supplies the colors and textures used to portray the style. It is a computer vision technique that learns and processes images through deep convolutional neural networks. To implement the software, we trained deep learning models on the training data; whenever a user supplies a content image and a style image, the style is transferred onto the original image and the result is shown as the output.
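The optimization behind style transfer balances two losses, which can be sketched with stand-in "feature maps": a content loss on feature differences and a style loss on Gram-matrix differences. The arrays, shapes, and loss weights below are illustrative assumptions, not actual CNN activations.

```python
import numpy as np

# Random arrays stand in for CNN activations of shape
# (channels, height*width); a real system would extract these from a
# pretrained convolutional network.

def gram(features):
    """Gram matrix of a (channels, height*width) feature map."""
    c, n = features.shape
    return features @ features.T / n

def content_loss(gen, content):
    return np.mean((gen - content) ** 2)

def style_loss(gen, style):
    return np.mean((gram(gen) - gram(style)) ** 2)

rng = np.random.default_rng(0)
content_feats = rng.normal(size=(8, 64))
style_feats = rng.normal(size=(8, 64))
gen_feats = content_feats.copy()   # optimization starts at the content image

# Weighted total loss; the style weight is typically much larger.
total = (1.0 * content_loss(gen_feats, content_feats)
         + 100.0 * style_loss(gen_feats, style_feats))
```

In the full algorithm, the generated image is iteratively updated by gradient descent on this total loss rather than the network weights.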

Keywords: neural networks, computer vision, deep learning, convolutional neural networks

Procedia PDF Downloads 95
15030 Equilibrium and Kinetic Studies of Lead Adsorption on Activated Carbon Derived from Mangrove Propagule Waste by Phosphoric Acid Activation

Authors: Widi Astuti, Rizki Agus Hermawan, Hariono Mukti, Nurul Retno Sugiyono

Abstract:

The removal of lead ions (Pb2+) from aqueous solution by activated carbon prepared from mangrove propagule waste by phosphoric acid activation was investigated in a batch adsorption system. Batch studies were carried out to address various experimental parameters, including pH and contact time. The Langmuir and Freundlich models were used to describe the adsorption equilibrium, while the pseudo-first-order and pseudo-second-order models were used to describe the kinetics of Pb2+ adsorption. The results show that the adsorption data are best described by the Langmuir isotherm model and the pseudo-second-order kinetic model.
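The two models the abstract found to fit best can be written down directly; the sketch below evaluates a Langmuir isotherm and a pseudo-second-order kinetic curve. The parameter values (q_max, K_L, q_e, k2) are invented for illustration, not the fitted values for the mangrove-propagule carbon.

```python
# Closed-form adsorption models with hypothetical parameters.

def langmuir(Ce, q_max=50.0, K_L=0.1):
    """Equilibrium uptake q_e (mg/g) at liquid-phase concentration Ce (mg/L)."""
    return q_max * K_L * Ce / (1.0 + K_L * Ce)

def pseudo_second_order(t, q_e=45.0, k2=0.002):
    """Adsorbed amount q_t (mg/g) after contact time t (min)."""
    return (k2 * q_e ** 2 * t) / (1.0 + k2 * q_e * t)

q_low = langmuir(1.0)            # dilute solution
q_high = langmuir(100.0)         # approaching monolayer capacity q_max
q_30 = pseudo_second_order(30.0) # uptake after 30 min of contact
```

In practice these parameters are obtained by fitting the equations (often in linearized form) to the batch equilibrium and contact-time data.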

Keywords: activated carbon, adsorption, equilibrium, kinetic, lead, mangrove propagule

Procedia PDF Downloads 167
15029 Housing Delivery in Nigeria: Repackaging for Sustainable Development

Authors: Funmilayo L. Amao, Amos O. Amao

Abstract:

It has been observed that the majority of people live in poor-quality housing or are totally homeless in urban centers, despite all governmental policies to provide housing to the public. On the supply side, various government policies in the past were formulated to overcome the huge shortage through several Housing Reform Programmes. Despite these past efforts, housing continues to be a mirage to the ordinary Nigerian. Currently, mass housing delivery programmes, such as the affordable housing scheme that utilizes Public-Private Partnership efforts and several Private Finance Initiative models, can provide only about 3% of the required stock. This suggests the need for a holistic approach to the problem. The aim of this research is to identify the problems hindering the delivery of housing in Nigeria and their effects on housing affordability. The specific objectives are to identify the causes of housing delivery problems, to examine different housing policies over the years, and to suggest a way forward for sustainable housing delivery. This paper also reviews past and current housing delivery programmes in Nigeria, analyses the demand- and supply-side issues, and identifies the various housing delivery mechanisms in current practice. The objective, therefore, is to give insight into the delivery options for the sustainability of housing in Nigeria, given the existing delivery structures and the framework specified in the New National Housing Policy. Secondary data were obtained from books, journals, and seminar papers. The conclusion is that we cannot copy models from other nations, but should rather evolve workable models based on our socio-cultural background to address the huge housing shortage in Nigeria. Recommendations are made in this regard.

Keywords: housing, sustainability, housing delivery, housing policy, housing affordability

Procedia PDF Downloads 296
15028 Evaluation of Published Materials in Meeting the Information Needs of Students in Three Selected College Libraries in Oyo State, Nigeria

Authors: Rafiat Olasumbo Akande

Abstract:

Most college libraries in Oyo State show signs of unhealthy collection practices, such as a preponderance of non-recent collections and indiscriminate acquisition of sub-standard books from hawkers. The objective of this study, therefore, is to determine the extent to which available published materials in those college libraries are able to meet both the knowledge and information needs of students in those institutions. A descriptive survey was conducted among 18 librarians and 21 library officers in three colleges purposively selected for the exercise using a simple sampling technique. In all, 279 questionnaires were administered; of these, 265 were returned and analyzed using the Statistical Package for the Social Sciences (SPSS). Three college librarians were also interviewed. Findings from the study showed that the paucity of funds and the procurement of obsolete and sub-standard materials from roadside book hawkers hinder the college libraries in meeting the information needs of their students. The study concludes that the knowledge and information needs of students can be met only when there are standard procedures for collection management and the acquisition of library materials. It recommends that students and curriculum review committee members from various departments should always be involved in determining the materials needed by the library to meet students' information needs, and that institutional authorities must fund, monitor, and ensure compliance with the acquisition policy in place in the college libraries.

Keywords: libraries, published materials, information needs, college, evaluation, students

Procedia PDF Downloads 167
15027 The Impact of Information and Communication Technology on the Re-Engineering Process of Small and Medium Enterprises

Authors: Hiba Mezaache

Abstract:

The current study aimed to determine the impact of using information and communication technology (ICT) on the process of re-engineering small and medium enterprises. ICT has developed rapidly and diversified its objectives and programs, which makes the re-engineering process important for the growth and development of the institution and for gaining the flexibility to face changes that may occur in the work environment. To determine the impact of ICT on the success of this process, we prepared an electronic questionnaire of 70 items and used the SPSS statistical package to analyze the data obtained. We conclude that there is a positive correlation between the four dimensions of information and communication technology (hardware and equipment, software, communication networks, and databases) and the re-engineering process. In addition, the studied institutions attach great importance to formal communication, for the positive advantages it achieves in reducing the time, effort, and costs of performing business. Communication technology also contributes to the process of formulating objectives related to the re-engineering strategy. Finally, we recommend empowering workers to use information and communication technology more in enterprises, integrating workers more into the activity of the enterprise by involving them in decision-making, and keeping pace with developments in software, hardware, and technological equipment.

Keywords: information and communication technology, re-engineering, small and medium enterprises, the impact

Procedia PDF Downloads 179
15026 Implementation of Lean Production in Business Enterprises: A Literature-Based Content Analysis of Implementation Procedures

Authors: P. Pötters, A. Marquet, B. Leyendecker

Abstract:

The objective of this paper is to investigate different approaches for implementing Lean Production in companies and to provide a structured overview of those approaches. The present work is therefore intended to answer the following research question: What differences and similarities exist between the various systematic approaches and phase models for the implementation of Lean Production? To present the various implementation approaches discussed in the literature, a qualitative content analysis was conducted: within the framework of a qualitative survey, a selection of texts dealing with Lean Production and its introduction was examined. The analysis presents different implementation approaches from the literature, covering the descriptive aspect of the study, and provides insights into similarities and differences among the approaches, drawn from the analysis of latent text content and author interpretations. The research question takes into account the main object of consideration, the objectives pursued, the starting point, the procedure, and the endpoint of each implementation approach. The study defines the concept of Lean Production and presents various approaches described in the literature that companies can use to implement Lean Production successfully. It distinguishes between five systematic implementation approaches and seven phase models to help companies choose the most suitable approach for their implementation project. The findings of this study can enhance transparency regarding the existing approaches for implementing Lean Production, enabling companies to compare the available implementation approaches and choose the most suitable one for their specific project.

Keywords: implementation, lean production, phase models, systematic approaches

Procedia PDF Downloads 104
15025 Study of Evaluation Model Based on Information System Success Model and Flow Theory Using Web-scale Discovery System

Authors: June-Jei Kuo, Yi-Chuan Hsieh

Abstract:

Because of the rapid growth of information technology, more and more libraries introduce new information retrieval systems to enhance the user experience, improve retrieval efficiency, and increase the applicability of library resources. Nevertheless, few studies have discussed usability from the users' perspective. The aims of this study are to understand the scenarios of information retrieval system utilization and to learn why users are willing to continuously use the web-scale discovery system, in order to improve the system and promote the use of university libraries. Besides questionnaires, observations, and interviews, this study employs both the Information System Success Model introduced by DeLone and McLean in 2003 and flow theory to evaluate the system quality, information quality, service quality, use, user satisfaction, flow, and continuance of use of the web-scale discovery system by students of National Chung Hsing University. The results are analyzed through descriptive statistics and structural equation modeling using AMOS. They reveal that, in the web-scale discovery system, users' evaluations of system quality and information quality are positively related to use and satisfaction, whereas service quality affects only user satisfaction. User satisfaction and flow show a significant impact on continuance of use, and user satisfaction has a significant impact on user flow. According to these results, academic libraries are recommended to maintain the stability of the information retrieval system, improve the quality of the information content, and enhance the relationship between subject librarians and students. Meanwhile, the system provider is required to improve the system user interface, reduce the number of system-level layers, strengthen data accuracy and relevance, modify the sorting criteria of the data, and support an auto-correct function. Finally, establishing better communication with librarians is recommended for all users.
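As a toy illustration of one path in the success model (not the study's full AMOS structural model), the sketch below regresses a synthetic "continuance intention" score on satisfaction and flow scores by ordinary least squares. All scores and coefficients are fabricated for illustration.

```python
import numpy as np

# Synthetic 5-point Likert-style responses from 120 hypothetical users.
rng = np.random.default_rng(2)
n = 120
satisfaction = rng.integers(1, 6, size=n).astype(float)
flow = rng.integers(1, 6, size=n).astype(float)
# Invented "true" path weights plus noise generate the outcome.
continuance = 0.6 * satisfaction + 0.3 * flow + rng.normal(0.0, 0.5, n)

# Ordinary least squares: intercept, satisfaction, flow.
A = np.column_stack([np.ones(n), satisfaction, flow])
beta, *_ = np.linalg.lstsq(A, continuance, rcond=None)
```

A structural equation model would additionally handle latent constructs and measurement error, which a single regression cannot.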

Keywords: web-scale discovery system, discovery system, information system success model, flow theory, academic library

Procedia PDF Downloads 103
15024 Integrating Radar Sensors with an Autonomous Vehicle Simulator for an Enhanced Smart Parking Management System

Authors: Mohamed Gazzeh, Bradley Null, Fethi Tlili, Hichem Besbes

Abstract:

The burgeoning global ownership of personal vehicles has posed a significant strain on urban infrastructure, notably parking facilities, leading to traffic congestion and environmental concerns. Effective parking management systems (PMS) are indispensable for optimizing urban traffic flow and reducing emissions. The most commonly deployed systems nowadays rely on computer vision technology. This paper explores the integration of radar sensors and simulation in the context of smart parking management. We concentrate on radar sensors due to their versatility and utility in automotive applications, which extend to PMS; radar sensors also play a crucial role in driver assistance systems and autonomous vehicle development. However, the resource-intensive nature of radar data collection for algorithm development and testing necessitates innovative solutions. Simulation, particularly the monoDrive simulator, an internal development tool used by NI, the Test and Measurement division of Emerson, offers a practical means to overcome this challenge. The primary objectives of this study encompass simulating radar sensors to generate a substantial dataset for algorithm development and testing and, critically, assessing the transferability of models between simulated and real radar data. We focus on occupancy detection in parking as a practical use case, categorizing each parking space as vacant or occupied. The simulation approach using monoDrive enables algorithm validation and reliability assessment for virtual radar sensors. We meticulously designed various parking scenarios, involving manual measurement of parking spot coordinates and orientations and the use of a TI AWR1843 radar. To create a diverse dataset, we generated 4950 scenarios, comprising a total of 455,400 parking spots. This extensive dataset encompasses radar configuration details, ground-truth occupancy information, radar detections, and associated object attributes such as range, azimuth, elevation, radar cross-section, and velocity data. The paper also addresses the intricacies and challenges of real-world radar data collection, highlighting the advantages of simulation in producing radar data for parking lot applications. We developed classification models based on Support Vector Machines (SVM) and Density-Based Spatial Clustering of Applications with Noise (DBSCAN), exclusively trained and evaluated on simulated data. Subsequently, we applied these models to real-world data, comparing their performance against the monoDrive dataset. The study demonstrates the feasibility of transferring models from a simulated environment to real-world applications, achieving an accuracy of 92% using only one radar sensor. This finding underscores the potential of radar sensors and simulation in the development of smart parking management systems, offering significant benefits for improving urban mobility and reducing environmental impact. The integration of radar sensors and simulation represents a promising avenue for enhancing smart parking management systems, addressing the challenges posed by the exponential growth in personal vehicle ownership. This research contributes valuable insights into the practicality of using simulated radar data in real-world applications and underscores the role of radar technology in advancing urban sustainability.
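A minimal sketch of the occupancy-detection use case, under simplifying assumptions: declaring a parking spot occupied when enough radar point detections fall inside its footprint. The spot geometry, detections, and threshold are invented; the study itself trains SVM- and DBSCAN-based classifiers on such detections.

```python
# Each detection is a hypothetical (x, y, rcs) tuple in a local frame;
# a spot is an axis-aligned footprint (x_min, x_max, y_min, y_max).

def points_in_spot(detections, x_min, x_max, y_min, y_max):
    """Count radar detections whose (x, y) lies inside the footprint."""
    return sum(1 for x, y, _rcs in detections
               if x_min <= x <= x_max and y_min <= y <= y_max)

def is_occupied(detections, spot, min_points=3):
    """Declare the spot occupied if enough detections land inside it."""
    return points_in_spot(detections, *spot) >= min_points

spot = (0.0, 2.5, 0.0, 5.0)   # x range, y range in metres (assumed)
detections = [(1.0, 2.0, 8.5), (1.2, 2.4, 7.9),
              (1.1, 3.0, 9.1), (6.0, 2.0, 5.0)]  # last point is outside

occupied = is_occupied(detections, spot)
```

A learned classifier improves on this threshold rule by also using attributes such as radar cross-section and velocity, which is what the SVM/DBSCAN models exploit.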

Keywords: autonomous vehicle simulator, FMCW radar sensors, occupancy detection, smart parking management, transferability of models

Procedia PDF Downloads 81
15023 Validation and Fit of a Biomechanical Bipedal Walking Model for Simulation of Loads Induced by Pedestrians on Footbridges

Authors: Dianelys Vega, Carlos Magluta, Ney Roitman

Abstract:

The simulation of loads induced by walking people on civil engineering structures is still challenging. It has been the focus of considerable research worldwide in recent decades due to the increasing number of reported vibration problems in pedestrian structures. One of the key issues in the design of slender structures is Human-Structure Interaction (HSI). How moving people interact with structures, and the effect this has on their dynamic responses, is still not well understood. Relying on calibrated pedestrian models that accurately estimate the structural response therefore becomes extremely important. However, because of the complexity of the pedestrian mechanisms, there are still gaps in knowledge, and more reliable models need to be investigated. Several authors have proposed biodynamic models to represent the pedestrian; whether these models provide a consistent approximation to physical reality still needs to be studied. This work therefore contributes to a better understanding of this phenomenon by providing an experimental validation of a pedestrian walking model and a Human-Structure Interaction model. In this study, a two-dimensional bipedal walking model was used to represent the pedestrians, along with an interaction model that was applied to a prototype footbridge. Numerical models were implemented in MATLAB. In parallel, experimental tests were conducted in the Structures Laboratory of COPPE (LabEst) at the Federal University of Rio de Janeiro. Different test subjects were asked to walk at different speeds over instrumented force platforms to measure the walking force, and an accelerometer was placed at the waist of each subject to simultaneously measure the acceleration of the center of mass. By fitting the step force and the center of mass acceleration through successive numerical simulations, the model parameters were estimated.
In addition, experimental data from a pedestrian walking on a flexible structure were used to validate the interaction model through comparison of the measured and simulated structural response at mid-span. It was found that the pedestrian model was able to adequately reproduce the ground reaction force and the center of mass acceleration for normal and slow walking speeds, being less accurate for faster speeds. Numerical simulations showed that biomechanical parameters such as leg stiffness and damping affect the ground reaction force, and that the higher the walking speed, the greater the leg length of the model. The interaction model was also capable of estimating the structural response with good approximation, remaining in the same order of magnitude as the measured response. Some differences in the frequency spectra were observed, which are presumed to be due to the perfectly periodic loading representation, which neglects intra-subject variability. In conclusion, this work showed that the bipedal walking model can be used to represent walking pedestrians, since it efficiently reproduces the center of mass movement and the ground reaction forces produced by humans. Furthermore, although more experimental validation is required, the interaction model also appears to be a useful framework for estimating the dynamic response of structures under loads induced by walking pedestrians.
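The parameter-fitting procedure ("successive numerical simulations") can be sketched as a simple grid search that minimizes the squared error between a simulated and a measured ground reaction force. The single damped spring-mass leg, its toy force expression, and the synthetic "measurement" below are illustrative assumptions, not the biomechanical model used in the study.

```python
# Illustrative sketch of the fitting step: pick the spring-mass leg
# parameters that best reproduce a "measured" ground reaction force.
# The model, grid, and synthetic measurement are demonstration-only.
import numpy as np

t = np.linspace(0.0, 0.6, 200)          # one stance phase, s

def grf(t, k, c, m=70.0):
    """Toy vertical ground reaction force of a damped spring-mass leg."""
    wn = np.sqrt(k / m)                 # natural frequency from stiffness
    return m * 9.81 + 200.0 * np.exp(-c * t) * np.sin(wn * t)

# Synthetic "measurement" generated with known parameters plus noise.
rng = np.random.default_rng(1)
k_true, c_true = 15000.0, 8.0
measured = grf(t, k_true, c_true) + rng.normal(0.0, 2.0, t.size)

# Successive simulations over a parameter grid (stand-in for the paper's
# iterative fitting); keep the pair with the smallest squared error.
best, best_err = None, np.inf
for k in np.arange(5000.0, 25001.0, 1000.0):
    for c in np.arange(2.0, 14.1, 1.0):
        err = np.sum((grf(t, k, c) - measured) ** 2)
        if err < best_err:
            best, best_err = (k, c), err
print(f"fitted stiffness {best[0]:.0f} N/m, damping {best[1]:.1f}")
```

In practice the fit would run over measured force-platform and accelerometer signals simultaneously, but the error-minimization structure is the same.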

Keywords: biodynamic models, bipedal walking models, human induced loads, human structure interaction

Procedia PDF Downloads 132
15022 Enhanced Iceberg Information Dissemination for Public and Autonomous Maritime Use

Authors: Ronald Mraz, Gary C. Kessler, Ethan Gold, John G. Cline

Abstract:

The International Ice Patrol (IIP) continually monitors iceberg activity in the North Atlantic by direct observation using ships, aircraft, and satellite imagery. Daily reports detailing the navigational boundaries of icebergs have significantly reduced the risk of iceberg contact. What is currently lacking is a means of formatting this data for automatic transmission and display of iceberg navigational boundaries on commercial navigation equipment. This paper describes the methodology and implementation of a system that formats iceberg limit information for dissemination through existing radio network communications. This information is then displayed automatically on commercial navigation equipment. Additionally, the information is reformatted for Google Earth rendering of iceberg track line limits. Having iceberg limit information automatically available in standard navigation equipment will help support fully autonomous operation of sailing vessels.
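As a hedged sketch of the Google Earth reformatting step, the snippet below turns a list of limit coordinates into a minimal KML LineString that Google Earth can render. The element layout follows the public KML 2.2 schema; the coordinates themselves are invented, since real limits come from the IIP's daily products.

```python
# Hypothetical sketch: render an iceberg track-line limit as minimal KML.
# Sample coordinates are invented, not real IIP limit data.
def track_line_kml(name, points):
    """points: iterable of (lat, lon) tuples in decimal degrees.
    KML coordinates are written lon,lat,altitude."""
    coords = " ".join(f"{lon},{lat},0" for lat, lon in points)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
        f"<Placemark><name>{name}</name>"
        f"<LineString><coordinates>{coords}</coordinates></LineString>"
        "</Placemark></Document></kml>"
    )

limit = [(48.0, -52.0), (46.5, -48.0), (44.0, -47.5)]  # invented points
kml = track_line_kml("Iceberg limit (example)", limit)
print(kml.splitlines()[0])
```

A production system would attach timestamps and styling, but the LineString element above is the core of the rendering format.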

Keywords: iceberg, iceberg risk, iceberg track lines, AIS messaging, international ice patrol, North American ice service, google earth, autonomous surface vessels

Procedia PDF Downloads 137
15021 The Challenges of Citizen Engagement in Urban Transformation: Key Learnings from Three European Cities

Authors: Idoia Landa Oregi, Itsaso Gonzalez Ochoantesana, Olatz Nicolas Buxens, Carlo Ferretti

Abstract:

The impact of citizens on urban transformations has become increasingly important in the pursuit of creating citizen-centered cities. Citizens at the forefront of the urban transformation process are key to establishing resilient, sustainable, and inclusive cities that cater to the needs of all residents. Therefore, collecting data and information directly from citizens is crucial for the sustainable development of cities. Within this context, public participation becomes a pillar for acquiring the necessary information from citizens. Public participation in urban transformation processes fosters a more responsive, equitable, and resilient urban environment. This approach cultivates a sense of shared responsibility and collective progress in building cities that truly serve the well-being of all residents. However, the implementation of public participation practices often overlooks strategies to effectively engage citizens in the process, resulting in unsuccessful participatory outcomes. Therefore, this research focuses on identifying and analyzing the critical aspects of citizen engagement during the same participatory urban transformation process in different European contexts: Ermua (Spain), Elva (Estonia), and Matera (Italy). The participatory neighborhood regeneration process is divided into three main stages, to turn social districts into inclusive and smart neighborhoods: (i) the strategic level, (ii) the design level, and (iii) the implementation level. In the initial stage, the focus is on diagnosing the neighborhood and creating a shared vision with the community. The second stage centers around collaboratively designing various action plans to foster inclusivity and intelligence while promoting local economic development within the district. Finally, the third stage ensures the proper co-implementation of the designed actions in the neighborhood.
To date, the results presented critically analyze the key aspects of engagement in the first stage of the methodology, the strategic plan, in the three above-mentioned contexts. It is a multifaceted study that incorporates three case studies to shed light on the various perspectives and strategies adopted by each city. The results indicate that, despite the different cultural contexts, all cities face similar barriers when seeking to enhance engagement. Accordingly, the study identifies specific challenges within the participatory approach across the three cities, such as the existence of discontented citizens, communication gaps, inconsistent participation, and administrative resistance. Consequently, the key learnings of the process indicate that a collaborative sphere needs to be cultivated, educating both citizens and administrations in the practice of co-governance and giving these practices the appropriate space and their own communication channels. This study is part of the DROP project, funded by the European Union, which aims to develop a citizen-centered urban renewal methodology to transform social districts into smart and inclusive neighborhoods.

Keywords: citizen-centred cities, engagement, public participation, urban transformation

Procedia PDF Downloads 68
15020 Research on Residential Block Fabric: A Case Study of Hangzhou West Area

Authors: Wang Ye, Wei Wei

Abstract:

Residential block construction in big Chinese cities began in the 1950s, and four models have had a far-reaching influence on the modern residential block over the course of its development: the unit compound and the residential district from the 1950s to the 1980s, and the gated community and the open community from the 1990s to the present. Based on an analysis of the fabric of these four models, this article takes residential blocks in the Hangzhou west area as an example and carries out studies at the urban-structure level and the block-spatial level, mainly covering the urban road network, land use, community function, road organization, public space, and building fabric. Finally, the article puts forward a semi-open sub-community strategy to improve the current fabric.

Keywords: Hangzhou west area, residential block model, residential block fabric, semi-open sub-community strategy

Procedia PDF Downloads 417
15019 Differences in Parental Acceptance, Rejection, and Attachment and Associations with Adolescent Emotional Intelligence and Life Satisfaction

Authors: Diana Coyl-Shepherd, Lisa Newland

Abstract:

Research and theory suggest that parenting and parent-child attachment influence emotional development and well-being. Studies indicate that adolescents often describe differences in their relationships with each parent and may form different types of attachment to mothers and fathers. During adolescence and young adulthood, romantic partners may also become attachment figures, influencing well-being and providing a relational context for emotion skill development. Mothers, however, tend to remain the primary attachment figure; fathers and romantic partners are more likely to be secondary attachment figures. The following hypotheses were tested: 1) participants would rate mothers as more accepting and less rejecting than fathers, 2) participants would rate secure attachment to mothers higher, and insecure attachment lower, compared to fathers and romantic partners, 3) parental rejection and insecure attachment would be negatively related to life satisfaction and emotional intelligence, and 4) secure attachment and parental acceptance would be positively related to life satisfaction and emotional intelligence. After IRB approval and informed consent, one hundred fifty adolescents and young adults (ages 11-28, M = 19.64; 71% female) completed an online survey. Measures included parental acceptance, rejection, attachment (i.e., secure, dismissing, and preoccupied), emotional intelligence (i.e., seeking and providing comfort, use and understanding of one's own emotions, expressing warmth, understanding and responding to others' emotional needs), and well-being (i.e., self-confidence and life satisfaction). As hypothesized, compared to fathers', mothers' acceptance was significantly higher, t(190) = 3.98, p < .001, and rejection significantly lower, t(190) = -4.40, p < .001.
Group differences in secure attachment were significant, F(2, 389) = 40.24, p < .001; post-hoc analyses revealed significant differences between mothers and fathers and between mothers and romantic partners; mothers had the highest mean score. Group differences in preoccupied attachment were significant, F(2, 388) = 13.37, p < .001; post-hoc analyses revealed significant differences between mothers and romantic partners and between fathers and romantic partners; mothers had the lowest mean score. However, group differences in dismissing attachment were not significant, F(2, 389) = 1.21, p = .30; scores for mothers and romantic partners were similar, and fathers' mean score was the highest. For hypotheses 3 and 4, significant negative correlations were found between life satisfaction and dismissing parent and romantic attachment, preoccupied father and romantic attachment, and mother and father rejection; the secure attachment variables and parental acceptance were positively correlated with life satisfaction. Self-confidence was correlated only with mother acceptance. For emotional intelligence, seeking and providing comfort were negatively correlated with parental dismissing attachment and mother rejection; secure mother and romantic attachment and mother acceptance were positively correlated with these variables. Use and understanding of one's own emotions were negatively correlated with parent and partner dismissing attachment and parental rejection; romantic secure attachment and parental acceptance were positively correlated. Expressing warmth was negatively correlated with the dismissing attachment variables, romantic preoccupied attachment, and parental rejection, whereas the secure attachment variables were positively associated. Understanding and responding to others' emotional needs were correlated with parental dismissing and preoccupied attachment variables and mother rejection; only secure father attachment was positively correlated.

Keywords: adolescent emotional intelligence, life satisfaction, parent and romantic attachment, parental rejection and acceptance

Procedia PDF Downloads 192
15018 Predictive Analysis of Chest X-rays Using NLP and Large Language Models with the Indiana University Dataset and Random Forest Classifier

Authors: Azita Ramezani, Ghazal Mashhadiagha, Bahareh Sanabakhsh

Abstract:

This study investigates the combination of Random Forest classifiers with large language models (LLMs) and natural language processing (NLP) to improve diagnostic accuracy in chest X-ray analysis using the Indiana University dataset. Utilizing advanced NLP techniques, the research preprocesses textual data from radiological reports to extract key features, which are then merged with image-derived data. This enriched dataset is analyzed with Random Forest classifiers to predict specific clinical results, focusing on the identification of health issues and the estimation of case urgency. The findings reveal that the combination of NLP, LLMs, and machine learning increases not only diagnostic precision but also reliability, especially in quickly identifying critical conditions. Achieving an accuracy of 99.35%, the model shows significant advancements over conventional diagnostic techniques. The results emphasize the large potential of machine learning in medical imaging, suggesting that these technologies could greatly enhance clinician judgment and patient outcomes by offering quicker and more precise diagnostic approximations.
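A minimal sketch of the text-feature half of such a pipeline (not the authors' actual implementation) is shown below: TF-IDF features extracted from invented radiology-style snippets feed a Random Forest. The corpus, labels, and hyperparameters are purely illustrative.

```python
# Illustrative sketch only: TF-IDF report features + Random Forest.
# The tiny invented corpus and labels stand in for real report data.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

reports = [
    "lungs are clear no acute cardiopulmonary abnormality",
    "no focal consolidation or pneumothorax identified",
    "right lower lobe opacity concerning for pneumonia",
    "large left pleural effusion with adjacent atelectasis",
]
labels = [0, 0, 1, 1]  # 0 = normal, 1 = abnormal (invented)

vec = TfidfVectorizer()
X = vec.fit_transform(reports)
clf = RandomForestClassifier(
    n_estimators=50, bootstrap=False, random_state=0
).fit(X, labels)

new = vec.transform(["left pleural effusion and lobe opacity noted"])
print("predicted class:", clf.predict(new)[0])
```

In the study, these text-derived features would be concatenated with image-derived features before classification; the sketch shows only the report-text branch.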

Keywords: natural language processing (NLP), large language models (LLMs), random forest classifier, chest x-ray analysis, medical imaging, diagnostic accuracy, indiana university dataset, machine learning in healthcare, predictive modeling, clinical decision support systems

Procedia PDF Downloads 46
15017 Debriefing Practices and Models: An Integrative Review

Authors: Judson P. LaGrone

Abstract:

Simulation-based education in curricula was once a luxurious component of nursing programs but now serves as a vital element of an individual’s learning experience. A debriefing occurs after the simulation scenario or clinical experience is completed to allow the instructor(s) or trained professional(s) to act as a debriefer to guide a reflection with a purpose of acknowledging, assessing, and synthesizing the thought process, decision-making process, and actions/behaviors performed during the scenario or clinical experience. Debriefing is a vital component of the simulation process and educational experience to allow the learner(s) to progressively build upon past experiences and current scenarios within a safe and welcoming environment with a guided dialog to enhance future practice. The aim of this integrative review was to assess current practices of debriefing models in simulation-based education for health care professionals and students. The following databases were utilized for the search: CINAHL Plus, Cochrane Database of Systemic Reviews, EBSCO (ERIC), PsycINFO (Ovid), and Google Scholar. The advanced search option was useful to narrow down the search of articles (full text, Boolean operators, English language, peer-reviewed, published in the past five years). Key terms included debrief, debriefing, debriefing model, debriefing intervention, psychological debriefing, simulation, simulation-based education, simulation pedagogy, health care professional, nursing student, and learning process. Included studies focus on debriefing after clinical scenarios of nursing students, medical students, and interprofessional teams conducted between 2015 and 2020. Common themes were identified after the analysis of articles matching the search criteria. Several debriefing models are addressed in the literature with similarities of effectiveness for participants in clinical simulation-based pedagogy. 
Themes identified included (a) the importance of debriefing in simulation-based pedagogy, (b) the environment in which debriefing takes place as an important consideration, (c) the individuals who should conduct the debrief, (d) the length of the debrief, and (e) the methodology of the debrief. Debriefing models supported by theoretical frameworks and facilitated by trained staff are vital for a successful debriefing experience. Models ranged from self-debriefing, facilitator-led debriefing, and video-assisted debriefing to rapid cycle deliberate practice and reflective debriefing. A recurring finding was the emphasis on continued research into systematic tool development and analysis of the validity and effectiveness of current debriefing practices. There is a lack of consistency in debriefing models among nursing curricula, together with an increasing rate of ill-prepared faculty available to facilitate the debriefing phase of the simulation.

Keywords: debriefing model, debriefing intervention, health care professional, simulation-based education

Procedia PDF Downloads 142
15016 Landslide Susceptibility Mapping Using Soft Computing in Amhara Saint

Authors: Semachew M. Kassa, Africa M. Geremew, Tezera F. Azmatch, Nandyala Darga Kumar

Abstract:

Frequency ratio (FR) and analytical hierarchy process (AHP) methods have been developed based on past landslide failure points to map landslide susceptibility, because landslides can seriously harm both the environment and society. However, it is still difficult to select the most efficient method and correctly identify the main driving factors for particular regions. In this study, we used fourteen landslide conditioning factors (LCFs) and five soft computing algorithms, including Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), Artificial Neural Network (ANN), and Naïve Bayes (NB), to predict landslide susceptibility at a 12.5 m spatial scale. The performance of the RF (F1-score: 0.88, AUC: 0.94), ANN (F1-score: 0.85, AUC: 0.92), and SVM (F1-score: 0.82, AUC: 0.86) methods was significantly better than that of the LR (F1-score: 0.75, AUC: 0.76) and NB (F1-score: 0.73, AUC: 0.75) methods, according to the classification results based on the inventory landslide points. The findings also showed that around 35% of the study region consists of areas with high or very high landslide risk (susceptibility greater than 0.5). The very high-risk locations were primarily found in the western and southeastern regions, and all five models showed good agreement and similar geographic distribution patterns in landslide susceptibility. The areas with the highest landslide risk include the western part of Amhara Saint Town, the northern part, and the St. Gebreal Church villages, with mean susceptibility values greater than 0.5. Rainfall, distance to road, and slope were typically among the leading factors for most villages, although the primary contributors to landslide vulnerability varied slightly across the five models. Decision-makers and policy planners can use the information from our study to make informed decisions and establish policies.
The findings also suggest that different areas should adopt different safeguards to reduce or prevent serious damage from landslide events.
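In the same spirit, the toy benchmark below scores three of the listed classifiers with F1 and AUC on synthetic two-class data standing in for landslide and non-landslide points with a handful of conditioning factors. Class separations, sample sizes, and hyperparameters are invented, so the scores only illustrate the evaluation procedure, not the paper's results.

```python
# Schematic model comparison: F1 and AUC on synthetic "conditioning
# factor" data. Distributions and settings are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n = 400
X = np.vstack([rng.normal(0.0, 1.0, (n, 5)),    # stable points
               rng.normal(1.2, 1.0, (n, 5))])   # landslide points
y = np.array([0] * n + [1] * n)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)

models = {
    "RF": RandomForestClassifier(random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
}
results = {}
for name, m in models.items():
    m.fit(Xtr, ytr)
    f1 = f1_score(yte, m.predict(Xte))
    auc = roc_auc_score(yte, m.predict_proba(Xte)[:, 1])
    results[name] = (f1, auc)
    print(f"{name}: F1={f1:.2f}, AUC={auc:.2f}")
```

A real susceptibility study would replace the Gaussian blobs with rasterized LCF values sampled at inventory and non-landslide points.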

Keywords: artificial neural network, logistic regression, landslide susceptibility, naïve Bayes, random forest, support vector machine

Procedia PDF Downloads 82
15015 Electroforming of 3D Digital Light Processing Printed Sculptures Used as a Low Cost Option for Microcasting

Authors: Cecile Meier, Drago Diaz Aleman, Itahisa Perez Conesa, Jose Luis Saorin Perez, Jorge De La Torre Cantero

Abstract:

In this work, two ways of creating small-sized metal sculptures are proposed: the first by means of microcasting and the second by electroforming, starting from models printed in 3D using an FDM (Fused Deposition Modeling) printer or a DLP (Digital Light Processing) printer. It is viable to replace the wax in artistic foundry processes with 3D printed objects. In this technique, the digital models are manufactured using a low-cost FDM 3D printer in polylactic acid (PLA). This material is used because its properties make it a viable substitute for wax within the processes of artistic casting using the lost-wax technique of Ceramic Shell casting. This technique consists of covering a sculpture of wax, or in this case PLA, with several layers of thermoresistant material. This material is heated to melt out the PLA, yielding an empty mold that is later filled with molten metal. It is verified that the PLA models reduce cost and time compared with hand modeling in wax. In addition, parts can be manufactured with 3D printing that are not possible to create with manual techniques. However, sculptures created with this technique have a size limit. The problem is that when pieces printed in PLA are very small, they lose detail, and the laminar texture hides the shape of the piece. A DLP-type printer allows obtaining more detailed and smaller pieces than FDM. Such small models are quite difficult and complex to melt out using the lost-wax Ceramic Shell technique. As an alternative, there are microcasting and electroforming, which specialize in creating small metal pieces such as jewelry. Microcasting is a variant of lost-wax casting that consists of placing the model in a cylinder into which the refractory material is also poured. The molds are heated in an oven to melt out the model and fire the mold.
Finally, the metal is poured into the still-hot cylinders, which rotate at high speed in a machine to distribute the metal properly. Because microcasting requires expensive materials and machinery to melt a piece of metal, electroforming is an alternative to this process. Electroforming uses models made of different materials; in this study, micro-sculptures printed in 3D are used. These are subjected to an electroforming bath that covers the pieces with a very thin layer of metal. This work investigates the recommended sizes for 3D printing in both PLA and resin, and first tests are being carried out to validate the electroforming process on micro-sculptures printed in resin using a DLP printer.

Keywords: sculptures, DLP 3D printer, microcasting, electroforming, fused deposition modeling

Procedia PDF Downloads 135
15014 National Defense and Armed Forces Development in the Member States of the Visegrad Group

Authors: E. Hronyecz

Abstract:

Guaranteeing the independence of the V4 member states, the protection of their national values and citizens, and the security of the Central and Eastern European region requires the development of military capabilities at the national level. As a result, European countries have begun developing capabilities and forces, within which nations are seeking to strengthen the capabilities of their armies and make their interoperability more effective. One aspect of this is the upgrading of military equipment, personnel equipment, and other human resources. Based on the author's preliminary research (analysis of the scientific literature and the relevant statistical data, and professional consultations with experts in the field), it can clearly be claimed for all four Visegrad Group states that a change of direction in the field of defense has been noticeable since the second half of the last decade. Collective defense has come to the forefront again, with military training, professionalism, and the radical modernization of technical equipment becoming crucial.

Keywords: armed forces, cooperation, development, Visegrad Group

Procedia PDF Downloads 133
15013 On the Factors Affecting Computing Students’ Awareness of the Latest ICTs

Authors: O. D. Adegbehingbe, S. D. Eyono Obono

Abstract:

The education sector is constantly faced with rapid changes in technology, both in terms of ensuring that the curriculum is up to date and in terms of making sure that students are aware of these technological changes. This challenge can be seen as the motivation for this study, which examines the factors affecting computing students' awareness of the latest Information and Communication Technologies (ICTs). The aim of this study is divided into two sub-objectives: the selection of relevant theories and the design of a conceptual model to support them, followed by the empirical testing of the designed model. The first objective is achieved through a review of the existing literature on technology adoption theories and models. The second objective is achieved using a survey of computing students in the four universities of the KwaZulu-Natal province of South Africa. Data collected from this survey is analyzed using the Statistical Package for the Social Sciences (SPSS) via descriptive statistics, ANOVA, and Pearson correlations. The main hypothesis of this study is that there is a relationship between the demographics and prior conditions of computing students and their awareness of general ICT trends and of Digital Switch-Over (DSO), a new technology that involves the change from analog to digital television broadcasting in order to achieve improved spectrum efficiency. The prior conditions of the computing students considered in this study are students' perceived exposure to career guidance and students' perceived curriculum currency. The results of this study confirm that gender, ethnicity, and a high school computing course affect students' perceived curriculum currency, while high school location affects students' awareness of DSO. The results also confirm that there is a relationship between students' prior conditions and their awareness of general ICT trends and of DSO in particular.
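The analysis step (descriptive statistics, ANOVA, and Pearson correlations) can be sketched with SciPy on invented survey-style data; the group structure and effect sizes below are assumptions for illustration only, and the study itself used SPSS.

```python
# Illustrative sketch of the statistical analysis on invented data:
# one-way ANOVA across groups and a Pearson correlation between scales.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Awareness scores (1-5) for three invented demographic groups.
g1, g2, g3 = (rng.normal(m, 0.6, 40) for m in (3.2, 3.5, 3.9))
F, p = stats.f_oneway(g1, g2, g3)
print(f"ANOVA: F={F:.2f}, p={p:.4f}")

# Correlation between perceived curriculum currency and DSO awareness.
currency = rng.uniform(1, 5, 120)
awareness = 0.5 * currency + rng.normal(0.0, 0.8, 120)
r, p_r = stats.pearsonr(currency, awareness)
print(f"Pearson r={r:.2f}, p={p_r:.4f}")
```

The same two calls cover the study's hypothesis tests: group effects of demographics on awareness (ANOVA) and associations between prior conditions and awareness (Pearson).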

Keywords: education, information technologies, IDT, awareness

Procedia PDF Downloads 357
15012 Generative Adversarial Network Based Fingerprint Anti-Spoofing Limitations

Authors: Yehjune Heo

Abstract:

Fingerprint anti-spoofing approaches have been actively developed and applied in real-world applications. One of the main problems of fingerprint anti-spoofing is that it is not robust to unseen samples, especially in real-world scenarios. A possible solution is to generate artificial but realistic fingerprint samples and use them for training in order to achieve good generalization. This paper contains experimental and comparative results with currently popular GAN-based methods and uses realistic synthesis of fingerprints in training in order to increase performance. Among the various GAN models, the popular StyleGAN is used for the experiments. The CNN models were first trained with a dataset that did not contain generated fake images, and the accuracy along with the mean average error rate were recorded. Then, the generated fake images (fake images of live fingerprints and fake images of spoof fingerprints) were each combined with the original images (real images of live fingerprints and real images of spoof fingerprints), and various CNN models were trained. For each CNN model trained with the dataset containing generated fake images, the best performance, together with the accuracy and the mean average error rate, was recorded. We observe that current GAN-based approaches need significant improvements in anti-spoofing performance, although the overall quality of the synthesized fingerprints seems reasonable. We include an analysis of this performance degradation, especially with a small number of samples. In addition, we suggest several approaches towards improved generalization with a small number of samples, by focusing on what GAN-based approaches should and should not learn.
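The with/without-synthetic-data comparison can be caricatured with a tiny numeric experiment: a nearest-centroid classifier fitted once on "real" samples only and once on real plus broader "GAN-style" samples, then evaluated on a shifted unseen-domain set. The 8-dimensional features, the synthetic generator, and the domain shift are invented stand-ins for fingerprint data; the sketch only illustrates the experimental design, not the paper's CNNs.

```python
# Toy sketch of the augmentation experiment: train with real samples
# only vs. real + synthetic samples, test on an unseen domain.
# Features and distributions are invented stand-ins.
import numpy as np

rng = np.random.default_rng(3)

def domain(center_live, center_spoof, n):
    X = np.vstack([rng.normal(center_live, 1.0, (n, 8)),
                   rng.normal(center_spoof, 1.0, (n, 8))])
    y = np.array([0] * n + [1] * n)      # 0 = live, 1 = spoof
    return X, y

def fit_predict(Xtr, ytr, Xte):
    """Minimal nearest-centroid classifier."""
    c0 = Xtr[ytr == 0].mean(axis=0)
    c1 = Xtr[ytr == 1].mean(axis=0)
    d0 = np.linalg.norm(Xte - c0, axis=1)
    d1 = np.linalg.norm(Xte - c1, axis=1)
    return (d1 < d0).astype(int)

X_real, y_real = domain(0.0, 2.0, 100)    # seen sensor
X_test, y_test = domain(0.5, 2.5, 100)    # unseen sensor (shifted)
X_syn, y_syn = domain(0.25, 2.25, 100)    # "GAN" samples: broader coverage

base = (fit_predict(X_real, y_real, X_test) == y_test).mean()
aug = (fit_predict(np.vstack([X_real, X_syn]),
                   np.concatenate([y_real, y_syn]),
                   X_test) == y_test).mean()
print(f"real-only: {base:.2f}, augmented: {aug:.2f}")
```

Whether augmentation actually helps depends on how well the synthetic distribution covers the unseen domain, which is precisely the limitation the paper analyzes.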

Keywords: anti-spoofing, CNN, fingerprint recognition, GAN

Procedia PDF Downloads 184
15011 A Study on the Development of Self-Help Therapy for Bipolar Disorder

Authors: Bae Yu been, Choi Sung won, Lee Ju yeon, Yang Dan Bi

Abstract:

The purpose of this study is to develop a self-help therapy program for bipolar disorder (BD). Psychosocial treatment is an adjunct to pharmacotherapy for BD; however, access is limited and the costs are high. The objective of this study is therefore to overcome these limitations by developing a self-help treatment for BD and examining its efficacy. A randomized controlled trial compared a self-help therapy (ST) intervention with a treatment-as-usual (TAU) group. The ST group completed the program over 8 weeks (16 sessions). A mood chart, the Quality of Life in Bipolar Disorder Questionnaire, the Attitudes Toward Seeking Professional Help Scale, the BIS, the CERQ, the YMRS, and the MADRS were administered at pre-test, post-test, and follow-up. The efficacy of the self-help therapy was analyzed using mixed ANOVAs. There were significant differences in the rate of occurrence of mania or depression between the two groups. The ST group reported stable moods on the mood chart, reductions in mood symptoms, and improvements in quality of life and treatment adherence. This study, the first of its kind conducted in Korea, confirmed that self-help therapy is applicable to patients with BD.

Keywords: self help therapy, bipolar disorder, self help, self therapy

Procedia PDF Downloads 677
15010 Towards the Reverse Engineering of UML Sequence Diagrams Using Petri Nets

Authors: C. Baidada, M. H. Abidi, A. Jakimi, E. H. El Kinani

Abstract:

Reverse engineering has become a viable method to measure an existing system and reconstruct the necessary model from its original. The reverse engineering of behavioral models consists of extracting high-level models that help understand the behavior of existing software systems. In this paper, we propose an approach for the reverse engineering of sequence diagrams from the analysis of execution traces produced dynamically by an object-oriented application, using Petri nets. Our results show that this approach can produce sequence diagrams in reasonable time and suggest that these diagrams are helpful in understanding the behavior of the underlying application. Finally, we discuss the approaches and tools needed in the process of reverse engineering UML behavior. This work is a substantial step towards providing a high-quality methodology for the effective and efficient reverse engineering of sequence diagrams.
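As a hedged sketch of the first step of such a process, the snippet below parses a textual execution trace into an ordered message list and the set of participating lifelines, the raw material for a sequence diagram (or for the transitions of a Petri net). The trace format shown is an assumption; real traces depend on the instrumentation tool used.

```python
# Hypothetical sketch: parse a caller -> callee execution trace into
# sequence-diagram messages. The trace format is an assumption.
import re

TRACE = """\
Main.main -> OrderService.place
OrderService.place -> Inventory.reserve
Inventory.reserve -> Inventory.check
OrderService.place -> Billing.charge
"""

LINE = re.compile(r"(\w+)\.(\w+)\s*->\s*(\w+)\.(\w+)")

def messages(trace):
    """Return (caller_object, callee_object, method) tuples in call order."""
    out = []
    for line in trace.splitlines():
        m = LINE.match(line)
        if m:
            caller, _, callee, method = m.groups()
            out.append((caller, callee, method))
    return out

msgs = messages(TRACE)
lifelines = sorted({obj for c, t, _ in msgs for obj in (c, t)})
print("lifelines:", lifelines)
print("first message:", msgs[0])
```

From this message list, each (caller, callee, method) triple can be mapped to a Petri-net transition whose firing order reproduces the observed call sequence.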

Keywords: reverse engineering, UML behavior, sequence diagram, execution traces, petri nets

Procedia PDF Downloads 446