Search results for: spiking neuron models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6846

1416 Comparison of Mcgrath, Pentax, and Macintosh Laryngoscope in Normal and Cervical Immobilized Manikin by Novices

Authors: Jong Yeop Kim, In Kyong Yi, Hyun Jeong Kwak, Sook Young Lee, Sung Yong Park

Abstract:

Background: Several video laryngoscopes (VLs) have been used to facilitate tracheal intubation in normal and potentially difficult airways, especially by novice personnel. The aim of this study was to compare tracheal intubation performance, in terms of time to intubation, glottic view, difficulty, and dental click, by novices using the McGrath VL, the Pentax Airway Scope (AWS), and the Macintosh laryngoscope in normal and cervical immobilized manikin models. Methods: Thirty-five anesthesia nurses without previous intubation experience were recruited. The participants performed endotracheal intubation in a manikin model at two simulated neck positions (normal and fixed neck via cervical immobilization), using three different devices (McGrath VL, Pentax AWS, and Macintosh direct laryngoscope) three times each. Performance parameters included intubation time, success rate of intubation, Cormack-Lehane laryngoscopy grade, dental click, and subjective difficulty score. Results: Intubation time and first-attempt success rate were not significantly different between the three groups in the normal airway manikin. In the cervical immobilized manikin, intubation time was shorter (p = 0.012) and the first-attempt success rate was significantly higher (p < 0.001) with the McGrath VL and Pentax AWS than with the Macintosh laryngoscope. Both VLs also showed lower difficulty scores (p < 0.001) and more Cormack-Lehane grade I views (p < 0.001). The incidence of dental clicks was higher with the McGrath VL than with the Macintosh laryngoscope in both the normal and the cervical immobilized airway (p = 0.005 and p < 0.001, respectively). Conclusion: The McGrath VL and Pentax AWS resulted in shorter intubation times and higher first-attempt success rates than the Macintosh laryngoscope when used by novice intubators in a cervical immobilized manikin model. However, the higher incidence of dental clicks suggests the McGrath VL may increase the risk of dental injury compared with the Macintosh laryngoscope in this scenario.

Keywords: intubation, manikin, novice, videolaryngoscope

Procedia PDF Downloads 158
1415 Decision Making in Medicine and Treatment Strategies

Authors: Kamran Yazdanbakhsh, Somayeh Mahmoudi

Abstract:

Three reasons justify the use of decision theory in medicine: 1. The growth and complexity of medical knowledge make it difficult to use treatment information effectively without resorting to sophisticated analytical methods, especially when it comes to detecting errors and identifying opportunities for treatment in large databases. 2. There is wide geographic variability in medical practice. In a context where medical costs are borne, at least in part, by the patient, this variability raises doubts about the relevance of the choices made by physicians. These differences are generally attributed to differing estimates of the probability that a treatment will succeed and differing assessments of the value of success or failure. Without explicit decision criteria, it is difficult to identify precisely the sources of these variations in treatment. 3. Beyond the principle of informed consent, patients need to be involved in decision-making. For this, the decision process should be explained and broken down. A decision problem consists of selecting the best option among a set of choices. The problem is what is meant by "best option", that is, which criteria should guide the choice. The purpose of decision theory is to answer this question. The systematic use of decision models allows us to better understand differences in medical practice and facilitates the search for consensus. In this respect, there are three types of situations: certain, risky, and uncertain. 1. In certain situations, the consequences of each decision are certain. 2. In risky situations, every decision can have several consequences, and the probability of each consequence is known. 3. In uncertain situations, each decision can have several consequences, and the probabilities are not known. Our aim in this article is to show how decision theory can usefully be mobilized to meet the needs of physicians. Decision theory can make decisions more transparent: first, by systematically clarifying the data of the problem, and second, by asking which basic principles should guide the choice. Once the problem is clarified, decision theory provides operational tools to represent the available information and determine patient preferences, and thus to assist the patient and doctor in their choices.
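The "best option" criterion for risky situations, where each consequence has a known probability, is expected-utility maximization. A minimal sketch in Python; the option names, probabilities, and utility values below are hypothetical illustrations, not data from the article:

```python
def expected_utility(option):
    """Expected utility of an option under risk: a sum over
    (probability, utility) pairs for its possible consequences."""
    return sum(p * u for p, u in option)

def best_option(options):
    """Select the option maximizing expected utility from a
    mapping of option name -> list of (probability, utility)."""
    return max(options, key=lambda name: expected_utility(options[name]))

# Hypothetical example: surgery succeeds with p=0.9 (utility 10)
# or fails with p=0.1 (utility 0); medication gives utility 7 surely.
treatments = {
    "surgery": [(0.9, 10.0), (0.1, 0.0)],
    "medication": [(1.0, 7.0)],
}
```

Under these made-up numbers, surgery (expected utility 9.0) would be preferred over medication (7.0); the value of the framework is that such trade-offs, and the patient preferences encoded in the utilities, become explicit.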

Keywords: decision making, medicine, treatment strategies, patient

Procedia PDF Downloads 579
1414 Criminal Laws Associated with Cyber-Medicine and Telemedicine in Current Law Systems in the World

Authors: Shahryar Eslamitabar

Abstract:

Currently, the internet plays an important role in various scientific, commercial, and service practices. Thanks to information and communication technology, the healthcare industry can offer professional medical services via the internet, generally known as cyber-medicine, across a wider geographical area. With appealing benefits such as convenience in offering healthcare services, improved accessibility to those services, enhanced information exchange, cost-effectiveness, and time-saving, tele-health has increasingly developed innovative models of healthcare delivery. However, it presents many potential hazards to cyber-patients, inherent in the use of the system. First, there are legal issues associated with the communication and transfer of information on the internet. These include licensure, malpractice, liability, and jurisdiction, as well as privacy, confidentiality, and security of personal data, the most important challenges brought about by this system. Additional concerns are technological and ethical. Although there are rules addressing the pitfalls of cyber-medicine practice in the USA and some European countries, for all these developments it is still practiced in a legal vacuum in many countries. In addition to domestic legislation to deal with potential problems arising from the system, it is also imperative that international or regional agreements be developed to achieve the harmonization of laws among countries and states. This article discusses some implications of the practice of cyber-medicine in the healthcare system according to the experience of some developed countries, using a comparative study of laws. It also reviews the status of tele-health laws in Iran. Finally, it is intended to pave the way for countries like Iran, with a newly established judicial system for health laws, to develop appropriate regulations by providing some recommendations.

Keywords: tele-health, cyber-medicine, telemedicine, criminal laws, legislations, time-saving

Procedia PDF Downloads 661
1413 An Unsupervised Domain-Knowledge Discovery Framework for Fake News Detection

Authors: Yulan Wu

Abstract:

With the rapid development of social media, the issue of fake news has gained considerable prominence, drawing the attention of both the public and governments. The widespread dissemination of false information poses a tangible threat across multiple domains of society, including politics, the economy, and health. However, because much research has concentrated on supervised training models within specific domains, the effectiveness of these models diminishes when they are applied to identify fake news across multiple domains. To solve this problem, some approaches based on domain labels have been proposed. By assigning news to its specific area in advance, classifiers specialized to the corresponding field may detect fake news more accurately. However, these approaches disregard the fact that news records can pertain to multiple domains, resulting in a significant loss of valuable information. In addition, the datasets used for training must all be domain-labeled, which creates unnecessary complexity. To solve these problems, an unsupervised domain-knowledge discovery framework for fake news detection is proposed. First, to effectively retain the multi-domain knowledge of the text, a low-dimensional vector capturing domain embeddings is generated for each news text. Subsequently, a feature extraction module utilizing the unsupervisedly discovered domain embeddings is used to extract comprehensive features of the news. Finally, a classifier is employed to determine the authenticity of the news. To verify the proposed framework, tests are conducted on existing widely used datasets, and the experimental results demonstrate that this method improves detection performance for fake news across multiple domains. Moreover, even on datasets that lack domain labels, this method can still effectively transfer domain knowledge, which can reduce the time consumed by tagging without sacrificing detection accuracy.
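The key point above is that a soft, low-dimensional domain vector preserves multi-domain membership where a hard label cannot. A minimal sketch of that idea, assuming a seed-vocabulary approach; the domain names and seed words are hypothetical, and the paper's actual embedding method is not specified here:

```python
from collections import Counter
import math

# Hypothetical seed vocabularies, one per domain (illustrative only).
DOMAIN_SEEDS = {
    "politics": {"election", "senate", "policy", "vote"},
    "health": {"vaccine", "virus", "hospital", "doctor"},
    "economy": {"market", "stock", "inflation", "bank"},
}

def domain_embedding(text):
    """Map a news text to a low-dimensional soft domain vector.

    Unlike a single hard domain label, the normalized vector keeps
    multi-domain membership: a health-policy story scores on both
    the politics and health axes at once.
    """
    tokens = Counter(text.lower().split())
    raw = [sum(tokens[w] for w in seeds) for seeds in DOMAIN_SEEDS.values()]
    norm = math.sqrt(sum(x * x for x in raw)) or 1.0
    return [x / norm for x in raw]
```

For a sentence mixing political and health vocabulary, both components come out nonzero, which is exactly the information a hard pre-segmentation into one domain would discard.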

Keywords: fake news, deep learning, natural language processing, multiple domains

Procedia PDF Downloads 96
1412 Probability Sampling in Matched Case-Control Study in Drug Abuse

Authors: Surya R. Niraula, Devendra B Chhetry, Girish K. Singh, S. Nagesh, Frederick A. Connell

Abstract:

Background: Although random sampling is generally considered to be the gold standard for population-based research, the majority of drug abuse research is based on non-random sampling despite the well-known limitations of this kind of sampling. Method: We compared the statistical properties of two surveys of drug abuse in the same community: one using snowball sampling of drug users who then identified “friend controls” and the other using a random sample of non-drug users (controls) who then identified “friend cases.” Models to predict drug abuse based on risk factors were developed for each data set using conditional logistic regression. We compared the precision of each model using a bootstrapping method and the predictive properties of each model using receiver operating characteristic (ROC) curves. Results: Analysis of 100 random bootstrap samples drawn from the snowball-sample data set showed a wide variation in the standard errors of the beta coefficients of the predictive model, none of which achieved statistical significance. On the other hand, bootstrap analysis of the random-sample data set showed less variation and did not change the significance of the predictors at the 5% level when compared to the non-bootstrap analysis. The area under the ROC curve for the model derived from the random-sample data set was similar when fitted to either data set (0.93 for random-sample data vs. 0.91 for snowball-sample data, p = 0.35); however, when the model derived from the snowball-sample data set was fitted to each of the data sets, the areas under the curve were significantly different (0.98 vs. 0.83, p < 0.001). Conclusion: The proposed method of random sampling of controls appears to be superior from a statistical perspective to snowball sampling and may represent a viable alternative to snowball sampling.
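The precision comparison above rests on bootstrapping the coefficient estimates: resample the cases with replacement, refit, and look at the spread of the refitted coefficients. A minimal sketch of case-resampling bootstrap standard errors, shown for an ordinary least-squares slope rather than the authors' conditional logistic regression (which needs matched-pair handling beyond a short sketch):

```python
import random
import statistics

def slope(xs, ys):
    """Ordinary least-squares slope for a single predictor."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def bootstrap_se(xs, ys, n_boot=1000, seed=0):
    """Standard error of the slope via case resampling: draw n cases
    with replacement, refit, and take the standard deviation of the
    refitted slopes across bootstrap replicates."""
    rng = random.Random(seed)
    n = len(xs)
    replicates = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        replicates.append(slope([xs[i] for i in idx],
                                [ys[i] for i in idx]))
    return statistics.stdev(replicates)
```

The wide variation the authors report for the snowball sample corresponds to a large `bootstrap_se` relative to the coefficient itself, pushing the predictors below significance.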

Keywords: drug abuse, matched case-control study, non-probability sampling, probability sampling

Procedia PDF Downloads 493
1411 Hyper Parameter Optimization of Deep Convolutional Neural Networks for Pavement Distress Classification

Authors: Oumaima Khlifati, Khadija Baba

Abstract:

Pavement distress is the main factor responsible for the deterioration of road structures, vehicle damage, and reduced driver comfort. Transportation agencies spend a high proportion of their funds on pavement monitoring and maintenance. The auscultation of pavement distress has traditionally been based on manual surveys, which are extremely time consuming, labor intensive, and require domain expertise. Therefore, automatic distress detection is needed to reduce the cost of manual inspection and avoid more serious damage by implementing the appropriate remediation actions at the right time. Inspired by recent deep learning applications, this paper proposes an algorithm for automatic road distress detection and classification using a deep convolutional neural network (DCNN). In this study, the types of pavement distress are classified as transverse or longitudinal cracking, alligator cracking, pothole, and intact pavement. The dataset used in this work is composed of public asphalt pavement images. In order to learn the structure of the different types of distress, the DCNN models are trained and tested as a multi-label classification task. In addition, to obtain the highest accuracy for our model, we adjust the structural hyperparameters, such as the number of convolution and max-pooling layers, the number and size of filters, the loss function, the activation functions, and the optimizer, as well as the fine-tuning hyperparameters, which include batch size and learning rate. The optimization of the model is executed by checking all feasible combinations and selecting the best-performing one. After optimization, performance metrics are calculated, describing the training and validation accuracies, precision, recall, and F1 score.
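"Checking all feasible combinations and selecting the best-performing one" is an exhaustive grid search. A minimal sketch of that loop; the search-space values and the `evaluate` placeholder are illustrative assumptions, not the paper's actual settings:

```python
from itertools import product

# Hypothetical search space mirroring the hyper-parameters named above.
grid = {
    "n_conv_blocks": [3, 4, 5],
    "filters": [16, 32, 64],
    "kernel_size": [3, 5],
    "learning_rate": [1e-3, 1e-4],
    "batch_size": [16, 32],
}

def evaluate(cfg):
    """Placeholder for 'build and train the DCNN with this config,
    return its validation accuracy'."""
    ...

def grid_search(grid, evaluate):
    """Try every combination in the grid and keep the best scorer."""
    best_cfg, best_score = None, float("-inf")
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

Exhaustive search is tractable here only because the grid is small (144 combinations above); with more axes, random search or Bayesian optimization is the usual fallback.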

Keywords: distress pavement, hyperparameters, automatic classification, deep learning

Procedia PDF Downloads 93
1410 Understanding the Utilization of Luffa Cylindrica in the Adsorption of Heavy Metals to Clean Up Wastewater

Authors: Akanimo Emene, Robert Edyvean

Abstract:

In developing countries, low-cost methods of wastewater treatment are highly recommended. Adsorption is an efficient and economically viable treatment process for wastewater. The utilisation of this process is based on understanding the relationship between the growth environment and the metal capacity of the biomaterial. Luffa cylindrica (LC), a plant material, was used as an adsorbent in the design of an adsorption system for heavy metals. Chemically modified LC was used to adsorb heavy metal ions, lead and cadmium, from aqueous solution under varying experimental conditions. The experimental factors studied were adsorption time, initial metal ion concentration, ionic strength, and pH of the solution. The chemical nature and surface area of the tissues adsorbing heavy metals in LC biosorption systems were characterised using electron microscopy and infrared spectroscopy, which showed an increase in surface area and improved adhesion capacity after chemical treatment. Metal speciation showed the binary interaction between the ions and the LC surface as the pH increases. Maximum adsorption occurred between pH 5 and pH 6. The ionic strength of the metal ion solution affects the adsorption capacity through the surface charge and the availability of adsorption sites on the LC. The nature of the metal-surface complexes formed was analysed by fitting kinetic and isotherm models to the experimental data. The pseudo-second-order kinetic model and the two-site Langmuir isotherm model showed the best fit. Through an understanding of this process, there will be an opportunity to provide an alternative method for water purification, offering an option when expensive water treatment technologies are not viable in developing countries.
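The two best-fitting models named above have standard closed forms, sketched below; note the Langmuir function uses the common single-site form as a simplification of the paper's two-site variant, and all parameter values in the comments are illustrative, not fitted values from the study:

```python
def pseudo_second_order(t, qe, k2):
    """Adsorbed amount q(t) under the pseudo-second-order kinetic model:
        q(t) = k2 * qe^2 * t / (1 + k2 * qe * t)
    where qe is the equilibrium capacity (mg/g) and k2 the rate
    constant (g/mg/min); q(t) -> qe as t grows."""
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

def langmuir(c, q_max, K):
    """Single-site Langmuir isotherm (simplified from two-site):
        q_e = q_max * K * C / (1 + K * C)
    where q_max is the monolayer capacity and K the affinity constant;
    q_e saturates at q_max for large equilibrium concentration C."""
    return q_max * K * c / (1.0 + K * c)
```

In practice the experimental (t, q) and (C, q) points are fitted to these forms by non-linear least squares, and the quality of fit is what selects them over, e.g., pseudo-first-order or Freundlich alternatives.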

Keywords: adsorption, luffa cylindrica, metal-surface complexes, pH

Procedia PDF Downloads 89
1409 Expert Solutions to Affordable Housing Finance Challenges in Developing Economies

Authors: Timothy Akinwande, Eddie C. M. Hui

Abstract:

Housing the urban poor has remained a challenge for many years across the world, especially in developing economies, despite apparent research attention and policy interventions. It is apt to investigate the prevalent affordable housing (AH) provision challenges using unconventional approaches: it is pragmatic to thoroughly examine housing experts for supply-side solutions to AH challenges and to investigate informal settlers to deduce solutions from the AH demand viewpoint. This study, the supply-side investigation of an ongoing research project, interrogated housing experts to determine significant expert solutions. Focus group discussions and in-depth interviews were conducted with housing experts in Nigeria. Through descriptive, content, and systematic thematic analyses of the data, the major finding is that deliberate finance models designed for the urban poor are the most significant housing finance solution in developing economies. Other findings are that adequately implemented rent control policies, deliberate PPP approaches such as inclusionary housing and land-value capture, and urban renewal programmes that enlighten and tutor the urban poor on how to earn more, spend wisely, and invest in their own better housing will effectively solve AH finance challenges. The findings are informative for the best approaches to achieve effective affordable housing finance for the urban poor in Nigeria, which is indispensable for the achievement of the sustainable development goals. This research’s originality lies in its exploration of experts’ opinions on AH finance to produce an equation model of critical solutions to AH finance challenges. The study data are useful resources for future pro-poor housing studies. This study makes housing policy-oriented recommendations toward effective affordable housing for the urban poor in developing countries.

Keywords: affordable housing, effective affordable housing, housing policy, housing research, sustainable development, urban poor

Procedia PDF Downloads 86
1408 Empowering Children through Co-creation: Writing a Book with and for Children about Their First Steps Towards Urban Independence

Authors: Beata Patuszynska

Abstract:

Children are largely absent from Polish social discourse, a fact which is mirrored in urban planning processes. Their absence creates a vicious circle: an unfriendly urban space discourages children from going outside on their own, meaning adults do not see a need to make spaces more friendly for a group that is not present. The pandemic and lockdown, with their closed schools and temporary ban on unaccompanied minors on the streets, have only reinforced this. The project, co-writing with children a book concerning their first steps into urban independence, aims at empowering children, enabling them to find their voice when it comes to urban space. The foundation for the book was data collected during research and workshops with children from Warsaw primary schools, aged 7-10, the age at which they begin independent travel in the city. The project was carried out with the participation and involvement of children at each creative step. Children were (1) models: the narrator is a 7-year-old boy getting ready for urban independence. He shares his own experience as well as that of his school friends and his 10-year-old sister, who already travels on her own. Children were (2) teachers: the book is based on authentic children’s stories and experience, along with the author’s findings from research undertaken with children, extended by observations and conclusions made during the pandemic. Children were (3) reviewers: draft chapters of the book were reviewed by children during workshops held in a school. The process demonstrated that all children experience similar pleasures and worries when it comes to interaction with urban space. Furthermore, they have similar needs that must be satisfied. In my article, I will discuss: (1) the advantages of creating together with children; (2) my conclusions on how to work with children in participatory processes; (3) research results: perceptions of urban space by children aged 7-10 as they begin independent travel in the city; the barriers to and pleasures derived from independent urban travel; and the influence of the pandemic on children’s feelings and behaviour in urban spaces.

Keywords: children, urban space, co-creation, participation, human rights

Procedia PDF Downloads 103
1407 An AI-generated Semantic Communication Platform in HCI Course

Authors: Yi Yang, Jiasong Sun

Abstract:

Almost every aspect of our daily lives is now intertwined with some degree of human-computer interaction (HCI). HCI courses draw on knowledge from disciplines as diverse as computer science, psychology, design principles, anthropology, and more. Our HCI course, named the Media and Cognition course, is constantly updated to reflect state-of-the-art technological advancements such as virtual reality, augmented reality, and artificial intelligence-based interaction. For more than a decade, our course has used an interest-based approach to teaching, in which students proactively propose research-based questions and collaborate with teachers, using course knowledge to explore potential solutions. Semantic communication plays a key role in facilitating understanding and interaction between users and computer systems, ultimately enhancing system usability and user experience. Advances in AI-generated technology, which have gained significant attention from both academia and industry in recent years, are exemplified by language models like GPT-3 that generate human-like dialogues from given prompts. The latest version of our course implements a semantic communication platform based on AI-generated techniques. The purpose of this semantic communication is twofold: to extract and transmit task-specific information while ensuring efficient end-to-end communication with minimal latency. The platform evaluates the retainability of signal sources and converts low-retainability visual signals into textual prompts; these data are transmitted through AI-generated techniques and reconstructed at the receiving end. Visual signals with a high retainability rate, on the other hand, are compressed and transmitted according to their respective regions. The platform and the associated research are a testament to our students' growing ability to independently investigate state-of-the-art technologies.
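The routing step described above (textual prompts for low-retainability signals, compression for high-retainability ones) might be sketched as follows; the threshold value, `Packet` type, and function names are assumptions for illustration, not details taken from the course platform:

```python
import zlib
from dataclasses import dataclass

RETAIN_THRESHOLD = 0.5  # hypothetical cut-off, not specified in the abstract

@dataclass
class Packet:
    kind: str      # "prompt" or "compressed"
    payload: bytes

def encode(signal, retainability, to_prompt):
    """Route a visual signal down one of the platform's two paths.

    Low-retainability signals are reduced to a textual prompt, to be
    regenerated at the receiver by a generative model; high-
    retainability signals are compressed and sent directly.
    """
    if retainability < RETAIN_THRESHOLD:
        return Packet("prompt", to_prompt(signal).encode("utf-8"))
    return Packet("compressed", zlib.compress(signal))
```

The prompt path trades pixel fidelity for a drastically smaller payload, which is the latency-versus-semantics trade-off the abstract describes.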

Keywords: human-computer interaction, media and cognition course, semantic communication, retainability, prompts

Procedia PDF Downloads 115
1406 Modelling of Heat Generation in an 18650 Lithium-Ion Battery Cell under Varying Discharge Rates

Authors: Foo Shen Hwang, Thomas Confrey, Stephen Scully, Barry Flannery

Abstract:

Thermal characterization plays an important role in battery pack design. Lithium-ion batteries have to be maintained between 15-35 °C to operate optimally. Heat (Q) is generated internally within the batteries during both the charging and discharging phases, and this can be quantified using several standard methods. The most common method of calculating a battery's heat generation is through the addition of the joule heating effect and the entropic change across the battery. These values can be derived from the open-circuit voltage (OCV), nominal voltage (V), operating current (I), battery temperature (T), and the rate of change of the open-circuit voltage with respect to temperature (dOCV/dT). This paper focuses on experimental characterization and comparative modelling of the heat generation rate (Q) across several current discharge rates (0.5C, 1C, and 1.5C) of an 18650 cell. The analysis is conducted utilizing several non-linear mathematical functions, including polynomial, exponential, and power models. Parameter fitting is carried out over the respective function orders: polynomial (n = 3-7), exponential (n = 2), and power functions. The fitted functions are then used as heat source functions in a 3-D computational fluid dynamics (CFD) solver under natural convection conditions. The generated temperature profiles are analyzed for errors against experimental discharge tests conducted at standard room temperature (25 °C). Initial results display low deviation between the experimental and CFD temperature plots. As such, the heat generation functions formulated could be utilized more easily for larger battery applications than other available methods.
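The joule-plus-entropic decomposition described above, Q = I(OCV - V) + I·T·(dOCV/dT), can be written out directly; the numbers in the test are illustrative, not measurements from the paper:

```python
def heat_generation(current, ocv, v, temp_k, docv_dt):
    """Battery heat generation rate (W):
        Q = I*(OCV - V) + I*T*(dOCV/dT)
    First term: irreversible joule/overpotential heating.
    Second term: reversible entropic heating (temp_k in kelvin).
    """
    joule = current * (ocv - v)
    entropic = current * temp_k * docv_dt
    return joule + entropic
```

During discharge the overpotential term is positive (OCV > V), while the entropic term can take either sign depending on dOCV/dT at the current state of charge.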

Keywords: computational fluid dynamics, curve fitting, lithium-ion battery, voltage drop

Procedia PDF Downloads 95
1405 Investment and Economic Growth: An Empirical Analysis for Tanzania

Authors: Manamba Epaphra

Abstract:

This paper analyzes the causal relationships between domestic private investment, public investment, foreign direct investment, and economic growth in Tanzania over the 1970-2014 period. A modified neo-classical growth model that includes control variables such as trade liberalization, life expectancy, and macroeconomic stability (proxied by inflation) is used to estimate the impact of investment on economic growth. In addition, the economic growth models of Phetsavong and Ichihashi (2012) and Le and Suruga (2005) are used to estimate the crowding-out effect of public investment on private domestic investment on the one hand and on foreign direct investment on the other. A correlation test is applied to check the correlation among the independent variables, and the results show very low correlation, suggesting that multicollinearity is not a serious problem. Moreover, the diagnostic tests, including the RESET regression specification error test, the Breusch-Godfrey serial correlation LM test, the Jarque-Bera normality test, and the White heteroskedasticity test, reveal that the model shows no signs of misspecification and that the residuals are serially uncorrelated, normally distributed, and homoskedastic. Generally, the empirical results show that domestic private investment plays an important role in economic growth in Tanzania. FDI also tends to affect growth positively, while control variables such as high population growth and inflation appear to harm economic growth. Results also reveal that trade openness and improvements in life expectancy tend to increase real GDP growth. Moreover, a negative, albeit weak, association between public and private investment suggests that the positive effect of domestic private investment on economic growth falls when the public investment-to-GDP ratio exceeds 8-10 percent. Thus, there is a great need for promoting domestic saving so as to encourage domestic investment for economic growth.
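One of the diagnostics mentioned, the Jarque-Bera normality test, is simple enough to sketch directly from its definition in terms of sample skewness S and kurtosis K, JB = (n/6)(S² + (K-3)²/4); the sketch below computes the statistic only (in practice it is compared against a chi-squared distribution with 2 degrees of freedom):

```python
import statistics

def jarque_bera(residuals):
    """Jarque-Bera normality statistic for regression residuals:
        JB = n/6 * (S^2 + (K - 3)^2 / 4)
    where S is the sample skewness (m3 / m2^1.5) and K the sample
    kurtosis (m4 / m2^2); JB near 0 is consistent with normality."""
    n = len(residuals)
    mean = statistics.fmean(residuals)
    m2 = sum((r - mean) ** 2 for r in residuals) / n
    m3 = sum((r - mean) ** 3 for r in residuals) / n
    m4 = sum((r - mean) ** 4 for r in residuals) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)
```

A symmetric two-point sample has zero skewness but kurtosis 1, so JB is driven entirely by the kurtosis term, which is what makes the statistic sensitive to both asymmetry and heavy or light tails.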

Keywords: FDI, public investment, domestic private investment, crowding out effect, economic growth

Procedia PDF Downloads 290
1404 Prevalence of Mycobacterium Tuberculosis Infection and Rifampicin Resistance among Presumptive Tuberculosis Cases Visiting Tuberculosis Clinic of Adare General Hospital, Southern Ethiopia

Authors: Degineh Belachew Andarge, Tariku Lambiyo Anticho, Getamesay Mulatu Jara, Musa Mohammed Ali

Abstract:

Introduction: Tuberculosis (TB) is a communicable chronic disease caused by Mycobacterium tuberculosis (MTB). About one-third of the world’s population is latently infected with MTB, and TB is among the top 10 causes of mortality throughout the globe from a single pathogen. Objective: The aim of this study was to determine the prevalence of tuberculosis and of rifampicin-resistant/multidrug-resistant Mycobacterium tuberculosis, and the associated factors, among presumptive tuberculosis cases attending the tuberculosis clinic of Adare General Hospital in Hawassa city. Methods: A hospital-based cross-sectional study was conducted among 321 tuberculosis-suspected patients from April to July 2018. Socio-demographic, environmental, and behavioral data were collected using a structured questionnaire. Sputum specimens were analyzed using GeneXpert. Data entry was made using Epi Info version 7, and analysis was done with SPSS version 20. Logistic regression models were used to determine the risk factors, with a p-value of less than 0.05 taken as the cut-off. Results: In this study, the prevalence of Mycobacterium tuberculosis was 98 (30.5%) with a 95% confidence interval of 25.5-35.8, and the prevalence of rifampicin-resistant/multidrug-resistant Mycobacterium tuberculosis among the 98 confirmed cases was 4 (4.1%). The prevalence of rifampicin-resistant/multidrug-resistant Mycobacterium tuberculosis among all tuberculosis-suspected patients was 1.24%. Participants with a history of treatment with anti-tuberculosis drugs were more likely to develop rifampicin-resistant/multidrug-resistant Mycobacterium tuberculosis. Conclusions: This study identified relatively high rifampicin resistance/multidrug resistance among tuberculosis-suspected patients in the study area. Early detection of drug-resistant Mycobacterium tuberculosis should be given enough attention to strengthen the management of tuberculosis cases, improve directly observed therapy short-course, and eventually minimize the spread of rifampicin-resistant tuberculosis strains in the community.
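The reported 95% confidence interval for prevalence (25.5-35.8% for 98 of 321) can be approximately reproduced with a simple Wald interval, p ± z·sqrt(p(1-p)/n); this is a sketch for illustration, as the abstract does not state which interval method the authors used:

```python
import math

def wald_ci(k, n, z=1.96):
    """Wald (normal-approximation) 95% CI for a prevalence of
    k cases out of n: p +/- z * sqrt(p * (1 - p) / n)."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half
```

For k = 98, n = 321 this gives roughly (0.255, 0.356), close to the reported 25.5-35.8; small discrepancies at the upper bound are expected if an exact or Wilson interval was used instead.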

Keywords: rifampicin resistance, mycobacterium tuberculosis, risk factors, prevalence of TB

Procedia PDF Downloads 111
1403 Cybersecurity for Digital Twins in the Built Environment: Research Landscape, Industry Attitudes and Future Direction

Authors: Kaznah Alshammari, Thomas Beach, Yacine Rezgui

Abstract:

Technological advances in the construction sector are helping to make smart cities a reality by means of cyber-physical systems (CPS). CPS integrate information and the physical world through the use of information communication technologies (ICT). An increasingly common goal in the built environment is to integrate building information models (BIM) with the Internet of Things (IoT) and sensor technologies using CPS. Future advances could see the adoption of digital twins, creating new opportunities for CPS using monitoring, simulation, and optimisation technologies. However, researchers often fail to fully consider the security implications. To date, it is not widely possible to assimilate BIM data and cybersecurity concepts, and security has therefore been overlooked thus far. This paper reviews the empirical literature concerning IoT applications in the built environment and discusses real-world applications of the IoT intended to enhance construction practices, improve people’s lives, and bolster cybersecurity. Specifically, this research addresses two questions: (a) how suitable are the current IoT and CPS security stacks to address the cybersecurity threats facing digital twins in the context of smart buildings and districts? and (b) what are the current obstacles to tackling cybersecurity threats to built environment CPS? To answer these questions, this paper reviews the current state-of-the-art research concerning digital twins in the built environment, the IoT, BIM, urban cities, and cybersecurity. The findings of this study confirm the importance of using digital twins in both IoT and BIM. Also, eight reference zones across Europe have gained special recognition for their contributions to the advancement of IoT science. Therefore, this paper evaluates the use of digital twins in CPS to arrive at recommendations for expanding BIM specifications to facilitate IoT compliance, bolster cybersecurity, and integrate digital twin and city standards in the smart cities of the future.

Keywords: BIM, cybersecurity, digital twins, IoT, urban cities

Procedia PDF Downloads 169
1402 Stoa: Urban Community-Building Social Experiment through Mixed Reality Game Environment

Authors: Radek Richtr, Petr Pauš

Abstract:

Social media nowadays connects people more tightly and intensively than ever, but at the same time a kind of social distance, incomprehension, and loss of social integrity appears. People can be strongly connected to a person on the other side of the world yet unaware of neighbours in the same district or street. Stoa is an application in the “serious games” genre: an augmented reality research experiment presented as a gaming environment. In the Stoa environment, players can plant and grow a virtual (organic) structure, a Pillar, that represents the whole suburb. Everybody has their own idea of what is an acceptable, admirable, or harmful visual intervention in the area they live in; the purpose of this research experiment is to find and/or define the residents’ shared subconscious spirit, the genius loci of the Pillar’s vicinity, where the residents live. The appearance and evolution of Stoa’s Pillars reflect the real world as perceived not only by their creator but also by other residents/players, who refine the environment with their actions. Squares, parks, patios, and streets get living avatar depictions; investors and urban planners obtain information on the occurrence and level of motivation for reshaping public space. As the project is in the product conceptual design phase, function is one of its most important factors. Function-based modelling makes the design problem modular and structured, decomposing it into sub-functions or function cells. The paper discusses the current conceptual model for the Stoa project, the use of different organic structure textures and models, the user interface design, a UX study, and the project’s development toward its final state.

Keywords: augmented reality, urban computing, interaction design, mixed reality, social engineering

Procedia PDF Downloads 228
1401 Impact of Leadership Styles on Work Motivation and Organizational Commitment among Faculty Members of Public Sector Universities in Punjab

Authors: Wajeeha Shahid

Abstract:

The study was designed to assess the impact of transformational and transactional leadership styles on work motivation and organizational commitment among faculty members of universities of Punjab. 713 faculty members were selected as the sample through a convenience sampling technique. Three self-constructed questionnaires, namely the Leadership Styles Questionnaire (LSQ), Work Motivation Questionnaire (WMQ) and Organizational Commitment Questionnaire (OCMQ), were used as research instruments. The major objectives of the study included assessing the effect and impact of transformational and transactional leadership styles on work motivation and organizational commitment. The theoretical framework of the study included Idealized Influence, Inspirational Motivation, Intellectual Stimulation, Individualized Consideration, Contingent Rewards and Management by Exception as independent variables and Extrinsic Motivation, Intrinsic Motivation, Affective Commitment, Continuance Commitment and Normative Commitment as dependent variables. SPSS Version 21 was used to analyze and tabulate the data. Cronbach's alpha reliability, Pearson correlation and multiple regression analysis were applied as statistical treatments. Results revealed that Idealized Influence correlated significantly with intrinsic motivation and affective commitment, whereas Contingent Rewards had a strong positive correlation with extrinsic motivation and affective commitment. Multiple regression models revealed a variance of 85% for transformational leadership style over work motivation and organizational commitment, whereas transactional style as a predictor manifested a variance of 79% for work motivation and 76% for organizational commitment. It was suggested that changing organizational cultures are demanding more from their leadership. All organizations need to consider transformational leadership style as an important part of their equipment in leveraging both soft and hard organizational targets.
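
The variance figures reported above are R-squared values from regression analysis. As a minimal sketch of where such a figure comes from, the single-predictor case can be computed in a few lines (the data below are made up for illustration; the study itself used SPSS with several predictors):

```python
def linreg_r2(xs, ys):
    """Fit y = a + b*x by ordinary least squares and return R-squared,
    the share of variance in y explained by the predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot
```

With several predictors, the same ratio of residual to total sum of squares gives the multiple R-squared that the abstract reports.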

Keywords: leadership styles, work motivation, organizational commitment, faculty member

Procedia PDF Downloads 308
1400 Improving Digital Data Security Awareness among Teacher Candidates with Digital Storytelling Technique

Authors: Veysel Çelik, Aynur Aker, Ebru Güç

Abstract:

Developments in information and communication technologies have increased both the speed of producing information and the speed of accessing new information. Accordingly, the daily lives of individuals have started to change, and new concepts such as e-mail, e-government, e-school and e-signature have emerged. For this reason, prospective teachers who will be future teachers or school administrators are expected to have a high awareness of digital data security. The aim of this study is to reveal the effect of the digital storytelling technique on the data security awareness of pre-service teachers of computer and instructional technology education departments. For this purpose, participants were selected on the principle of volunteering among third-year students studying at the Computer and Instructional Technologies Department of the Faculty of Education at Siirt University. The pretest/posttest quasi-experimental research model, one of the experimental research models, was used. In this framework, a 6-week lesson plan on digital data security awareness was prepared in accordance with the digital storytelling technique. Students in the experimental group formed groups of 3-6 people among themselves. The groups were asked to prepare short videos or animations on digital data security awareness. The completed videos were watched and evaluated together with the prospective teachers during the evaluation process, which lasted approximately 2 hours. Both quantitative and qualitative data collection tools were used: a digital data security awareness scale and a semi-structured interview form consisting of open-ended questions developed by the researchers. According to the data obtained, the digital storytelling technique was effective in creating data security awareness and permanent behavior changes in computer and instructional technology students.

Keywords: digital storytelling, self-regulation, digital data security, teacher candidates, self-efficacy

Procedia PDF Downloads 126
1399 NLRP3-Inflammasome Participates in the Inflammatory Response Induced by Paracoccidioides brasiliensis

Authors: Eduardo Kanagushiku Pereira, Frank Gregory Cavalcante da Silva, Barbara Soares Gonçalves, Ana Lúcia Bergamasco Galastri, Ronei Luciano Mamoni

Abstract:

The inflammatory response initiates after the recognition of pathogens by receptors expressed by innate immune cells. Among these receptors, NLRP3 has been associated with the recognition of pathogenic fungi in experimental models. NLRP3 operates by forming a multiprotein complex called the inflammasome, which activates caspase-1, responsible for the production of the inflammatory cytokines IL-1beta and IL-18. In this study, we aimed to investigate the involvement of NLRP3 in the inflammatory response elicited in macrophages against Paracoccidioides brasiliensis (Pb), the etiologic agent of PCM. Macrophages were differentiated from THP-1 cells by treatment with phorbol-myristate-acetate. Following differentiation, macrophages were stimulated with Pb yeast cells for 24 hours, after previous treatment with specific NLRP3 (3,4-methylenedioxy-beta-nitrostyrene) and/or caspase-1 (VX-765) inhibitors, or with specific inhibitors of pathways involved in NLRP3 activation, such as reactive oxygen species (ROS) production (N-acetyl-L-cysteine), K+ efflux (glibenclamide) or phagosome acidification (bafilomycin). Quantification of IL-1beta and IL-18 in supernatants was performed by ELISA. Our results showed that the production of IL-1beta and IL-18 by THP-1-derived macrophages stimulated with Pb yeast cells was dependent on NLRP3 and caspase-1 activation, since the presence of their specific inhibitors diminished the production of these cytokines. Furthermore, we found that the major pathways involved in NLRP3 activation after Pb recognition were dependent on ROS production and K+ efflux. In conclusion, our results showed that NLRP3 participates in the recognition of Pb yeast cells by macrophages, leading to the activation of the NLRP3-inflammasome and production of IL-1beta and IL-18. Together, these cytokines can induce an inflammatory response against P. brasiliensis, essential for the establishment of the initial inflammatory response and for the development of the subsequent acquired immune response.

Keywords: inflammation, IL-1beta, IL-18, NLRP3, Paracoccidioidomycosis

Procedia PDF Downloads 273
1398 Quantitative Evaluation of Supported Catalysts Key Properties from Electron Tomography Studies: Assessing Accuracy Using Material-Realistic 3D-Models

Authors: Ainouna Bouziane

Abstract:

The ability of Electron Tomography to recover the 3D structure of catalysts, with spatial resolution on the subnanometer scale, has been widely explored and reviewed in the last decades. A variety of experimental techniques, based either on Transmission Electron Microscopy (TEM) or Scanning Transmission Electron Microscopy (STEM), have been used to reveal different features of nanostructured catalysts in 3D, but High Angle Annular Dark Field imaging in STEM mode (HAADF-STEM) stands out as the most frequently used, given its chemical sensitivity and avoidance of imaging artifacts related to diffraction phenomena when dealing with crystalline materials. In this regard, our group has developed a methodology that combines image denoising by undecimated wavelet transforms (UWT) with automated, advanced segmentation procedures and parameter selection methods using CS-TVM (Compressed Sensing - Total Variation Minimization) algorithms to extract more reliable quantitative information from 3D characterization studies. However, evaluating the accuracy of the magnitudes estimated from the segmented volumes is also an important issue that has not yet been properly addressed, because a perfectly known reference is needed. The problem becomes particularly complicated in the case of multicomponent material systems. To tackle this key question, we have developed a methodology that incorporates volume reconstruction/segmentation methods. In particular, we have established an approach to evaluate, in quantitative terms, the accuracy of TVM reconstructions, which considers the influence of relevant experimental parameters such as the range of tilt angles, the image noise level and the object orientation. The approach is based on the analysis of material-realistic 3D phantoms, which include the most relevant features of the system under analysis.
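
The TVM idea referenced above trades data fidelity against total variation. It can be illustrated on a toy 1D signal with plain subgradient descent; this is only a conceptual sketch (the actual CS-TVM reconstructions operate on 3D tilt-series data with far more sophisticated solvers, and the parameter values here are arbitrary):

```python
def tv(x):
    """Total variation of a 1D signal: sum of absolute jumps."""
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

def tv_denoise_1d(y, lam=0.3, step=0.05, iters=800):
    """Minimize 0.5*||x - y||^2 + lam*TV(x) by subgradient descent.
    Flattens noise while (mostly) preserving large edges."""
    x = list(y)
    for _ in range(iters):
        g = [x[i] - y[i] for i in range(len(x))]   # data-fidelity gradient
        for i in range(len(x) - 1):
            d = x[i + 1] - x[i]
            s = (d > 0) - (d < 0)                  # sign of the jump
            g[i] -= lam * s
            g[i + 1] += lam * s
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x
```

The same penalty, applied to 3D gradients of a reconstructed volume under projection constraints, is what the compressed-sensing reconstruction minimizes.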

Keywords: electron tomography, supported catalysts, nanometrology, error assessment

Procedia PDF Downloads 87
1397 Graphical Theoretical Construction of Discrete time Share Price Paths from Matroid

Authors: Min Wang, Sergey Utev

Abstract:

The lessons from the 2007-09 global financial crisis have driven scientific research toward the design of new methodologies and financial models for the global market. The quantum mechanics approach has been introduced into unpredictable stock market modeling. One famous quantum tool is the Feynman path integral method, which was used to model insurance risk by Tamturk and Utev and adapted to formalize path-dependent option pricing by Hao and Utev. This research is based on the path-dependent calculation method, which is motivated by the Feynman path integral method. The path calculation can be studied in two ways: one is labeling, and the other is computational. Labeling is a part of the representation of objects, and generating functions can provide many different ways of representing share price paths. In this paper, recent work on the graph-theoretical construction of individual share price paths via matroids is presented. Firstly, the background on matroids and the relationship between lattice path matroids and Tutte polynomials is studied, and ways to connect points in a lattice path matroid with Tutte polynomials are suggested. Secondly, it is found that a general binary tree can be validly constructed from a connected lattice path matroid rather than from a general lattice path matroid. Lastly, a way to represent share price paths via a general binary tree is suggested, and an algorithm is developed to construct share price paths from general binary trees. A relationship is also provided between lattice integer points and the Tutte polynomial of a transversal matroid. Using this connection together with the algorithm, a share price path can be constructed from a given connected lattice path matroid.
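
The labeling view of path calculation can be made concrete: encoding each discrete-time share price path as a word over two step symbols is one such representation, and the count of paths recovers the binomial coefficient that a generating function would produce. A minimal illustrative sketch (not the matroid construction itself):

```python
from math import comb

def lattice_paths(m, n):
    """Enumerate all monotone lattice paths from (0, 0) to (m, n),
    encoded as words over 'E' (east) and 'N' (north) steps.
    Read as share price paths: 'E' = down-tick, 'N' = up-tick."""
    if m == 0:
        return ["N" * n]
    if n == 0:
        return ["E" * m]
    return ["E" + p for p in lattice_paths(m - 1, n)] + \
           ["N" + p for p in lattice_paths(m, n - 1)]
```

The recursion mirrors the binary branching that underlies the binary-tree construction of paths discussed in the paper.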

Keywords: combinatorial construction, graphical representation, matroid, path calculation, share price, Tutte polynomial

Procedia PDF Downloads 138
1396 Model-Based Approach as Support for Product Industrialization: Application to an Optical Sensor

Authors: Frederic Schenker, Jonathan J. Hendriks, Gianluca Nicchiotti

Abstract:

From a product industrialization perspective, the end product should always be at the peak of technological advancement and developed in the shortest time possible. Thus, the constant growth of complexity and a shorter time-to-market call for important changes on both the technical and the business level. Undeniably, the common understanding of the system is clouded by its complexity, which leads to a communication gap between the engineers and the sales department. This communication link is therefore important to maintain, and the information exchange between departments must increase to ensure a punctual and flawless delivery to the end customer. This evolution brings engineers to reason with more hindsight and to plan ahead. In this sense, they use new viewpoints to represent the data and to express the model deliverables in an understandable way, so that the different stakeholders may identify their needs and ideas. This article focuses on the usage of Model-Based Systems Engineering (MBSE) in a perspective of system industrialization, reconnecting engineering with the sales team. The modeling method used and presented in this paper concentrates on representing the needs of the customer as closely as possible: firstly, by providing a technical solution to the sales team to help them elaborate commercial offers without omitting technicalities; secondly, by simulating a vast number of possibilities across a wide range of components, so that the model becomes a dynamic tool for powerful analysis and optimization. Thus, the model is no longer only a technical tool for the engineers, but a way to maintain and solidify the communication between departments using different views of the model. The MBSE contribution to cost optimization during New Product Introduction (NPI) activities is made explicit through the illustration of a case study describing the support provided by system models to architectural choices during the industrialization of a novel optical sensor.
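
The idea of a model that simulates a vast number of possibilities across a wide range of components can be sketched as an exhaustive trade study. The component names and figures below are purely hypothetical; a real MBSE workflow would drive such an evaluation from the system model (e.g., SysML parametric diagrams) rather than hand-written dictionaries:

```python
from itertools import product

# Hypothetical component catalogues (illustrative values, not real parts)
sources = {"LED-A": {"cost": 2.0, "power_mW": 5},
           "LED-B": {"cost": 3.5, "power_mW": 12}}
detectors = {"PD-1": {"cost": 1.5, "sensitivity": 0.6},
             "PD-2": {"cost": 4.0, "sensitivity": 0.9}}

def score(src, det, budget=6.0):
    """Crude figure of merit for a source/detector pairing; None if over budget."""
    if src["cost"] + det["cost"] > budget:
        return None
    return src["power_mW"] * det["sensitivity"]

# Enumerate every architecture and keep the best feasible one
best = max(
    ((s, d, score(sources[s], detectors[d]))
     for s, d in product(sources, detectors)
     if score(sources[s], detectors[d]) is not None),
    key=lambda t: t[2],
)
```

The same exhaustive loop, fed with model-derived parameters, is what lets a sales engineer quote a feasible configuration without omitting technicalities.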

Keywords: analytical model, architecture comparison, MBSE, product industrialization, SysML, system thinking

Procedia PDF Downloads 161
1395 Evaluating Structural Crack Propagation Induced by Soundless Chemical Demolition Agent Using an Energy Release Rate Approach

Authors: Shyaka Eugene

Abstract:

The efficient and safe demolition of structures is a critical challenge in civil engineering and construction. This study focuses on the development of optimal demolition strategies by investigating the crack propagation behavior induced in beams by soundless chemical demolition agents. Such agents are commonly used in controlled demolition and have gained prominence due to their non-explosive and environmentally friendly nature. This research employs a comprehensive experimental and computational approach to analyze crack initiation, propagation, and eventual failure in beams subjected to soundless cracking agents. Experimental testing involves the application of various cracking agents under controlled conditions to understand their effects on the structural integrity of beams. High-resolution imaging and strain measurements are used to capture the crack propagation process. In parallel, numerical simulations are conducted using advanced finite element analysis (FEA) techniques to model crack propagation in beams, considering parameters such as cracking agent composition, loading conditions, and beam properties. The FEA models are validated against experimental results, ensuring their accuracy in predicting crack propagation patterns. The findings of this study provide valuable insights into optimizing demolition strategies, allowing engineers and demolition experts to make informed decisions regarding the selection of cracking agents, their application techniques, and structural reinforcement methods. Ultimately, this research contributes to enhancing the safety, efficiency, and sustainability of demolition practices in the construction industry, reducing environmental impact and ensuring the protection of adjacent structures and the surrounding environment.

Keywords: expansion pressure, energy release rate, soundless chemical demolition agent, crack propagation

Procedia PDF Downloads 63
1394 Utilization of Activated Carbon for the Extraction and Separation of Methylene Blue in the Presence of Acid Yellow 61 Using an Inclusion Polymer Membrane

Authors: Saâd Oukkass, Abderrahim Bouftou, Rachid Ouchn, L. Lebrun, Miloudi Hlaibi

Abstract:

We invariably exist in a world steeped in colors, whether in our clothing, food, cosmetics, or even medications. However, most of the dyes we use pose significant problems, being both harmful to the environment and resistant to degradation. Among these dyes, methylene blue and acid yellow 61 stand out, commonly used to dye various materials such as cotton, wood, and silk. Fortunately, various methods have been developed to treat and remove these polluting dyes, among which membrane processes play a prominent role. These methods are praised for their low energy consumption, ease of operation, and ability to achieve effective separation of components. Adsorption on activated carbon is also a widely employed technique, complementing membrane processes; it proves particularly effective in capturing and removing organic compounds from water due to its substantial specific surface area. In our study, we examined two crucial aspects. Firstly, we explored the possibility of selectively extracting methylene blue from a mixture containing another dye, acid yellow 61, using a polymer inclusion membrane (PIM) made of PVA. After characterizing the morphology and porosity of the membrane, we applied kinetic and thermodynamic models to determine the values of permeability (P), initial flux (J0), association constant (Kass), and apparent diffusion coefficient (D*). Subsequently, we measured activation parameters (activation energy (Ea), enthalpy (ΔH#ass), and entropy (ΔS#)). Finally, we studied the effect of activated carbon on the processes carried out through the membrane, demonstrating a clear improvement. These results make the membrane developed in this study a potentially pivotal player in the field of membrane separation.

Keywords: dyes, methylene blue, membrane, activated carbon

Procedia PDF Downloads 81
1393 Effect of Cooking Time, Seed-To-Water Ratio and Soaking Time on the Proximate Composition and Functional Properties of Tetracarpidium conophorum (Nigerian Walnut) Seeds

Authors: J. O. Idoko, C. N. Michael, T. O. Fasuan

Abstract:

This study investigated the effects of cooking time, seed-to-water ratio and soaking time on the proximate and functional properties of African walnut seed using the Box-Behnken design and Response Surface Methodology (BBD-RSM), with a view to increasing its utilization in the food industry. African walnut seeds were sorted, washed, soaked, cooked, dehulled, sliced, dried and milled. Proximate analysis and functional properties of the samples were evaluated using standard procedures. Data obtained were analyzed using descriptive and inferential statistics. Quadratic models were obtained to predict the proximate and functional qualities as a function of cooking time, seed-to-water ratio and soaking time. The results showed that crude protein ranged between 11.80% and 23.50%, moisture content between 1.00% and 4.66%, ash content between 3.35% and 5.25%, crude fibre from 0.10% to 7.25%, and carbohydrate from 1.22% to 29.35%. The functional properties showed that soluble protein ranged from 16.26% to 42.96%, viscosity from 23.43 mPa·s to 57 mPa·s, emulsifying capacity from 17.14% to 39.43%, and water absorption capacity from 232% to 297%. An increase in the volume of water used during cooking resulted in a loss of water-soluble protein through leaching; the length of soaking time and the moisture content of the dried product are inversely related; ash content is inversely related to the cooking time and amount of water used; extraction of fat is enhanced by an increase in soaking time; and increases in cooking and soaking times result in a decrease in fibre content. The results obtained indicated that African walnut could be used in several food formulations as a protein supplement and binder.
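
The quadratic prediction models produced by BBD-RSM are second-order polynomials in the process factors. As a stripped-down illustration with a single factor, a quadratic can be fitted exactly through three design points (a real Box-Behnken fit uses least squares over many runs and three factors; the numbers in the test below are made up):

```python
def fit_quadratic(p0, p1, p2):
    """Fit y = a + b*x + c*x**2 exactly through three (x, y) points
    via Newton divided differences, returning (a, b, c)."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    c1 = (y1 - y0) / (x1 - x0)                     # first divided difference
    c2 = ((y2 - y1) / (x2 - x1) - c1) / (x2 - x0)  # second divided difference
    a = y0 - c1 * x0 + c2 * x0 * x1                # expand Newton form to monomials
    b = c1 - c2 * (x0 + x1)
    return a, b, c2
```

With the coefficients in hand, predicted responses (e.g., crude protein at an untested cooking time) follow by evaluating the polynomial, which is exactly how the RSM models are used for prediction.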

Keywords: African walnut, functional properties, proximate analysis, response surface methodology

Procedia PDF Downloads 396
1392 Energy Consumption and Economic Growth Nexus: a Sustainability Understanding from the BRICS Economies

Authors: Smart E. Amanfo

Abstract:

Although the exact functional relationship between energy consumption and economic growth and development remains a complex social science question, there is sustained and growing agreement among energy economists and others on the direct or indirect role of energy use in the development process, and on energy as the sustenance of many of the socio-economic and environmental achievements of any economy. According to the OECD, the world economy will double by 2050, with two BRICS (Brazil, Russia, India, China and South Africa) members, China and India, leading the way. There is a global apprehension that if the countries constituting the epicenter of present and future economic growth follow the same trajectory as during and after the Industrial Revolution, involving higher energy throughputs, especially of fossil fuels, the already known and model-predicted threats of climate change and global warming could be exacerbated, especially in developing economies. The international community's challenge is how to address the trilemma of economic growth, social development and poverty eradication, and the stability of ecological systems. This paper aims to provide estimates of the relationships between economic growth, energy consumption, and carbon dioxide emissions using the BRICS members' panel data from 1980 to 2017. Preliminary results based on a fixed-effects econometric model show a positive and significant relationship between energy consumption and economic growth. The paper further identifies a strong relationship between economic growth and CO2 emissions, which suggests that the global agenda of low-carbon-led growth and development is not straightforwardly achievable. The study therefore highlights the need for BRICS member states to intensify low-emissions-based production and consumption policies and to increase renewables in order to avoid further deterioration of climate change impacts.
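
The fixed-effects model referenced above removes country-specific intercepts by demeaning within each entity before estimating the common slope. A minimal within-estimator sketch for a single regressor on synthetic panel data (illustrative only, not the study's dataset or full specification):

```python
def fixed_effects_slope(panel):
    """Within (fixed-effects) estimator for y_it = a_i + b*x_it + e_it.
    panel: dict mapping entity -> list of (x, y) observations.
    Demeans x and y within each entity, then pools the OLS slope."""
    num = den = 0.0
    for obs in panel.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            num += (x - mx) * (y - my)
            den += (x - mx) ** 2
    return num / den
```

Because the entity means absorb the country intercepts a_i, the slope b is identified purely from within-country variation over time, which is what makes the estimator robust to time-invariant country heterogeneity.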

Keywords: BRICS, sustainability, sustainable development, energy consumption, economic growth

Procedia PDF Downloads 94
1391 A Multi-Output Network with U-Net Enhanced Class Activation Map and Robust Classification Performance for Medical Imaging Analysis

Authors: Jaiden Xuan Schraut, Leon Liu, Yiqiao Yin

Abstract:

Computer vision in medical diagnosis has achieved a high level of success in diagnosing diseases with high accuracy. However, conventional classifiers that produce an image-to-label result provide insufficient information for medical professionals to judge, raising concerns over the trust and reliability of a model whose results cannot be explained. In order to gain local insight into cancerous regions, separate tasks such as image segmentation need to be implemented to aid doctors in treating patients, which doubles the training time and costs and renders the diagnosis system inefficient and difficult for the public to accept. To tackle this issue and drive AI-first medical solutions further, this paper proposes a multi-output network that follows a U-Net architecture for the image segmentation output and features an additional convolutional neural network (CNN) module for an auxiliary classification output. Class activation maps are a method of providing insight into the feature maps that lead to a convolutional neural network's classification; in the case of lung diseases, the region of interest is enhanced by U-Net-assisted Class Activation Map (CAM) visualization. Our proposed model therefore combines an image segmentation model and a classifier to crop the class activation map of a chest X-ray to the lung region only, providing a visualization that improves explainability while generating classification results simultaneously, which builds trust in AI-led diagnosis systems. The proposed U-Net model achieves 97.61% accuracy and a Dice coefficient of 0.97 on testing data from the COVID-QU-Ex dataset, which includes both diseased and healthy lungs.
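
The CAM step described above is, at its core, a weighted sum of the final convolutional feature maps, followed here by masking with the U-Net lung segmentation. A toy sketch on tiny hand-written arrays (real CAMs use learned classifier weights over hundreds of channels; the arrays below are illustrative):

```python
def class_activation_map(feature_maps, weights):
    """CAM: weighted sum over the final conv feature maps, using the
    classifier weights for the predicted class, then ReLU + normalize."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, wk in zip(feature_maps, weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += wk * fmap[i][j]
    flat = [max(v, 0.0) for row in cam for v in row]
    top = max(flat) or 1.0                          # avoid divide-by-zero
    return [[max(v, 0.0) / top for v in row] for row in cam]

def mask_cam(cam, lung_mask):
    """U-Net-assisted step: keep activations only inside the segmented lung."""
    return [[v * m for v, m in zip(crow, mrow)]
            for crow, mrow in zip(cam, lung_mask)]
```

Cropping the heatmap with the segmentation mask is what restricts the explanation to anatomically plausible regions, the key idea of the proposed multi-output design.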

Keywords: multi-output network model, U-net, class activation map, image classification, medical imaging analysis

Procedia PDF Downloads 202
1390 A Data-Driven Agent Based Model for the Italian Economy

Authors: Michele Catalano, Jacopo Di Domenico, Luca Riccetti, Andrea Teglio

Abstract:

We develop a data-driven agent-based model (ABM) for the Italian economy and calibrate its initial conditions and parameters. As a preliminary step, we replicate the Monte-Carlo simulation for the Austrian economy. Then, we evaluate the dynamic properties of the model: the long-run equilibrium and the allocative efficiency in terms of the disequilibrium patterns arising in the search and matching process for the final goods, capital, intermediate goods, and credit markets. In this perspective, we use a randomized initial condition approach and perform a robustness analysis, perturbing the system for different parameter setups. We explore the empirical properties of the model using a rolling-window forecast exercise from 2010 to 2022 to observe the model's forecasting ability in the wake of the COVID-19 pandemic. We analyze the properties of the model with different numbers of agents, that is, with different scales of the model compared to the real economy. The model generally displays transient dynamics that properly fit macroeconomic data in terms of forecasting ability. We stress the model with a large set of shocks, namely interest rate policy, fiscal policy, and exogenous factors such as external foreign demand for exports. In this way, we can identify the most exposed sectors of the economy. Finally, we modify the technology mix of the various sectors and, consequently, the underlying input-output sectoral interdependence, to stress the economy and observe the long-run projections. In this way, the model can capture the generation of endogenous crises due to the implied structural change, technological unemployment, and a potential lack of aggregate demand, creating the conditions for cyclical endogenous crises reproduced in this artificial economy.
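
The search-and-matching disequilibrium mechanism at the heart of such ABMs can be sketched in a few lines: buyers sample a limited number of sellers and trade with the cheapest affordable one, so markets need not clear. A deliberately minimal toy round (not the paper's calibrated model; names and numbers are illustrative):

```python
import random

def match_market(demands, supplies, rng):
    """One round of decentralized search and matching: each buyer visits a
    random subset of sellers and buys from the cheapest one with stock left
    whose price fits the buyer's budget. Returns the number of sales."""
    sales = 0
    sellers = list(supplies)
    for budget in demands:
        rng.shuffle(sellers)
        visited = sellers[:2]                       # limited search: 2 quotes
        visited.sort(key=lambda s: supplies[s]["price"])
        for s in visited:
            if supplies[s]["stock"] > 0 and supplies[s]["price"] <= budget:
                supplies[s]["stock"] -= 1
                sales += 1
                break
    return sales
```

Because some buyers fail to find an affordable seller while some stock goes unsold, the round leaves exactly the kind of disequilibrium pattern whose dynamics the full ABM tracks across goods, capital, and credit markets.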

Keywords: agent-based models, behavioral macro, macroeconomic forecasting, micro data

Procedia PDF Downloads 69
1389 Destination Decision Model for Cruising Taxis Based on Embedding Model

Authors: Kazuki Kamada, Haruka Yamashita

Abstract:

In Japan, taxis are a popular form of transportation, and the taxi industry is a big business. In recent years, however, the industry has faced the difficult problem of a declining number of taxi drivers. In the taxi business, mainly three passenger-catching methods are applied. The first style is "cruising", in which the driver catches passengers while driving on a road. The second is "waiting", in which the driver waits for passengers near places with high demand for taxis, such as the entrances of hospitals and train stations. The third is "dispatching", in which the taxi is allocated based on a request to the taxi company. Above all, cruising taxi drivers need experience and intuition for finding passengers, and it is difficult to decide the destination for cruising. A strong recommendation system for cruising taxis would support new drivers in finding passengers, and it could be a solution to the decreasing number of drivers in the taxi industry. In this research, we propose a method of recommending a destination to cruising taxi drivers. As a machine learning technique, embedding models, which embed high-dimensional data into a low-dimensional space, are widely used in data analysis to represent the relationships in meaning between data points clearly. Taxi drivers have their favorite courses based on their experiences, and the courses differ for each driver. We assume that the course of a cruising taxi has a meaning, such as a course for finding businessman passengers (going around the business area of the city or to main stations) or a course for finding traveler passengers (going around sightseeing places or big hotels), and we extract the meaning of their destinations. We analyze the cruising history data of taxis based on an embedding model and propose a recommendation system for finding passengers. Finally, we demonstrate the recommendation of destinations for cruising taxi drivers through a real-world data analysis using the proposed method.
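
While the paper learns an embedding from cruising histories, the recommendation step itself can be illustrated with a simpler stand-in: represent each driver's course as an area-visit vector and recommend the favourite area of the most similar experienced driver. All driver names and counts below are hypothetical:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu, nv = sqrt(sum(a * a for a in u)), sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(new_vec, drivers):
    """Nearest-neighbour recommendation: find the experienced driver whose
    visit profile is closest to the new driver's, and suggest that driver's
    most-visited area (its index in the vector)."""
    best_name = max(drivers, key=lambda d: cosine(new_vec, drivers[d]))
    vec = drivers[best_name]
    return best_name, vec.index(max(vec))
```

In the actual system, the vectors would be low-dimensional embeddings of courses rather than raw visit counts, but the similarity-based lookup works the same way.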

Keywords: taxi industry, decision making, recommendation system, embedding model

Procedia PDF Downloads 138
1388 Investigations on the Influence of Web Openings on the Load Bearing Behavior of Steel Beams

Authors: Felix Eyben, Simon Schaffrath, Markus Feldmann

Abstract:

A building should maximize its potential for use through its design; therefore, flexible use is always important when designing a steel structure. To create flexibility, steel beams with web openings are increasingly used, because they offer the advantage that cables, pipes and other technical equipment can easily be routed through without detours, allowing for more space-saving and aesthetically pleasing construction. This can also significantly reduce the height of ceiling systems. Until now, beams with web openings have not been explicitly considered in the European standards. However, this is to be addressed with the new EN 1993-1-13, in which design rules for different opening forms are defined. In order to further develop the design concepts, beams with web openings under bending are therefore investigated in terms of damage mechanics as part of a German national research project aiming to optimize the verifications for steel structures based on a wider database and a validated damage prediction. For this purpose, fundamental factors influencing the load-bearing behavior of girders with web openings under bending load were first investigated numerically, without taking material damage into account. Various parameter studies were carried out; the factors under study included the opening shape, size and position as well as structural aspects such as the span length, the arrangement of stiffeners and the loading situation. The load-bearing behavior is evaluated using the resulting load-deformation curves. These results are compared with the design rules and critically analyzed. Experimental tests are also planned based on these results. Moreover, the implementation of damage mechanics in the form of the modified Bai-Wierzbicki model was examined. After the experimental tests have been carried out, the numerical models will be validated and further influencing factors will be investigated on the basis of parametric studies.

Keywords: damage mechanics, finite element, steel structures, web openings

Procedia PDF Downloads 173
1387 3D Modeling Approach for Cultural Heritage Structures: The Case of Virgin of Loreto Chapel in Cusco, Peru

Authors: Rony Reátegui, Cesar Chácara, Benjamin Castañeda, Rafael Aguilar

Abstract:

Nowadays, heritage building information modeling (HBIM) is considered an efficient tool to represent and manage information on cultural heritage (CH). The basis of this tool is a 3D model generally obtained from a cloud-to-BIM procedure. There are different methods to create an HBIM model, ranging from manual modeling based on the point cloud to the automatic detection of shapes and the creation of objects. The selection among these methods depends on the desired level of development (LOD), level of information (LOI), and grade of generation (GOG), as well as on the availability of commercial software. This paper presents the 3D modeling of a stone masonry chapel using Recap Pro, Revit, and the Dynamo interface, following a three-step methodology. The first step consists of the manual modeling of simple structural elements (e.g., regular walls, columns, floors, wall openings) and architectural elements (e.g., cornices, moldings, and other minor details) using the point cloud as reference. Then, Dynamo is used for the generative modeling of complex structural elements such as vaults, infills, and domes. Finally, semantic information (e.g., materials, typology, state of conservation) and pathologies are added to the HBIM model as text parameters and generic model families, respectively. The application of this methodology allows the documentation of CH following a relatively simple process that ensures adequate LOD, LOI, and GOG levels. In addition, the easy implementation of the method, together with the fact that only one BIM software package (with its respective plugin) is used for the scan-to-BIM modeling process, means that this methodology can be adopted by a larger number of users with intermediate knowledge and limited resources, since the BIM software used has a free student license.
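
Generative modeling of a vault of the kind described above boils down to producing a parametric grid of points that the geometry nodes then surface. A language-neutral sketch of such a point grid for a semi-elliptical barrel profile (the parameter names and values are illustrative, not the chapel's actual dimensions or the authors' Dynamo script):

```python
from math import cos, sin, pi

def barrel_vault_points(span, rise, length, n_arc=9, n_len=5):
    """Generate an (n_len x n_arc) grid of 3D points on a semi-elliptical
    barrel vault: arch profile swept along the y (length) axis."""
    pts = []
    for j in range(n_len):
        y = length * j / (n_len - 1)
        for i in range(n_arc):
            t = pi * i / (n_arc - 1)        # sweep the 180-degree arch
            x = (span / 2) * cos(t)         # half-span to each springing
            z = rise * sin(t)               # crown height at t = pi/2
            pts.append((x, y, z))
    return pts
```

In Dynamo, an equivalent point list would feed a lofted-surface node; fitting span and rise to the point cloud is what makes the generated vault match the surveyed geometry.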

Keywords: cloud-to-BIM, cultural heritage, generative modeling, HBIM, parametric modeling, Revit

Procedia PDF Downloads 142