Search results for: GAN architecture for 2D animated cartoonizing neural style
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4056

216 Architectural Identity in Manifestation of Tall-buildings' Design

Authors: Huda Arshadlamphon

Abstract:

The frontiers of technology and industry are advancing rapidly, influenced by economic and political phenomena. One vital phenomenon, which has consolidated the world into a single village, is globalization. In response, architecture and the built environment have faced numerous changes, adjustments, and developments. Tall buildings, as a product of globalization, represent prestigious icons, symbols, and landmarks for economically advanced countries. Nevertheless, this trend has encountered several design challenges in incorporating the architectural identity, traditions, and characteristics that enhance the built environment's sociocultural values and traditions. These values and traditions foster self-reliance, leading to visual and spatial creativity, independence, and individuality; in other words, they maintain the inherited identity and avoid replication in all means and aspects. This paper, firstly, defines the globalization phenomenon, architectural identity, and the concerns of sociocultural values in relation to the traditional characteristics of the built environment. Secondly, through three case studies of tall buildings located in Jeddah, Saudi Arabia (the Queen's Building, the National Commercial Bank (NCB) Building, and the Islamic Development Bank Building), design strategies and methodologies for acclimating architectural identity and characteristics in tall buildings are discussed. The case studies highlight the buildings' sites and surroundings; concepts and inspirations; design elements; architectural forms and compositions; characteristics; issues, barriers, and obstacles facing the design decisions; representation of facades; and selection of materials and colors. Furthermore, the research briefly elucidates the dominant factors that shape the architectural identity of Jeddah. 
In conclusion, the study proposes a guideline of four design standards for preserving and developing architectural identity in tall buildings in Jeddah: the scale of the urban and natural environment, the scale of architectural design elements, the integration of visual images, and the creation of spatial scenes and scenarios. The proposed guideline will encourage the development of an architectural identity aligned with the demands of the zeitgeist, support the contemporary architectural movement toward tall buildings, and reinforce self-reliance in representing the sociocultural values and traditions of the built environment.

Keywords: architectural identity, built-environment, globalization, sociocultural values and traditions, tall-buildings

Procedia PDF Downloads 139
215 Self-Esteem on University Students by Gender and Branch of Study

Authors: Antonio Casero Martínez, María de Lluch Rayo Llinas

Abstract:

This work is part of an investigation into the relationship between romantic love and self-esteem in college students, performed by the students of the course "Methods and Techniques of Social Research" of the Master in Gender at the University of the Balearic Islands during 2014-2015. In particular, we investigated the relationships that may exist between self-esteem, gender, and field of study. Gender differences in self-esteem are well documented, as is the relationship between gender and branch of study observed annually in the distribution of university enrolment. Therefore, in this part of the study, we focused on the differences in self-esteem between the sexes across the various branches of study. The study sample consists of 726 individuals (304 men and 422 women) from 30 undergraduate degrees that the University of the Balearic Islands offered on its campus in the 2014-2015 academic year. The average age was 21.9 years for men and 21.7 years for women. The sampling procedure was random sampling stratified by degree with simple allocation, giving a sampling error of 3.6% for the whole sample at a confidence level of 95% under the most unfavorable assumption (p = q). The Spanish translation of the Rosenberg Self-Esteem Scale (RSE) by Atienza, Moreno, and Balaguer was applied. The psychometric properties of the translation reach a test-retest reliability of 0.80 and an internal consistency between 0.76 and 0.87; in this study, we obtained an internal consistency of 0.82. The results confirm the expected gender differences in self-esteem, although not in all branches of study. Mean levels of self-esteem in women are lower in all branches of study, reaching statistical significance in the fields of Science, Social Sciences and Law, and Engineering and Architecture. 
However, when the variability of self-esteem across branches of study is analysed within each gender, the results show independence in the case of men, whereas in the case of women we find statistically significant differences, arising from the lower self-esteem of Arts and Humanities students compared with Social and Legal Sciences students. These findings confirm the results of numerous investigations in which female self-esteem levels always appear below male levels, suggesting that perhaps we should consider the two populations separately rather than continually emphasize the difference. The branch of study, for its part, has not appeared as an explanatory factor of relevance, beyond the largest absolute gender difference being detected in the technical branch, the one in which women are historically a minority; therefore, it is not disciplinary or academic characteristics that explain the differences, but the differentiated social context that occurs within it.
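The reported 3.6% sampling error is consistent with the standard maximum-variance formula for a proportion at a 95% confidence level; a quick check (n = 726 and p = q = 0.5 come from the abstract, z = 1.96 is the usual 95% z-value):

```python
import math

# Maximum-variance sampling error for a proportion: e = z * sqrt(p*q / n)
n = 726          # sample size reported in the abstract
z = 1.96         # z-value for a 95% confidence level
p = q = 0.5      # most unfavorable assumption (p = q)

e = z * math.sqrt(p * q / n)
print(round(100 * e, 1))  # prints 3.6 (sampling error as a percentage)
```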

Keywords: study branch, gender, self-esteem, applied psychology

Procedia PDF Downloads 441
214 New Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics, as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood, and Polya were the first significant compilation in this field. That work presented fundamental ideas, results, and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated using operators; in 1989, weighted Hardy inequalities were obtained for integration operators. Weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved via differential operators. The Hardy inequality has been one of the tools used in the study of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have then been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some inequalities have appeared involving the Copson and Hardy inequalities on time scales, yielding new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers. Dynamic inequalities on time scales have received a lot of attention in the literature and have become a major field in pure and applied mathematics. 
There are many applications of dynamic equations on time scales in quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on the Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special cases of time-scale Hardy and Copson inequalities in higher dimensions. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that can be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proofs are carried out by introducing restrictions on the operator in several cases. Concepts from time-scale calculus are used that allow many problems from the theories of differential and difference equations to be unified and extended. In addition, the chain rule, some properties of multiple integrals on time scales, Fubini-type theorems, and the Hölder inequality are used.
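The classical one-dimensional Hardy inequality that such higher-dimensional time-scale versions build on can be stated as follows (for p > 1 and nonnegative measurable f):

```latex
\int_0^{\infty} \left( \frac{1}{x} \int_0^x f(t)\, dt \right)^{p} dx
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^{\infty} f^{p}(x)\, dx ,
\qquad p > 1,\; f \ge 0,
```

where the constant (p/(p-1))^p is sharp; the dynamic versions discussed here replace the integrals with delta integrals over an arbitrary nonempty closed subset of the reals.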

Keywords: time scales, Hardy inequality, Copson inequality, Steklov operator

Procedia PDF Downloads 60
213 CRISPR-Mediated Genome Editing for Yield Enhancement in Tomato

Authors: Aswini M. S.

Abstract:

Tomato (Solanum lycopersicum L.) is one of the most economically significant vegetable crops. Both fresh and processed tomatoes are consumed. Tomatoes have a limited genetic base, which makes breeding extremely challenging. Plant breeding has become much simpler and more effective with the genome editing tools of CRISPR and the CRISPR-associated protein 9 (CRISPR/Cas9), which address the problems with traditional breeding, chemical/physical mutagenesis, and transgenics. With the use of CRISPR/Cas9, a number of tomato traits have been functionally characterized and edited. These traits include plant architecture and flower characters (leaf, flower, male sterility, and parthenocarpy), fruit ripening, quality and nutrition (lycopene, carotenoid, GABA, TSS, and shelf-life), disease resistance (late blight, TYLCV, and powdery mildew), tolerance to abiotic stress (heat, drought, and salinity), and resistance to herbicides. This study explores the potential of CRISPR/Cas9 genome editing for enhancing yield in tomato plants. The study utilized CRISPR/Cas9 genome editing technology to functionally edit various traits in tomatoes. The de novo domestication of elite features from wild relatives into cultivated tomatoes, and vice versa, has been demonstrated through CRISPR/Cas9-mediated introgression. Cas9-mediated editing of the CycB (lycopene beta-cyclase) gene increased the lycopene content in tomato. Also, Cas9-mediated editing of the AGL6 (Agamous-like 6) gene resulted in parthenocarpic fruit development under heat-stress conditions. The advent of CRISPR/Cas has made it possible to use digital resources for single guide RNA design and multiplexing, cloning (such as Golden Gate cloning, GoldenBraid, etc.), creating robust CRISPR/Cas constructs, and implementing effective transformation protocols such as Agrobacterium-mediated transformation and the DNA-free protoplast method for Cas9-gRNA ribonucleoprotein (RNP) complexes. 
Additionally, homologous recombination (HR)-based gene knock-in via geminivirus replicons and base/prime editing (Target-AID technology) remain possible. Hence, CRISPR/Cas facilitates fast and efficient breeding in the improvement of tomatoes.

Keywords: CRISPR-Cas, biotic and abiotic stress, flower and fruit traits, genome editing, polygenic trait, tomato, trait introgression

Procedia PDF Downloads 47
212 Multi-Agent Searching Adaptation Using Levy Flight and Inferential Reasoning

Authors: Sagir M. Yusuf, Chris Baber

Abstract:

In this paper, we describe how to achieve knowledge understanding and prediction (Situation Awareness, SA) for multiple agents conducting a searching activity, using Bayesian inferential reasoning and learning. A Bayesian belief network was used to monitor the agents' knowledge about their environment, and cases are recorded for network training using the expectation-maximisation or gradient descent algorithm. The trained network is then used for decision making and environmental situation prediction. Forest-fire searching by multiple UAVs was the use case: UAVs are tasked to explore a forest and find a fire so that fire wardens can take urgent action. The paper focuses on two problems: (i) an effective path-planning strategy for the agents and (ii) knowledge understanding and prediction (SA). The path-planning approach, inspired by animal foraging behaviour and using the Lévy distribution augmented with Bayesian reasoning, is fully described in this paper. Results show that the Lévy flight strategy performs better than previous fixed-pattern approaches (e.g., parallel sweeps) in terms of energy and time utilisation. We also introduce a waypoint assessment strategy called k-previous waypoints assessment. It improves the performance of the ordinary Lévy flight by saving the agents' resources and mission time through redundant-search avoidance. The agents (UAVs) report their mission knowledge to a central server for interpretation and prediction purposes. Bayesian reasoning and learning were used for the SA, and the results show effectiveness in different environment scenarios in terms of prediction and effective knowledge representation. The prediction accuracy was measured using learning error rate, logarithmic loss, and the Brier score, and the results show that a small number of agent missions can be used for prediction within the same or a different environment. Finally, we describe a situation-based knowledge visualization and prediction technique for heterogeneous multi-UAV missions. 
While this paper demonstrates the linkage of Bayesian reasoning and learning with SA and an effective searching strategy, future work will focus on simplifying the architecture.
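The Lévy-flight waypoint generation with a k-previous waypoints check can be sketched as follows. This is a minimal, hypothetical illustration rather than the authors' implementation: the step lengths come from Mantegna's algorithm for Lévy-stable variates, and the parameter names (alpha, k, min_sep) are assumptions.

```python
import math
import random

def levy_step(alpha=1.5):
    """Draw a heavy-tailed step length via Mantegna's algorithm."""
    num = math.gamma(1 + alpha) * math.sin(math.pi * alpha / 2)
    den = math.gamma((1 + alpha) / 2) * alpha * 2 ** ((alpha - 1) / 2)
    sigma_u = (num / den) ** (1 / alpha)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return abs(u) / abs(v) ** (1 / alpha)

def next_waypoint(pos, history, k=5, min_sep=1.0, alpha=1.5):
    """Propose a Levy-flight waypoint, rejecting candidates that fall
    within min_sep of any of the last k visited waypoints (a simple
    reading of the 'k-previous waypoints assessment')."""
    while True:
        angle = random.uniform(0, 2 * math.pi)
        step = levy_step(alpha)
        cand = (pos[0] + step * math.cos(angle),
                pos[1] + step * math.sin(angle))
        if all(math.dist(cand, w) >= min_sep for w in history[-k:]):
            return cand
```

The rejection step is what avoids redundant search: heavy-tailed step lengths give occasional long relocations, while the history check stops the agent from re-visiting ground it just covered.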

Keywords: Levy flight, distributed constraint optimization problem, multi-agent system, multi-robot coordination, autonomous system, swarm intelligence

Procedia PDF Downloads 117
211 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other; this is also referred to as between-class imbalance. Traditional classifiers fail to classify minority-class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where classes are composed of different numbers of sub-clusters containing different numbers of examples, also deteriorates the performance of the classifier. Many methods have previously been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic methods, cost-based methods, and classifier ensembles. Data preprocessing techniques have shown great potential, as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority-class examples or by decreasing the majority-class examples. Decreasing the majority-class examples leads to loss of information, and when the minority class is absolutely rare, removing majority-class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class and within-class imbalance simultaneously. In this paper, we propose a method that handles both between-class and within-class imbalance simultaneously for binary classification problems. Removing both imbalances simultaneously eliminates the classifier's bias towards bigger sub-clusters by minimizing the domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the sub-clusters or sub-concepts present in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters. 
The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Löwner-John ellipsoid to increase the accuracy of the classifier. In this study, a neural network is used as the classifier, since it is one in which the total error is minimized, and removing between-class and within-class imbalance simultaneously helps the classifier give equal weight to all sub-clusters irrespective of class. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority-class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus, the proposed method can serve as a good alternative for handling various problem domains, such as credit scoring, customer churn prediction, and financial distress, that typically involve imbalanced data sets.
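The core idea of oversampling per sub-cluster can be illustrated with a simplified, SMOTE-style sketch. This is not the authors' exact method (which uses model-based clustering, sub-cluster complexity, and the Löwner-John ellipsoid); here sub-clusters are assumed to be given, and smaller sub-clusters simply receive proportionally more synthetic points, which is one way to attack within-class imbalance.

```python
import random

def oversample_subclusters(subclusters, target_total):
    """Generate synthetic minority examples by interpolating between
    random pairs within each sub-cluster, allocating more synthetic
    points to smaller sub-clusters (inverse-size weighting)."""
    n_needed = target_total - sum(len(c) for c in subclusters)
    if n_needed <= 0:
        return []
    weights = [1 / len(c) for c in subclusters]  # smaller cluster -> bigger weight
    total_w = sum(weights)
    synthetic = []
    for c, w in zip(subclusters, weights):
        n_c = round(n_needed * w / total_w)  # rounding may shift the total slightly
        for _ in range(n_c):
            a, b = random.choice(c), random.choice(c)
            lam = random.random()
            # convex combination stays inside the sub-cluster's hull
            synthetic.append(tuple(ai + lam * (bi - ai) for ai, bi in zip(a, b)))
    return synthetic
```

Because each synthetic point is a convex combination of two points from the same sub-cluster, the new examples never bridge distinct sub-concepts, which is the failure mode that plain SMOTE can exhibit.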

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 391
210 Evotrader: Bitcoin Trading Using Evolutionary Algorithms on Technical Analysis and Social Sentiment Data

Authors: Martin Pellon Consunji

Abstract:

Due to the rise in popularity of Bitcoin and other crypto assets as a store of wealth and as speculative investments, there is an ever-growing demand for automated trading tools, such as bots, to gain an advantage over the market. Traditionally, trading in the stock market was done by professionals with years of training who understood patterns and exploited market opportunities in order to make a profit. Nowadays, however, a larger portion of market participants are at minimum aided by market-data-processing bots, which can generally generate more stable signals than the average human trader. The rise in trading bot usage can be credited to the inherent advantages that bots have over humans: processing large amounts of data, the absence of emotions such as fear or greed, and the ability to predict market prices using past data and artificial intelligence. Hence, a growing number of approaches have been brought forward to tackle this task. However, the general limitation of these approaches comes down to the fact that limited historical data does not always determine the future, and that many market participants are still emotion-driven human traders. Moreover, developing markets such as the cryptocurrency space have even less historical data to interpret than most well-established markets. Because of this, some human traders have gone back to tried-and-tested traditional technical analysis tools for exploiting market patterns and simplifying the broader spectrum of data involved in making market predictions. This paper proposes a method that uses neuroevolution techniques on both sentiment data and the more traditionally human-consumed technical analysis data in order to obtain a more accurate forecast of future market behavior and to account for the way both automated bots and human traders affect the market prices of Bitcoin and other cryptocurrencies. 
This study's approach uses evolutionary algorithms to automatically develop increasingly improved populations of bots which, using the latest inflows of market analysis and sentiment data, evolve to efficiently predict future market price movements. The effectiveness of the approach is validated by testing the system in a simulated historical trading scenario and in a live Bitcoin market trading scenario, and by testing its robustness in other cryptocurrency and stock market scenarios. Experimental results over a 30-day period show that this method outperformed the buy-and-hold strategy by over 260% in terms of net profits, even when taking standard trading fees into consideration.
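The evolutionary loop can be sketched as follows. This is a minimal, hypothetical illustration (the fitness function, truncation selection, and Gaussian mutation scheme are assumptions, not the paper's implementation): each individual is a weight vector combining per-step technical and sentiment signals into a long/flat decision, and fitness is the simulated profit.

```python
import random

def backtest(weights, signals, prices):
    """Fitness: go long for one step whenever the weighted sum of the
    per-step signals is positive, and accumulate the resulting returns."""
    profit = 0.0
    for t in range(len(prices) - 1):
        score = sum(w * s for w, s in zip(weights, signals[t]))
        if score > 0:
            profit += prices[t + 1] - prices[t]
    return profit

def evolve(signals, prices, pop_size=30, generations=40, n_signals=3):
    """Evolve signal-weight vectors by truncation selection plus
    Gaussian mutation; returns the best individual found."""
    pop = [[random.uniform(-1, 1) for _ in range(n_signals)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: backtest(w, signals, prices), reverse=True)
        elite = pop[: pop_size // 2]  # keep the better half
        pop = elite + [[g + random.gauss(0, 0.1) for g in random.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=lambda w: backtest(w, signals, prices))
```

A production system would add crossover, walk-forward validation, and trading fees to the fitness function; the skeleton above only shows how a population of signal combiners can improve against a backtest.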

Keywords: neuro-evolution, Bitcoin, trading bots, artificial neural networks, technical analysis, evolutionary algorithms

Procedia PDF Downloads 94
209 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are of interest to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To contribute to this search, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net was trained to segment SEM images of graphene oxide. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of crystal area and perimeter. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, on the test set. 
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since it holds under significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-accuracy measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared; this will minimize crystal overlap during SEM image acquisition and guarantee a lower measurement error without greater data-handling effort. All in all, the method developed is a time saver with high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
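The post-segmentation measurement step can be illustrated with a minimal sketch: given a binary segmentation mask, label connected regions and report each region's area and an approximate perimeter. This is a simplified stand-in for the paper's object delimitation algorithm (which also extracts positions and lateral measures).

```python
def label_and_measure(mask):
    """Label connected crystal regions in a binary mask (list of lists
    of 0/1, 4-connectivity flood fill) and report each region's area
    and approximate perimeter (count of boundary pixels)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    crystals = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                stack, area, perim = [(i, j)], 0, 0
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    boundary = False
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                            boundary = True  # touches background or image edge
                        elif not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                    if boundary:
                        perim += 1
                crystals.append({"area": area, "perimeter": perim})
    return crystals
```

Note that, like the paper's method, this kind of labeling cannot separate overlapping crystals: two touching regions are counted as one, which is exactly the limitation the abstract describes.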

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 137
208 Detection of Alzheimer's Protein on Nano Designed Polymer Surfaces in Water and Artificial Saliva

Authors: Sevde Altuntas, Fatih Buyukserin

Abstract:

Alzheimer's disease causes irreversible neural damage to parts of the brain. One of the disease markers is the Amyloid-β 1-42 protein, which accumulates in the brain in the form of plaques. The basic problem in detecting the protein is that the low amounts present cannot be detected properly in body fluids such as blood, saliva, or urine. To solve this problem, tests like ELISA or PCR have been proposed, which are expensive, require specialized personnel, and can involve complex protocols. Surface-enhanced Raman spectroscopy (SERS) is therefore a good candidate for the detection of the Amyloid-β 1-42 protein, because this spectroscopic technique can potentially allow even single-molecule detection from liquid and solid surfaces. Furthermore, the SERS signal can be improved by using nanopatterned surfaces and is also molecule-specific. In this context, our study proposes to fabricate diagnostic test models that utilize Au-coated nanopatterned polycarbonate (PC) surfaces modified with Thioflavin-T to detect low concentrations of the Amyloid-β 1-42 protein in water and artificial saliva media by enhancement of the protein SERS signal. The nanopatterned PC surface used to enhance the SERS signal was fabricated using anodic alumina membranes (AAMs) as a template. It is possible to produce AAMs with different column structures and varying thicknesses depending on the voltage and anodization time. After fabrication, the pore diameter of the AAMs can be adjusted by treatment with a dilute acid solution. In this study, two different column structures were prepared. After a surface modification to decrease their surface energy, the AAMs were treated with a PC solution. Following solvent evaporation, nanopatterned PC films with tunable pillared structures were peeled off the membrane surface. The PC film was then modified with Au and Thioflavin-T for the detection of the Amyloid-β 1-42 protein. The protein detection studies were first conducted in water using this biosensor platform. 
The same measurements were conducted in artificial saliva to detect the presence of the Amyloid-β 1-42 protein. SEM, SERS, and contact angle measurements were carried out for the characterization of the different surfaces and further demonstration of protein attachment. SERS enhancement factor calculations were also completed from the experimental results. As a result, our research group fabricated diagnostic test models that utilize Au-coated nanopatterned polycarbonate (PC) surfaces modified with Thioflavin-T to detect low concentrations of Alzheimer's Amyloid-β protein in water and artificial saliva media. This work was supported by The Scientific and Technological Research Council of Turkey (TUBITAK), Grant No: 214Z167.
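For context, a commonly used definition of the SERS enhancement factor (EF) relates the SERS intensity per probed molecule to the normal Raman intensity per molecule; the abstract does not state which convention was used, so this is given only as the standard form:

```latex
\mathrm{EF} \;=\; \frac{I_{\mathrm{SERS}} / N_{\mathrm{SERS}}}{I_{\mathrm{ref}} / N_{\mathrm{ref}}}
```

where I_SERS and I_ref are the measured SERS and reference Raman intensities, and N_SERS and N_ref are the numbers of molecules contributing to each signal.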

Keywords: alzheimer, anodic aluminum oxide, nanotopography, surface enhanced Raman spectroscopy

Procedia PDF Downloads 267
207 Agricultural Knowledge Management System Design, Use, and Consequence for Knowledge Sharing and Integration

Authors: Dejen Alemu, Murray E. Jennex, Temtim Assefa

Abstract:

This paper investigates the design, use, and consequences of a Knowledge Management System (KMS) for knowledge sharing and integration. A KMS for knowledge sharing and integration is designed to meet the challenges raised by knowledge management researchers and practitioners: technical, human, and social factors. An agricultural KMS involves various members coming from different Communities of Practice (CoPs), who possess their own knowledge of multiple practices that needs to be combined in the system development. However, current technology development has ignored the indigenous knowledge of local communities, which is the key success factor for agriculture. This research employed the multi-methodological approach to KMS research from an action research perspective, which consists of four strategies: theory building, experimentation, observation, and system development. Using the KMS development practice of the Ethiopian Agricultural Transformation Agency as a case study, this research employed an interpretive analysis using primary qualitative data acquired through in-depth semi-structured interviews and participant observations. Orlikowski's structurational model of technology was used to understand the design, use, and consequences of the KMS. As a result, the research identified three basic components for the architecture of the shared KMS: the people, the resources, and the implementation subsystems. The KMS was developed using Web 2.0 tools to promote knowledge sharing and integration among diverse groups of users in a distributed environment. The use of a shared KMS allows users to access diverse knowledge from a number of users in different participant groups, enhances the exchange of different forms of knowledge and experience, and creates high interaction and collaboration among participants. 
The consequences of a shared KMS for the social system include the elimination of hierarchical structure; enhanced participation, collaboration, and negotiation among users from different CoPs with common interests; knowledge and skill development; the integration of diverse knowledge resources; and the requirement of policy and guidelines. The research contributes methodologically through the application of system development action research to understanding a conceptual framework for KMS development and use. The research also makes a theoretical contribution in extending the structurational model of technology to incorporate a variety of knowledge, and has practical implications in providing management with an understanding of how to develop strategies for the potential of Web 2.0 tools for sharing and integrating indigenous knowledge.

Keywords: communities of practice, indigenous knowledge, participation, structuration model of technology, Web 2.0 tools

Procedia PDF Downloads 230
206 European Project Meter Matters in Sports: Fostering Criteria for Inclusion through Sport

Authors: Maria Campos, Alain Massart, Hugo Sarmento

Abstract:

The Meter Matters Erasmus Sport European project (ID: 101050372) explores the field of social inclusion in and through sports, with the aim of (a) proposing appropriate criteria for co-funding sports programs involving people with intellectual and developmental disabilities and other more vulnerable people, primarily in mainstream sports organizations, and (b) proposing a model for co-funding social inclusion in and through sports at the national level. This European project (2022-2024) involves six partners from three countries: Univerza V Ljubljani (coordinator) and Drustvo Specialna Olimpiada Slovenije (Slovenia); Magyar Specialis Olimpia Szovetseg and Magyar Testnevelesi Es Sporttudomanyi Egyetem (Hungary); and APPDA Coimbra - Associação Portuguesa para as Perturbações do Desenvolvimento e Autismo and the Universidade De Coimbra, Faculty of Sport Sciences and Physical Education (Portugal). The equal involvement of all people in sports activities is, in terms of national and international guidelines, enshrined in conventions and strategies in the field of sports, as well as in human rights, social security, physical and mental health, architecture, environment, and public administration. However, there is a gap between practice and the EU guidelines in terms of sustainable support for socially inclusive sports programs in the form of co-funding from state and local (municipal) resources. We observe considerable opacity in the regulation of the field. Given that both relevant programs and inclusive legislation and policies exist, we believe that the missing link lies in the undeveloped criteria for measuring social inclusion in sports. Major sports programs are usually co-funded based on crowds (the number of involved athletes) and performance (sports scores). In the field of social inclusion in sports, the criteria cannot be the same, as it concerns a smaller population. 
Therefore, the goals of inclusion in sports should not focus on competitive results but on opening equal opportunities to all, regardless of their psychophysical abilities. In the Meter Matters program, we are searching for criteria for co-funding social inclusion in sports through focus groups with coaches, social workers, psychologists, and other professionals involved in inclusive sports programs in regular sports clubs, and with athletes and their parents or guardians. Moreover, experts in the field of social inclusion in sports were also interviewed. Based on the proposals for measuring social inclusion in sports, we developed a model for co-funding socially inclusive sports programs.

Keywords: European project, meter matters, inclusion, sport

Procedia PDF Downloads 83
205 Interpretation of Heritage Revitalization

Authors: Jarot Mahendra

Abstract:

The primary objective of this paper is to offer a perspective on the interpretation of the revitalization of heritage buildings. This objective is achieved by analyzing the concept of interpretation from the perspectives of law, urban spatial planning, and stakeholders, and then developing a theoretical framework of interpretation in cultural resources management around the issues of identity, heritage as a process, and authenticity in heritage. Interpreted through these three issues, revitalization becomes a communication process that expresses the meaning of heritage and its relation to the community, helping to avoid the conflicts that arise and develop from stakeholders' differing perspectives. Using case studies in Indonesia, this study focuses on the revitalization of heritage sites at the National Gallery of Indonesia (GNI). GNI is a cultural institution that uses several historic buildings, some designated as heritage and some not under the regulations applicable in Indonesia, in carrying out its function as the center of Indonesian art development and as an art museum. The revitalization of heritage buildings was undertaken to meet the space needs of GNI's current functions. In the revitalization master plan, there are physical interventions on heritage buildings, and some historic buildings are to be removed and replaced by new buildings at the same location. A research matrix was used to map out the main elements of the study (the concept of GNI revitalization, heritage as identity, heritage as a process, and authenticity in heritage). Expert interviews and document studies were the main tools used in collecting data. Qualitative data were then analyzed through content analysis and template analysis.
This study identifies the significance of the historic buildings (both heritage buildings and buildings not designated as heritage) as carrying important historical, architectural, educational, and cultural value. This significance becomes the basis for revisiting the revitalization master plan, which is then reviewed against applicable regulations and the spatial layout of Jakarta. The interpretation that is built is: (1) GNI is one element of the embodiment of the National Cultural Center in the context of the region, alongside the National Monument, National Museum, and National Library in the same area, so the heritage gives identity not only to past culture but also to the culture of the current community; (2) heritage should be seen as a dynamic cultural process within the community's cultural change, where heritage must develop along with urban development so that heritage buildings can remain alive side by side with modern buildings while still observing the principles of heritage preservation; (3) the authenticity of heritage should balance the cultural heritage conservation approach with urban development, where authenticity can serve as a 'value transmitter' used to evaluate, preserve, and manage heritage buildings by considering tangible and intangible aspects.

Keywords: authenticity, culture process, identity, interpretation, revitalization

Procedia PDF Downloads 126
204 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning

Authors: Shayla He

Abstract:

Background and Purpose: According to Chamie (2017), it is estimated that no less than 150 million people, or about 2 percent of the world's population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend of the homeless population is crucial in helping states and cities plan affordable housing and other community services ahead of time to better prepare for the situation. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic modeling to predict future homeless populations. Using the best model developed, named HP-RNN, an analysis of the homeless population change during 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on homeless populations and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and a Recurrent Neural Network (RNN), respectively, to predict the future trend of the homeless population. Each model was trained and tuned on the dataset from New York City, with accuracy measured by Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using data from Seattle that was not part of the model training and tuning process. Results: Compared to the Linear Regression based model used by HUD et al. (2019), HP-RNN significantly improved the prediction metrics, raising the coefficient of determination (R2) from -11.73 to 0.88 and reducing MSE by 99%.
HP-RNN was then validated on the data from Seattle, WA, which showed a peak percent error of 14.5% between the actual and the predicted counts. Finally, the model was used to predict the trend during the COVID-19 pandemic. It shows a good correlation between the actual and the predicted homeless population, with a peak percent error of less than 8.6%. Conclusions and Implications: This is the first work to apply an RNN to model time series of homelessness-related data. The model shows a close correlation between the actual and the predicted homeless population. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable housing allocation and other community services to better prepare for the future. Second, this prediction can serve as a reference to policy makers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend.
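
The evaluation metrics reported above (MSE, coefficient of determination, peak percent error) can be sketched in plain Python. This does not reproduce HP-RNN itself, and the toy monthly counts below are invented for illustration, not the study's data.

```python
def mse(actual, predicted):
    """Mean squared error between two equal-length series."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_a = sum(actual) / len(actual)
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    return 1.0 - ss_res / ss_tot

def peak_percent_error(actual, predicted):
    """Largest absolute percent deviation between actual and predicted counts."""
    return max(abs(a - p) / a * 100.0 for a, p in zip(actual, predicted))

# invented monthly sheltered-homeless counts, for illustration only
actual = [60000, 61000, 62000, 62679]
predicted = [59500, 61800, 61500, 63200]
print(round(peak_percent_error(actual, predicted), 2))  # → 1.31
```

A negative R2, like the -11.73 baseline reported above, simply means the model predicts worse than a constant equal to the series mean.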

Keywords: homeless, prediction, model, RNN

Procedia PDF Downloads 96
203 Modeling the Acquisition of Expertise in a Sequential Decision-Making Task

Authors: Cristóbal Moënne-Loccoz, Rodrigo C. Vergara, Vladimir López, Domingo Mery, Diego Cosmelli

Abstract:

Our daily interaction with computational interfaces is full of situations in which we go from inexperienced users to experts through self-motivated exploration of the same task. In many of these interactions, we must learn to find our way through a sequence of decisions and actions before obtaining the desired result. For instance, when drawing cash from an ATM, choices are presented step by step, so that a specific sequence of actions must be performed in order to produce the expected outcome. But as they become experts in the use of such interfaces, do users adopt specific search and learning strategies? And if so, can we use this information to follow the process of expertise development and, eventually, predict future actions? This would be a critical step towards building truly adaptive interfaces that can facilitate interaction at different moments of the learning curve. Furthermore, it could provide a window into potential mechanisms underlying decision-making behavior in real-world scenarios. Here we tackle this question using a simple game interface that instantiates a 4-level binary decision tree (BDT) sequential decision-making task. Participants have to explore the interface and discover an underlying concept-icon mapping in order to complete the game. We develop a Hidden Markov Model (HMM)-based approach whereby a set of stereotyped, hierarchically related search behaviors act as hidden states. Using this model, we are able to track the decision-making process as participants explore, learn, and develop expertise in the use of the interface. Our results show that partitioning the problem space into such stereotyped strategies is sufficient to capture a host of exploratory and learning behaviors. Moreover, by treating the modular architecture of stereotyped strategies as a Mixture of Experts, we are able to query the experts for the user's most probable future actions.
We show that for those participants that learn the task, it becomes possible to predict their next decision, above chance, approximately halfway through the game. Our long-term goal is, on the basis of a better understanding of real-world decision-making processes, to inform the construction of interfaces that can establish dynamic conversations with their users in order to facilitate the development of expertise.
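
As a hedged illustration of the idea described above (not the authors' model), the sketch below filters a belief over two invented hidden strategies, "explore" and "exploit", from a user's binary decisions, then uses that belief to predict the next decision. All probabilities are made up for the example.

```python
# Hypothetical two-state HMM: hidden states are stereotyped strategies,
# observations are binary decisions (0 = left branch, 1 = right branch).
TRANS = {"explore": {"explore": 0.7, "exploit": 0.3},
         "exploit": {"explore": 0.1, "exploit": 0.9}}
EMIT = {"explore": {0: 0.8, 1: 0.2},   # invented: explorers favor the left branch
        "exploit": {0: 0.2, 1: 0.8}}   # invented: experts favor the right branch

def filter_belief(observations, prior):
    """Forward algorithm: normalized belief over hidden strategies."""
    belief = dict(prior)
    for obs in observations:
        # predict step: propagate the belief through the transition model
        predicted = {s: sum(belief[r] * TRANS[r][s] for r in belief) for s in TRANS}
        # update step: weight by emission likelihood, then renormalize
        unnorm = {s: predicted[s] * EMIT[s][obs] for s in predicted}
        z = sum(unnorm.values())
        belief = {s: v / z for s, v in unnorm.items()}
    return belief

def predict_next(observations, prior):
    """Most probable next decision given the decision history so far."""
    belief = filter_belief(observations, prior)
    probs = {o: sum(belief[s] * TRANS[s][s2] * EMIT[s2][o]
                    for s in belief for s2 in TRANS)
             for o in (0, 1)}
    return max(probs, key=probs.get)

prior = {"explore": 0.5, "exploit": 0.5}
print(predict_next([1, 1, 1, 1], prior))  # a run of right-branch picks → 1
```

The prediction marginalizes over the hidden strategies, which is the Mixture-of-Experts flavor of querying each expert and weighting its answer by the current belief.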

Keywords: behavioral modeling, expertise acquisition, hidden markov models, sequential decision-making

Procedia PDF Downloads 227
202 Exploring Coexisting Opportunity of Earthquake Risk and Urban Growth

Authors: Chang Hsueh-Sheng, Chen Tzu-Ling

Abstract:

Earthquakes are unpredictable natural disasters, and intense earthquakes have caused serious impacts on socio-economic systems and on environmental and social resilience, further increasing vulnerability. Earthquakes do not kill people; buildings do. Buildings located near earthquake-prone areas and constructed on poorer soils are susceptible to earthquake-induced ground damage. In addition, many existing buildings were built before improved seismic provisions were required in building codes, and inappropriate land use in densely populated areas can make an earthquake disaster far more serious. Indeed, not only do earthquake disasters seriously impact the urban environment, but urban growth can also increase vulnerability. Since the 1980s, 'cutting down risks and vulnerability' has been advocated in both urban planning and architecture, and the concept has moved well beyond retrofitting of seismic damage, seismic resistance, and better anti-seismic structures to become the key action in disaster mitigation. Land use planning and zoning are two critical non-structural measures for controlling physical development, yet it is difficult for zoning boards and governing bodies to restrict development of questionable lands to uses compatible with the hazard without credible earthquake loss projections. Therefore, identifying potential earthquake exposure, vulnerable people and places, and urban development areas can provide strong supporting information for decision makers. Taiwan lies on the Pacific Ring of Fire, a seismically active zone. Some of the active faults have been found close to densely populated and highly developed built environments in the cities. Therefore, this study attempts, from the perspective of carrying capacity, to draft a micro-zonation according to both a vulnerability index and an urban growth index, while considering the spatial variance of multiple factors via geographically weighted principal component analysis (GWPCA).
The purpose of this study is to provide supporting information for decision makers on revising existing zoning in high-risk areas toward more compatible uses, and for the public on managing risks.

Keywords: earthquake disaster, vulnerability, urban growth, carrying capacity, geographically weighted principal components (GWPCA), bivariate spatial association statistic

Procedia PDF Downloads 231
201 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow

Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat

Abstract:

Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that are subject to inter-rater reliability issues. Our solution uses real-time multimodal multisensor data, labeled by objective performance outcomes, to infer the engagement of students. The study involved four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. To achieve this, a new type of continuous performance test is introduced, the Seek-X type. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classifier achieved the best results: 93.3% classification accuracy for engagement and 42.9% accuracy for disengagement. We compared these results to outcomes from different models: AdaBoost, decision tree, k-nearest neighbor, naïve Bayes, neural network, and support vector machine. We showed that using a multisensor approach achieved higher accuracy than using features from any reduced set of sensors, and we found that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for classifying engagement and distraction was shown to be eye gaze.
We have shown that we can accurately predict the level of engagement of students with learning disabilities in real time, in a way that is not subject to inter-rater reliability issues, does not depend on human observation, and is not reliant on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
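
The leave-one-out protocol mentioned above can be sketched as follows; a simple nearest-centroid classifier stands in for the study's random forest, and the two-feature "sessions" are synthetic, chosen only to make the example self-contained.

```python
def nearest_centroid_predict(train, labels, query):
    """Classify `query` by the closer class centroid of the training set."""
    centroids = {}
    for lab in set(labels):
        rows = [x for x, l in zip(train, labels) if l == lab]
        centroids[lab] = [sum(col) / len(rows) for col in zip(*rows)]
    def dist(c):
        return sum((q - ci) ** 2 for q, ci in zip(query, c))
    return min(centroids, key=lambda lab: dist(centroids[lab]))

def leave_one_out_accuracy(features, labels):
    """Hold out each session once, train on the rest, score the held-out one."""
    correct = 0
    for i in range(len(features)):
        train = features[:i] + features[i + 1:]
        train_labels = labels[:i] + labels[i + 1:]
        if nearest_centroid_predict(train, train_labels, features[i]) == labels[i]:
            correct += 1
    return correct / len(features)

# synthetic sessions: [gaze_fixation_ratio, eeg_attention_index] (invented)
features = [[0.9, 0.8], [0.85, 0.75], [0.8, 0.9],
            [0.2, 0.1], [0.15, 0.2], [0.25, 0.15]]
labels = ["engaged", "engaged", "engaged",
          "disengaged", "disengaged", "disengaged"]
print(leave_one_out_accuracy(features, labels))  # → 1.0 on this separable toy set
```

Leave-one-out is a natural choice when, as here, only a small number of sessions per participant is available.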

Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement

Procedia PDF Downloads 71
200 Economic Evaluation of Degradation by Corrosion of an On-Grid Battery Energy Storage System: A Case Study in Algeria Territory

Authors: Fouzia Brihmat

Abstract:

Economic planning models, which are used to build microgrids and distributed energy resources (DER), are the current norm for expressing such confidence. These models often decide both short-term DER dispatch and long-term DER investments. This research investigates the most cost-effective hybrid (photovoltaic-diesel) renewable energy system (HRES), based on Total Net Present Cost (TNPC), in an Algerian Saharan area that has high solar irradiation potential and a production capacity of 1 GWh. Lead-acid batteries have been around much longer and are easier to understand but have limited storage capacity. Lithium-ion batteries last longer and are lighter but are generally more expensive. By combining the advantages of each chemistry, we produce cost-effective high-capacity battery banks that operate solely on AC coupling. The financial analysis in this research accounts for the corrosion process that occurs at the interface between the active material and the grid material of the positive plate of a lead-acid battery. The cost study for the HRES is completed with the assistance of the HOMER Pro MATLAB Link. Additionally, the system is simulated at each time step over the course of the project's 20 years. The model takes into consideration the decline in solar efficiency, changes in battery storage levels over time, and rises in fuel prices above the rate of inflation. The trade-off is that the model is more precise, but the computation takes longer. We initially utilized the Optimizer to run the model without MultiYear in order to discover the best system architecture. The optimal system for the single-year scenario is a 760 kW Danvest generator with 200 kWh of lead-acid storage, at a somewhat lower COE of $0.309/kWh.
Different scenarios that account for fluctuations in the gasified biomass generator's electricity production have been simulated, and various strategies to guarantee the balance between generation and consumption have been investigated. The technological optimization of the same system has been completed and is reviewed in a recent paper.
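
A minimal sketch of the Total Net Present Cost and levelized cost-of-energy arithmetic that tools like HOMER perform internally. The discount rate, cost figures, salvage value, and energy output below are assumptions for the example, not the study's inputs.

```python
def total_net_present_cost(capital, annual_cost, salvage, years, discount_rate):
    """Initial capital plus discounted O&M/fuel costs, minus discounted salvage."""
    npc = capital
    for year in range(1, years + 1):
        npc += annual_cost / (1 + discount_rate) ** year
    npc -= salvage / (1 + discount_rate) ** years
    return npc

def cost_of_energy(tnpc, annual_energy_kwh, years, discount_rate):
    """Levelized COE: TNPC spread over discounted lifetime energy production."""
    discounted_energy = sum(annual_energy_kwh / (1 + discount_rate) ** y
                            for y in range(1, years + 1))
    return tnpc / discounted_energy

# illustrative 20-year project at an assumed 8% real discount rate
tnpc = total_net_present_cost(capital=1_500_000, annual_cost=120_000,
                              salvage=100_000, years=20, discount_rate=0.08)
print(round(cost_of_energy(tnpc, annual_energy_kwh=900_000, years=20,
                           discount_rate=0.08), 3))  # → 0.301
```

Discounting both costs and energy by the same rate is what makes the COE "levelized": a dollar and a kilowatt-hour in year 20 count for less than in year 1.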

Keywords: battery, corrosion, diesel, economic planning optimization, hybrid energy system, lead-acid battery, multi-year planning, microgrid, price forecast, PV, total net present cost

Procedia PDF Downloads 67
199 Sexuality Education through Media and Technology: Addressing Unmet Needs of Adolescents in Bangladesh

Authors: Farhana Alam Bhuiyan, Saad Khan, Tanveer Hassan, Jhalok Ranjon Talukder, Syeda Farjana Ahmed, Rahil Roodsaz, Els Rommes, Sabina Faiz Rashid

Abstract:

'Breaking the Shame' is a 3-year (2015-2018) qualitative implementation research project investigating several aspects of sexual and reproductive health and rights (SRHR) for adolescents living in Bangladesh, where the scope for adolescents to learn about SRHR is limited by cultural and religious taboos. This study adds to the ongoing discussions around adolescents' SRHR needs and aims to: 1) understand the overall SRHR needs of urban and rural unmarried female and male adolescents and the challenges they face, 2) explore existing gaps in the content of the SRHR curriculum, and 3) address some critical knowledge gaps by developing and implementing innovative SRHR educational materials. 18 in-depth interviews (IDIs) and 10 focus-group discussions (FGDs) were conducted with boys, and 21 IDIs and 14 FGDs with girls, aged 13-19, from both urban and rural settings. Curriculum materials from two leading organizations, Unite for Body Rights (UBR) Alliance Bangladesh and the BRAC Adolescent Development Program (ADP), were also reviewed, alongside discussions with 12 key program staff. This paper critically analyses the relevance of some of the SRHR topics that are covered, the challenges with existing pedagogic approaches, and key sexuality issues that are not covered in the content but are important for adolescents. Adolescents asked for content and guidance on a number of topics missing from the core curriculum, such as emotional coping mechanisms (particularly in relationships), bullying, the impact of exposure to porn, and sexual performance anxiety. Other core areas of concern were the effects of masturbation, condom use, and sexual desire and orientation, which are mentioned in the content but never discussed properly, resulting in confusion. Due to the lack of open discussion around sexuality, porn becomes a source of information for adolescents.
For these reasons, several myths and misconceptions regarding SRHR issues like the body, sexuality, agency, and gender roles still persist. The pedagogical approach is very didactic, and teachers felt uncomfortable discussing certain SRHR topics due to cultural taboos, shame, and stigma. Certain topics, such as family planning and menstruation, are favored and presented with an emphasis on biology and risk. A rigid, formal teaching style and hierarchical power relations between students and most teachers discourage questions and frank conversations. Pedagogical approaches within classrooms play a critical role in the sharing of knowledge. The paper also describes the pilot approaches to implementing new content in the SRHR curriculum. After a review of the findings, three areas raised by adolescents were selected as critically important: 1) myths and misconceptions, 2) emotional management challenges, and 3) how to use a condom. Technology-centric educational materials, such as a web-based information platform and YouTube videos, were chosen because they allow adolescents to bypass gatekeepers and learn facts and information from a legitimate educational site. In the era of social media, when information is always a click away, adolescents need sources that are reliable and not overwhelming. The research aims to ensure that adolescents learn and apply knowledge effectively by creating the new materials and making them accessible to adolescents.

Keywords: adolescents, Bangladesh, media, sexuality education, unmet needs

Procedia PDF Downloads 202
198 Body, Experience, Sense, and Place: Past and Present Sensory Mappings of Istiklal Street in Istanbul

Authors: Asiye Nisa Kartal

Abstract:

The starting point for this study is an attempt to uncover the links between the sensory experiences (intangible qualities) and the physical setting (tangible qualities) of Istiklal Street in Istanbul. The street's dramatic physical changes, and their current impact on its sensory attributes, directed this study to consider the role that a changing physical layout plays in the sensory dimensions of urban places, dimensions that have a subtle but important role in their examination. Public places have always been subject to transformation; in recent years, changing socio-cultural structures, economic and political movements, laws and city regulations, and innovations in transportation and communication have resulted in a controversial transformation of Istanbul. As the culture, entertainment, tourism, and shopping focus of Istanbul, Istiklal Street has witnessed several stages of change over the last years. In this process, because of the projects being implemented, many buildings that were significant to the qualitative value of this area, such as cinemas, theatres, and bookstores, have been restored, moved, converted, closed, or demolished. The multi-layered socio-cultural and architectural structure of Istiklal Street has thus been changing in a dramatic and controversial way. Importantly, while the physical setting of Istiklal Street has changed, the transformation has not only been spatial, socio-cultural, and economic; unavoidably, the sensory dimensions of Istiklal Street, which are of great importance to the intangible qualities of this area, have begun to lose their distinctive features. This created the challenge of this research.
As its main hypothesis, this study claims that the physical transformations have changed the sensory character of Istiklal Street; the sensescape of Istiklal Street therefore deserves to be recorded, decoded, and promoted as expeditiously as possible, to observe the sensory reflections of the physical transformations in this area. With the help of 'sensewalking', an efficient research tool for generating knowledge on the sensory dimensions of an urban settlement, this study proposes a way of 'mapping' to understand how changes in the physical setting affect the sensory qualities of Istiklal Street that have changed or been lost over time. Basically, this research focuses on the sensory mapping of Istiklal Street from the 1990s until today, to picture, interpret, and critique the sensory mapping of Istiklal Street in the present and in the past. Through this sensory mapping, the study intends to increase awareness of the distinctive sensory qualities of places, and it is worthwhile for further studies that consider the sensory dimensions of places, especially in the field of architecture.

Keywords: Istiklal street, sense, sensewalking, sensory mapping

Procedia PDF Downloads 146
197 Building Exoskeletons for Seismic Retrofitting

Authors: Giuliana Scuderi, Patrick Teuffel

Abstract:

The proven vulnerability of the existing social housing building stock to natural or induced earthquakes requires the development of new design concepts and methods that preserve materials and objects while providing new performance. An integrated intervention between civil engineering, building physics, and architecture can convert social housing districts from a critical part of the city into a strategic resource for revitalization. Drawing on biomimicry principles, the present research proposes an analogy with the insect exoskeleton: an external, light, and resistant armour whose role is to protect the internal organs from potentially dangerous external inputs. In the same way, a “building exoskeleton”, acting from the outside of the building as an enclosing cage, can restore, protect, and support the existing building, assuming a complex set of roles, from the structural to the thermal, from the aesthetic to the functional. This study evaluates the structural efficiency of shape memory alloy devices (SMADs) connecting the “building exoskeleton” with the existing structure to be rehabilitated, in order to prevent the out-of-plane collapse of walls and to passively dissipate seismic energy, with an operability calibrated to the intensity of the horizontal loads. Two case studies are considered, a masonry structure and a masonry structure with a concrete frame; for each, a theoretical social housing building is exposed to earthquake forces to evaluate its structural response with and without SMADs. The two typologies are modelled in the finite element program SAP2000, defined respectively through a “frame model” and a “diagonal strut model”.
In the same software, two types of SMADs, called the 00-10 SMAD and the 05-10 SMAD, are defined, and non-linear static and dynamic analyses, namely pushover analysis and time history analysis, are performed to evaluate the seismic response of the building. The effectiveness of the devices in limiting the control joint displacements proved higher in one direction, suggesting a calibrated use of the devices in the different walls of the building. The results also show a higher efficiency of the 00-10 SMADs in controlling the interstory drift, but at the same time the need to improve the hysteretic behaviour in order to maximise the passive dissipation of seismic energy.

Keywords: adaptive structure, biomimetic design, building exoskeleton, social housing, structural envelope, structural retrofitting

Procedia PDF Downloads 401
196 SAFECARE: Integrated Cyber-Physical Security Solution for Healthcare Critical Infrastructure

Authors: Francesco Lubrano, Fabrizio Bertone, Federico Stirano

Abstract:

Modern societies strongly depend on Critical Infrastructures (CI). Hospitals, power supplies, water supplies, and telecommunications are just a few examples of CIs that provide vital functions to societies. CIs like hospitals are very complex environments, characterized by a huge number of cyber and physical systems that are becoming increasingly integrated. Ensuring a high level of security within such critical infrastructures requires a deep knowledge of vulnerabilities, threats, and potential attacks that may occur, as well as defence, prevention, and mitigation strategies. The possibility to remotely monitor and control almost everything is pushing the adoption of network-connected devices. This implicitly introduces new threats and potential vulnerabilities, posing a risk especially to those devices connected to the Internet. Modern medical devices used in hospitals are no exception and are increasingly being connected to enhance their functionality and ease their management. Moreover, hospitals are environments with high flows of people who are difficult to monitor and can fairly easily gain access to the same places used by the staff, potentially causing damage. It is therefore clear that physical and cyber threats should be considered, analysed, and treated together as cyber-physical threats, which requires an integrated approach. SAFECARE, an integrated cyber-physical security solution, responds to these issues within healthcare infrastructures. The challenge is to bring together the most advanced technologies from the physical and cyber security spheres, to achieve a global optimum for systemic security and for the management of combined cyber and physical threats and incidents and their interconnections. Moreover, potential impacts and cascading effects are evaluated through impact propagation models that rely on modular ontologies and a rule-based engine.
Indeed, the SAFECARE architecture foresees: i) a cyber security macroblock, where innovative tools are deployed to monitor network traffic, systems, and medical devices; ii) a physical security macroblock, where video management systems are coupled with access control management, building management systems, and innovative AI algorithms to detect behavioral anomalies; and iii) an integration system that collects all incoming incidents, simulates their potential cascading effects, and provides alerts and updated information regarding asset availability.
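
A toy sketch of the rule-based impact-propagation idea, cascading effects over an asset-dependency graph. The asset names and dependencies below are invented for illustration and do not reflect SAFECARE's actual ontology or rule engine.

```python
# "A depends on B" edges: if B is impacted, A may be impacted too (invented).
DEPENDS_ON = {
    "patient_monitoring": ["hospital_network", "power_supply"],
    "hospital_network": ["power_supply"],
    "access_control": ["hospital_network"],
}

def propagate_impact(incident_asset):
    """Return the set of assets affected by an incident, including cascades."""
    affected = {incident_asset}
    changed = True
    while changed:  # keep applying rules until a fixed point is reached
        changed = False
        for asset, deps in DEPENDS_ON.items():
            if asset not in affected and any(d in affected for d in deps):
                affected.add(asset)
                changed = True
    return affected

print(sorted(propagate_impact("power_supply")))
```

Running the rules to a fixed point is what captures multi-hop cascades: a power incident reaches patient monitoring both directly and via the hospital network.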

Keywords: cyber security, defence strategies, impact propagation, integrated security, physical security

Procedia PDF Downloads 142
195 Discovering the Effects of Meteorological Variables on the Air Quality of Bogota, Colombia, by Data Mining Techniques

Authors: Fabiana Franceschi, Martha Cobo, Manuel Figueredo

Abstract:

Bogotá, the capital of Colombia, is its largest city and one of the most polluted in Latin America due to fast economic growth over the last ten years. Bogotá has been affected by high pollution events leading to high concentrations of PM10 and NO2 that exceed the local 24-hour legal limits (100 and 150 µg/m³, respectively). The most important pollutants in the city are PM10 and PM2.5, which are associated with respiratory and cardiovascular problems, and it is known that their concentrations in the atmosphere depend on local meteorological factors. Therefore, it is necessary to establish relationships between the meteorological variables and the concentrations of atmospheric pollutants such as PM10, PM2.5, CO, SO2, NO2, and O3. This study aims to determine the interrelations between meteorological variables and air pollutants in Bogotá using data mining techniques. Data from 13 monitoring stations were collected from the Bogotá Air Quality Monitoring Network for the period 2010-2015. The Principal Component Analysis (PCA) algorithm was applied to obtain preliminary relations between all the parameters, and afterwards, the K-means clustering technique was implemented to corroborate those relations and to find patterns in the data. PCA was also used on a per-shift basis (morning, afternoon, night, and early morning) to check for variation in the previous trends, and on a per-year basis to verify that the identified trends held throughout the study period. Results demonstrated that wind speed, wind direction, temperature, and NO2 are the factors that most influence PM10 concentrations. Furthermore, it was confirmed that high-humidity episodes increase PM2.5 levels. It was also found that O3 levels are directly proportional to wind speed and radiation, and inversely related to humidity.
Concentrations of SO2 increase with the presence of PM10 and decrease with wind speed and wind direction. The results also showed a decreasing trend in pollutant concentrations over the last five years. In addition, in rainy periods (March-June and September-December) some trends related to precipitation were stronger. Results obtained with K-means demonstrated that it was possible to find patterns in the data, and they also showed similar conditions and data distributions among the Carvajal, Tunal and Puente Aranda stations, and between Parque Simón Bolívar and Las Ferias. It was verified that the aforementioned trends prevailed during the study period by applying the same technique per year. It was concluded that the PCA algorithm is useful for establishing preliminary relationships among variables, and K-means clustering for finding patterns in the data and understanding their distribution. The discovery of patterns in the data allows using these clusters as input to an Artificial Neural Network prediction model.
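The PCA-then-K-means workflow described above can be sketched in a few lines. This is a minimal illustration with synthetic data standing in for the station records (the variable set and cluster count mirror the abstract; the matrix itself is random):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical observation matrix: rows = hourly records, columns =
# (wind speed, wind direction, temperature, humidity, PM10, PM2.5, NO2, O3)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))

# Standardize, then extract principal components to expose
# preliminary relationships among the variables
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=3)
scores = pca.fit_transform(X_std)

# K-means on the PCA scores to corroborate those relations and to
# group periods/stations with similar conditions
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
```

Repeating the same fit on per-shift or per-year slices of the matrix reproduces the validation steps the study describes; the resulting `labels` are what would feed the ANN prediction model mentioned at the end.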

Keywords: air pollution, air quality modelling, data mining, particulate matter

Procedia PDF Downloads 235
194 Systematic Study of Structure Property Relationship in Highly Crosslinked Elastomers

Authors: Natarajan Ramasamy, Gurulingamurthy Haralur, Ramesh Nivarthu, Nikhil Kumar Singha

Abstract:

Elastomers are polymeric materials with backbone architectures ranging from linear to dendrimeric structures and a wide variety of monomeric repeat units. Uncrosslinked, these elastomers are strongly viscous and only weakly elastic; once crosslinked, depending on the extent of crosslinking, their properties can range from highly flexible to highly stiff. Lightly crosslinked systems are well studied and reported. Understanding the nature of highly crosslinked rubber in terms of chemical structure and architecture is critical for a variety of applications. One of the critical parameters is crosslink density. In the current work, we have studied the highly crosslinked state of linear, lightly branched and star-shaped branched elastomers and determined the crosslink density using different models. Changes in hardness, shifts in Tg, changes in modulus and swelling behavior were measured experimentally as a function of the extent of curing. These properties were analyzed using various models to determine crosslink density. We used hardness measurements to examine cure time, and the relationship between hardness and the extent of curing was determined. It is well known that micromechanical transitions like Tg and storage modulus are related to the extent of crosslinking. The Tg of the elastomer in different crosslinked states was determined by DMA, and based on the plateau modulus the crosslink density was estimated using Nielsen's model. For lightly crosslinked systems, the crosslink density is usually estimated from the equilibrium swelling ratio in a solvent using the Flory-Rehner model. For highly crosslinked systems, however, the Flory-Rehner model is not valid because of the shorter chain lengths. Therefore, models that treat the polymer as a non-Gaussian chain, namely 1) the Helmis-Heinrich-Straube (HHS) model, 2) the Gusler and Cohen model, and 3) the Barr-Howell and Peppas model, were used to estimate crosslink density.
In this work, correction factors for the existing models were determined, and based on these, the structure-property relationship of highly crosslinked elastomers was studied.
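For reference, the Flory-Rehner estimate that the non-Gaussian models correct can be computed directly from the equilibrium swelling data. A minimal sketch, using the standard form of the equation with illustrative input values (the solvent molar volume and interaction parameter below are hypothetical, not taken from the study):

```python
import math

def flory_rehner_crosslink_density(v2, chi, V1):
    """Crosslink density (mol of effective network chains per cm^3)
    from the Flory-Rehner equation:
        nu = -[ln(1 - v2) + v2 + chi*v2^2] / [V1 * (v2^(1/3) - v2/2)]
    v2  : equilibrium polymer volume fraction in the swollen gel
    chi : Flory-Huggins polymer-solvent interaction parameter
    V1  : molar volume of the solvent (cm^3/mol)
    Valid only for lightly crosslinked (Gaussian-chain) networks, which
    is exactly the limitation the abstract notes for highly crosslinked
    systems."""
    numerator = -(math.log(1.0 - v2) + v2 + chi * v2 ** 2)
    denominator = V1 * (v2 ** (1.0 / 3.0) - v2 / 2.0)
    return numerator / denominator

# Illustrative call: toluene-like solvent, V1 ~ 106.3 cm^3/mol
nu = flory_rehner_crosslink_density(v2=0.2, chi=0.40, V1=106.3)
```

At high crosslink densities v2 grows large and the Gaussian-chain assumption behind the denominator breaks down, which is why the HHS and related models, with the correction factors determined in this work, take over in that regime.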

Keywords: dynamic mechanical analysis, glass transition temperature, parts per hundred grams of rubber, crosslink density, number of networks per unit volume of elastomer

Procedia PDF Downloads 140
193 Exploration of Building Information Modelling Software to Develop Modular Coordination Design Tool for Architects

Authors: Muhammad Khairi bin Sulaiman

Abstract:

The utilization of Building Information Modelling (BIM) in the construction industry has provided an opportunity for designers in the Architecture, Engineering and Construction (AEC) industry to move from conventional manual drafting to an approach that creates alternative designs quickly and produces more accurate, reliable and consistent outputs. By using BIM software, designers can create digital content that manipulates data through BIM's parametric model. With BIM software, more alternative designs can be created quickly and design problems can be explored further to produce a better design faster than with conventional design methods. Generally, however, BIM is used as a documentation mechanism, and its capabilities as a design tool have not been fully explored and utilised. In this context, Modular Coordination (MC) design is encouraged as a sustainable design practice, since MC design reduces material wastage through standard dimensioning, pre-fabrication, and repetitive, modular construction and components. However, MC design involves a complex process of rules and dimensions, and a tool is therefore needed to make this process easier. Since the parameters in BIM can easily be manipulated to follow MC rules and dimensioning, the integration of BIM software with MC design is proposed for architects during the design stage. With such a tool, the acceptance and effective application of MC design in practice can be improved. Consequently, this study will analyse and explore the function and customization of BIM objects and the capability of BIM software to expedite the application of MC design during the design stage. With this application, architects will be able to create building models and locate objects within reference modular grids that adhere to MC rules and dimensions.
The parametric modeling capabilities of BIM will also act as a visual tool that further enhances the automation of the 3-dimensional space-planning modeling process. The study will first analyse and explore the parametric modeling capabilities of rule-based BIM objects, and then customize a reference grid within the rules and dimensioning of MC. This approach will enhance the architect's overall design process and enable architects to automate complex modeling that was nearly impossible before. A prototype using a residential quarter will be modeled, and a set of reference grids guided by specific MC rules and dimensions will be used to develop a variety of space plans and configurations. With the use of this tool, the design process will be expedited and the use of MC design in the construction industry encouraged.
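The core operation such a tool would automate, placing objects on a reference modular grid, can be sketched independently of any particular BIM API. The module sizes below follow common MC convention (100 mm basic module, 300 mm planning multi-module) but are illustrative assumptions, not values from the study:

```python
# Hypothetical MC grid-snapping helper, not tied to any specific BIM
# software's object model.
BASIC_MODULE_MM = 100      # 1M basic module
PLANNING_MODULE_MM = 300   # 3M multi-module, often used for planning grids

def snap_to_grid(coord_mm, module_mm=PLANNING_MODULE_MM):
    """Snap one coordinate (mm) to the nearest modular grid line."""
    return round(coord_mm / module_mm) * module_mm

def snap_point(x_mm, y_mm, module_mm=PLANNING_MODULE_MM):
    """Snap a 2D placement point onto the reference modular grid."""
    return (snap_to_grid(x_mm, module_mm), snap_to_grid(y_mm, module_mm))

# e.g. a wall endpoint drawn freehand at (3140, 2875) lands on the 3M grid
print(snap_point(3140, 2875))  # -> (3000, 3000)
```

In a real implementation, this rule would be encoded in the parametric constraints of the customized BIM objects themselves, so that placements conform to MC dimensions as the architect models.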

Keywords: building information modeling, modular coordination, space planning, customization, BIM application, MC space planning

Procedia PDF Downloads 61
192 Enhanced Furfural Extraction from Aqueous Media Using Neoteric Hydrophobic Solvents

Authors: Ahmad S. Darwish, Tarek Lemaoui, Hanifa Taher, Inas M. AlNashef, Fawzi Banat

Abstract:

This research reports a systematic top-down approach for designing neoteric hydrophobic solvents –particularly, deep eutectic solvents (DES) and ionic liquids (IL)– as furfural extractants from aqueous media for the application of sustainable biomass conversion. The first stage of the framework entailed screening 32 neoteric solvents to determine their efficacy against toluene as the application’s conventional benchmark for comparison. The selection criteria for the best solvents encompassed not only their efficiency in extracting furfural but also low viscosity and minimal toxicity levels. Additionally, for the DESs, their natural origins, availability, and biodegradability were also taken into account. From the screening pool, two neoteric solvents were selected: thymol:decanoic acid 1:1 (Thy:DecA) and trihexyltetradecylphosphonium bis(trifluoromethylsulfonyl)imide [P₁₄,₆,₆,₆][NTf₂]. These solvents outperformed the toluene benchmark, achieving efficiencies of 94.1% and 97.1%, respectively, compared to toluene’s 81.2%, while also possessing the desired properties. These solvents were then characterized thoroughly in terms of their physical, thermal, and critical properties and cross-contamination solubilities. The selected neoteric solvents were then extensively tested under various operating conditions and exhibited exceptionally stable performance, maintaining high efficiency across a broad range of temperatures (15–100 °C), pH levels (1–13), and furfural concentrations (0.1–2.0 wt%) with a remarkable equilibrium time of only 2 minutes; most notably, they demonstrated high efficiencies even at low solvent-to-feed ratios. The durability of the neoteric solvents was also validated over multiple extraction-regeneration cycles, with limited leachability into the aqueous phase (≈0.1%).
The extraction performance of the solvents was then modeled through machine learning, specifically multiple non-linear regression (MNLR) and artificial neural networks (ANN). The models demonstrated high accuracy, indicated by their low absolute average relative deviations, with values of 2.74% and 2.28% for Thy:DecA and [P₁₄,₆,₆,₆][NTf₂], respectively, using MNLR, and 0.10% for Thy:DecA and 0.41% for [P₁₄,₆,₆,₆][NTf₂] using ANN, highlighting the significantly enhanced predictive accuracy of the ANN. The neoteric solvents presented herein offer noteworthy advantages over traditional organic solvents, including their high efficiency in both extraction and regeneration processes and their stability and minimal leachability, making them particularly suitable for applications involving aqueous media. Moreover, these solvents are more environmentally friendly, incorporating renewable and sustainable components like thymol and decanoic acid. This exceptional efficacy of the newly developed neoteric solvents represents a significant advancement, providing a green and sustainable alternative for furfural production from biowaste.
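The AARD metric used to score the MNLR and ANN models, together with a small ANN fit, can be sketched as follows. The feature set and the synthetic data below are illustrative assumptions (the operating-condition ranges are taken from the abstract, but the response surface is invented):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def aard_percent(y_true, y_pred):
    """Absolute average relative deviation (%), the accuracy metric
    reported for the extraction models."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_pred - y_true) / y_true))

# Synthetic training set: columns = temperature (15-100 C), pH (1-13),
# furfural feed concentration (0.1-2.0 wt%), solvent-to-feed ratio;
# target = extraction efficiency (%). Not the study's data.
rng = np.random.default_rng(1)
X = rng.uniform([15, 1, 0.1, 0.5], [100, 13, 2.0, 2.0], size=(200, 4))
y = 90 + 5 * np.tanh(X[:, 0] / 50) - 0.2 * X[:, 1] + rng.normal(0, 0.5, 200)

# A small feedforward ANN as a stand-in for the paper's ANN model
ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                   random_state=1).fit(X, y)
score = aard_percent(y, ann.predict(X))
```

AARD is a relative metric, which is why sub-1% values for the ANN (0.10% and 0.41% above) indicate very tight agreement with the measured efficiencies.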

Keywords: sustainable biomass conversion, furfural extraction, ionic liquids, deep eutectic solvents

Procedia PDF Downloads 39
191 Historiography of European Urbanism in the 20th Century in Slavic Languages

Authors: Aliaksandr Shuba, Max Welch Guerra, Martin Pekar

Abstract:

The research is dedicated to the historiography of European urbanism in the 20th century, with a critical analysis of transnationally oriented sources in Slavic languages. The goal of this research was to give an overview of Slavic sources on this subject. The research analyses historians from Eastern, Central and South-eastern Europe who wrote influential historiographies of 20th-century architecture and urbanism in Slavic languages. The analysis of historiographies in Slavic languages includes diverse sources from around Europe by authors who examined European urbanism in the 20th century through a global prism or through their own perspectives. The main publications are from the second half of the 20th century and the early 21st century, with Soviet and post-Soviet discourses. The necessity to analyse Slavic sources results from the establishment of the historiography of urbanism as a discipline in the 20th century by Soviet, Czechoslovak, and Yugoslav academics, who in the early 1970s created strong historiographic bases for the development of their urban historiographic schools and for wide-ranging studies of architectural and urban ideas and projects and their history. These Slavic publications often carry perspectives and discourses different from Anglo-Saxon ones, and these bibliographic sources can bring a diversity of new ideas into the contemporary academic discourse of European urban historiography. The publications in Slavic languages are analyzed according to the following aspects: where, when, in which types, by whom, and for whom the sources were written. The critical analysis of essential sources on the historiography of European urbanism in the 20th century is accomplished through their comparison and interpretation.
The authors’ autonomy is analysed as a central point, along with the influence of the Communist Party and state control on the interpretation of the history of urbanism in Central, Eastern and South-eastern Europe, together with the dominant topics and ideas from the second half of the 20th century. Cross-national Slavic historiographic sources and their perspectives are compared with the main transnational Anglo-Saxon historiographic topics: some of the dominant subjects, topics, and subtopics are hypothetically similar, while others have more locally or nationally oriented directions because of the authors’ autonomy and the influence of the Communist Party and state control in the Slavic socialist countries, as illustrated in this research.

Keywords: European urbanism, historiography, different perspectives, 20th century

Procedia PDF Downloads 146
190 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series

Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold

Abstract:

To address the global challenges of climate and environmental change, there is a need to quantify and reduce uncertainties in environmental data, including observations of carbon, water, and energy. Global eddy covariance flux tower networks (FLUXNET) and their regional counterparts (i.e., OzFlux, AmeriFlux, ChinaFLUX, etc.) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance for validating process modelling analyses, field surveys and remote sensing assessments, there are serious concerns regarding the challenges associated with the technique, e.g. data gaps and uncertainties. To address these concerns, this research has developed an ensemble model to fill the data gaps in CO₂ flux, avoiding the limitations of using a single algorithm and therefore providing lower error and reducing the uncertainties associated with the gap-filling process. In this study, data from five towers in the OzFlux Network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNNs) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB when each of these methods was used individually, with overall RMSEs of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹, respectively (3.54 provided by the best FFNN). The most significant improvement occurred in the estimation of extreme diurnal values (during midday and sunrise), as well as nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling.
The towers, as well as seasonality, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. In addition, the performance difference between the ensemble model and its individual components was more pronounced during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than during the cold season (Apr, May, Jun, Jul, Aug, and Sep), due to the higher amount of photosynthesis, which led to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the model. Ensemble machine learning models are therefore potentially capable of improving data estimation and regression outcomes when there seems to be no more room for improvement with a single algorithm.
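The two-layer architecture described above, five FFNNs feeding a boosting model, can be sketched as follows. This is a simplified illustration on synthetic data, not OzFlux observations, and sklearn's GradientBoostingRegressor stands in for XGBoost to keep the example dependency-free:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for meteorological drivers and CO2 flux
rng = np.random.default_rng(42)
X = rng.normal(size=(600, 6))
y = X[:, 0] * 2 + np.sin(X[:, 1]) + rng.normal(0, 0.1, 600)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# First layer: five FFNNs with different hidden-layer structures
structures = [(8,), (16,), (32,), (16, 8), (32, 16)]
ffnns = [MLPRegressor(hidden_layer_sizes=s, max_iter=3000,
                      random_state=0).fit(X_tr, y_tr) for s in structures]

# Second layer: boosting on the stacked first-layer estimates
Z_tr = np.column_stack([m.predict(X_tr) for m in ffnns])
booster = GradientBoostingRegressor(random_state=0).fit(Z_tr, y_tr)

Z_te = np.column_stack([m.predict(X_te) for m in ffnns])
rmse = np.sqrt(np.mean((booster.predict(Z_te) - y_te) ** 2))
```

Note that this sketch trains the second layer on in-sample first-layer predictions for brevity; a careful stacking implementation would use out-of-fold first-layer predictions to avoid leaking training information into the booster.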

Keywords: carbon flux, Eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network

Procedia PDF Downloads 110
189 Emissions and Total Cost of Ownership Assessment of Hybrid Propulsion Concepts for Bus Transport with Compressed Natural Gases or Diesel Engine

Authors: Volker Landersheim, Daria Manushyna, Thinh Pham, Dai-Duong Tran, Thomas Geury, Omar Hegazy, Steven Wilkins

Abstract:

Air pollution is one of the emerging problems in our society. Targets for the reduction of CO₂ emissions call for low-carbon and resource-efficient transport. (Plug-in) hybrid electric propulsion concepts offer the possibility to reduce total cost of ownership (TCO) and emissions for public transport vehicles (e.g., bus applications). In this context, diesel engines are typically used to form the hybrid propulsion system of the vehicle. Though the technological development of diesel engines has brought major advances, some challenges, such as the high amount of particle emissions, remain relevant. Gaseous fuels (i.e., compressed natural gas (CNG) or liquefied petroleum gas (LPG)) represent an attractive alternative to diesel because of their composition. In the framework of the EU-funded research project 'Optimised Real-world Cost-Competitive Modular Hybrid Architecture' (ORCA), two different hybrid-electric propulsion concepts have been investigated: one using a diesel engine as the internal combustion engine and one using CNG as fuel. The aim of the current study is to analyze the specific benefits of the aforementioned hybrid propulsion systems for predefined driving scenarios with regard to emissions and total cost of ownership in bus applications. Engine models based on experimental data for diesel and CNG were developed. For the purpose of designing optimal energy management strategies for each propulsion system, map-driven or quasi-static models for the specific engine types are used in the simulation framework. An analogous modelling approach has been chosen to represent emissions. This paper compares the two concepts regarding their CO₂ and NOx emissions. This comparison is performed for relevant bus missions (urban, suburban, with and without zero-emission zones) and with different energy management strategies.
In addition to the emissions, the downsizing potential of the combustion engine has also been analysed to minimize the powertrain TCO (pTCO) for plug-in hybrid electric buses. The results of the performed analyses show that the hybrid vehicle concept using the CNG engine has advantages with respect both to emissions and to pTCO: the pTCO is 10% lower, CO₂ emissions are 13% lower, and NOx emissions are more than 50% lower than with the diesel combustion engine. These results are consistent across all usage profiles under investigation.

Keywords: bus transport, emissions, hybrid propulsion, pTCO, CNG

Procedia PDF Downloads 117
188 The Neuropsychology of Obsessive Compulsion Disorder

Authors: Mia Bahar, Özlem Bozkurt

Abstract:

Obsessive-compulsive disorder (OCD) is a common, persistent, and long-lasting mental health condition in which a person experiences uncontrollable, recurrent thoughts ('obsessions') and/or activities ('compulsions') that they feel compelled to engage in repeatedly. Obsessive-compulsive disorder is both underdiagnosed and undertreated. It frequently manifests in a variety of medical settings and is persistent, expensive, and burdensome. Obsessive-compulsive neurosis was long believed to be a condition that offered valuable insight into the inner workings of the unconscious mind. Obsessive-compulsive disorder is now recognized as a prime example of a neuropsychiatric condition mediated by pathology in particular neural circuits and susceptible to particular pharmacotherapeutic and psychotherapeutic treatments. OCD usually has two components, one cognitive and the other behavioral, although either can occur alone. Obsessions are repetitive and intrusive thoughts that invade consciousness and are extremely hard to control or dismiss. People who have OCD often engage in rituals to reduce the anxiety associated with intrusive thoughts. Once the ritual is formed, the person may feel extreme relief and be free from anxiety until the intrusive thoughts, such as thoughts of contamination, return. These rituals are strengthened through negative reinforcement because they allow the person to avoid anxiety and uncertainty. The thoughts are described as autogenous, meaning they seem to come from nowhere. These unwelcome thoughts become linked to actions through what is termed thought-action fusion: the thought becomes equated with an action, so that if the person refuses to perform the ritual, something bad might happen, and the ritual is performed to escape the intrusive thought. In almost all cases of OCD, the person's life is severely disrupted by compulsions and obsessions.
Studies show that OCD has an estimated prevalence of 1.1%, making it a challenging issue with high comorbidity with other conditions such as depressive episodes, panic disorder, and specific phobias. Numerous CT investigations were the first to reveal brain anomalies in OCD, although the results were inconsistent. A few studies have focused on the orbitofrontal cortex (OFC), anterior cingulate gyrus (AC), and thalamus, structures also implicated in the pathophysiology of OCD by functional neuroimaging studies, but few have found consistent results. However, some studies have found abnormalities in the basal ganglia. There has also been discussion of whether OCD might be genetic. OCD has been linked to families in studies of family aggregation, and findings from twin studies show that this relationship is partly influenced by genetic variables. Some research has shown that OCD is a heritable, polygenic condition that can result from de novo harmful mutations as well as from common and rare variants. Numerous studies have also presented solid evidence in favor of a significant additive genetic component to OCD risk, with distinct OCD symptom dimensions showing both shared and individual genetic risks.

Keywords: compulsions, obsessions, neuropsychiatric, genetic

Procedia PDF Downloads 48
187 Soybean Seed Composition Prediction From Standing Crops Using Planet Scope Satellite Imagery and Machine Learning

Authors: Supria Sarkar, Vasit Sagan, Sourav Bhadra, Meghnath Pokharel, Felix B.Fritschi

Abstract:

Soybeans and their derivatives are very important agricultural commodities around the world because of their wide applicability in human food, animal feed, biofuel, and industry. However, the significance of soybean production depends on the quality of the soybean seeds rather than on yield alone. Seed composition depends strongly on plant physiological properties, aerobic and anaerobic environmental conditions, nutrient content, and plant phenological characteristics, which can be captured by high-temporal-resolution remote sensing datasets. PlanetScope (PS) satellite images have high potential for capturing sequential information on crop growth because of their frequent revisits throughout the world. In this study, we estimate soybean seed composition while the plants are still in the field by utilizing PS satellite images and different machine learning algorithms. Several experimental fields were established with varying genotypes, and different seed composition traits were measured from the samples as ground truth data. The PS images were processed to extract 462 hand-crafted vegetative and textural features. Four machine learning algorithms, i.e., partial least squares regression (PLSR), random forest regression (RFR), gradient boosting machine (GBM), and support vector machine (SVM), and two recurrent neural network architectures, i.e., long short-term memory (LSTM) and gated recurrent unit (GRU), were used in this study to predict the oil, protein, sucrose, ash, starch, and fiber content of soybean seed samples. The GRU and LSTM architectures had two separate branches, one for vegetative features and the other for texture features, which were later concatenated to predict seed composition. The results show that sucrose, ash, protein, and oil yielded comparable prediction results. The machine learning algorithms that best predicted the six seed composition traits differed.
GRU worked well for oil (R² = 0.53) and protein (R² = 0.36), whereas SVM and PLSR showed the best results for sucrose (R² = 0.74) and ash (R² = 0.60), respectively. Although RFR and GBM provided comparable performance, these models tended to overfit severely. Among the features, the vegetative features were found to be more important variables than the texture features. It is suggested to utilize many vegetation indices for machine learning training and to select the best ones using feature selection methods. Overall, the study reveals the feasibility and efficiency of PS images and machine learning for plot-level seed composition estimation. However, special care should be given to designing the plot size in the experiments to avoid mixed-pixel issues.

Keywords: agriculture, computer vision, data science, geospatial technology

Procedia PDF Downloads 106