Search results for: evolved bat algorithm
2215 A Multi-Modal Virtual Walkthrough of the Virtual Past and Present Based on Panoramic View, Crowd Simulation and Acoustic Heritage on Mobile Platform
Authors: Lim Chen Kim, Tan Kian Lam, Chan Yi Chee
Abstract:
This research presents a multi-modal simulation for reconstructing the past and constructing the present in digital cultural heritage on a mobile platform. To bring present life into the experience, the virtual environment is generated through a proposed scheme for rapid and efficient construction of a 360° panoramic view. An acoustic heritage model and a crowd model are then presented and incorporated into the 360° panoramic view. For the reconstruction of past life, the crowd is simulated and rendered in an old trading port. The keystone of this research is a virtual walkthrough that shows virtual present life in 2D and virtual past life in 3D, both within virtual heritage sites in George Town, delivered through a mobile device. Firstly, the 2D crowd is modelled and simulated using OpenGL ES 1.1 on the mobile platform. The 2D crowd portrays present life within the 360° panoramic view of a virtual heritage environment based on an extension of Newtonian laws. Secondly, the 2D crowd is animated and rendered in 3D with improved variety and incorporated into the virtual past life using the Unity3D game engine. The behaviours of the 3D models are then simulated based on an enhancement of the classical Boid algorithm. Finally, a demonstration system is developed and integrated with the models, techniques, and algorithms of this research. The virtual walkthrough is demonstrated to a group of respondents and evaluated through user-centred evaluation by navigating the demonstration system. The questionnaire results show that the presented virtual walkthrough was successfully deployed as a multi-modal simulation and that such a walkthrough would be particularly useful in virtual tour and virtual museum applications.
Keywords: Boid Algorithm, Crowd Simulation, Mobile Platform, Newtonian Laws, Virtual Heritage
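For readers unfamiliar with the classical Boid model that the abstract says is enhanced here, the Python sketch below shows the three textbook steering rules (separation, alignment, cohesion); the weights, neighbourhood radius, and time step are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def boid_step(pos, vel, radius=2.0, w_sep=1.5, w_ali=1.0, w_coh=1.0, dt=0.1):
    """One update of the classical Boid rules for N agents (N x 2 arrays)."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nb = (d > 0) & (d < radius)              # neighbours of agent i
        if not nb.any():
            continue
        sep = np.sum(pos[i] - pos[nb], axis=0)   # steer away from close agents
        ali = np.mean(vel[nb], axis=0) - vel[i]  # match neighbours' velocity
        coh = np.mean(pos[nb], axis=0) - pos[i]  # move towards neighbours' centre
        new_vel[i] += dt * (w_sep * sep + w_ali * ali + w_coh * coh)
    return pos + dt * new_vel, new_vel

# usage: 50 agents with random positions and velocities
rng = np.random.default_rng(0)
pos, vel = rng.uniform(0, 10, (50, 2)), rng.normal(0, 1, (50, 2))
pos, vel = boid_step(pos, vel)
```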
Procedia PDF Downloads 277
2214 Exploring the Intrinsic Ecology and Suitable Density of Historic Districts Through a Comparative Analysis of Ancient and Modern Ecological Smart Practices
Authors: Hu Changjuan, Gong Cong, Long Hao
Abstract:
Although urban ecological policies and the public's aspiration for livable environments have expedited the pace of ecological revitalization, historic districts that have evolved through natural ecological processes often become obsolete and less habitable amid rapid urbanization. This raises a critical question: are historic districts inherently incapable of being ecological and livable? The thriving concept of ‘intrinsic ecology’, characterized by its ability to transform city-district systems into healthy ecosystems with diverse environments, stable functions, and rapid restoration capabilities, holds potential for guiding the integration of ancient and modern ecological wisdom while supporting the dynamic involvement of cultures. This study explores the intrinsic ecology of historic districts from three aspects: 1) Population Density: by comparing the population density before urban population expansion with that of the present day, determine a reasonable population density for historic districts. 2) Building Density: using the ‘Space-mate’ tool for comparative analysis, form a spatial matrix to explore the intrinsic ecology of building density in Chinese historic districts. 3) Green Capacity Ratio: using ecological districts as control samples, conduct dual comparative analyses (related comparison and upgraded comparison) to determine the intrinsic ecological advantages of the two-dimensional and three-dimensional green volume in historic districts. The study informs a density optimization strategy that supports cultural, social, natural, and economic ecology, contributing to the creation of eco-historic districts.
Keywords: eco-historic districts, intrinsic ecology, suitable density, green capacity ratio
Procedia PDF Downloads 23
2213 Heliport Remote Safeguard System Based on Real-Time Stereovision 3D Reconstruction Algorithm
Authors: Ł. Morawiński, C. Jasiński, M. Jurkiewicz, S. Bou Habib, M. Bondyra
Abstract:
With the development of optics, electronics, and computers, vision systems are increasingly used in various areas of life, science, and industry. Vision systems have a huge number of applications: they can be used in quality control, object detection, data reading (e.g., QR codes), and more, and a large share of them are used for measurement purposes. Some make it possible to obtain a 3D reconstruction of the tested objects or measurement areas. 3D reconstruction algorithms are mostly based on creating depth maps from data acquired by active or passive methods. Due to the specific application in airfield technology, only passive methods are applicable, because other systems operating on the site can be blinded across most spectral bands. Furthermore, the reconstruction is required to work over long distances, ranging from hundreds of meters to tens of kilometers, with little loss of accuracy even under harsh conditions such as fog, rain, or snow. In response to those requirements, HRESS (Heliport REmote Safeguard System) was developed; its main part is a rotational head with a two-camera stereovision rig that gathers images around the head over 360 degrees, together with stereovision 3D reconstruction and point cloud combination. The sub-pixel analysis introduced in the HRESS system makes it possible to obtain increased distance measurement resolution and an accuracy of about 3% for distances over one kilometer. Ultimately, this leads to more accurate and reliable measurement data in the form of a point cloud. Moreover, the processing algorithm introduces operations enabling the filtering of erroneously collected data in the point cloud. All activities on the programming, mechanical, and optical sides are aimed at obtaining the most accurate 3D reconstruction of the environment in the measurement area.
Keywords: airfield monitoring, artificial intelligence, stereovision, 3D reconstruction
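As background to the long-range stereovision described above, the sketch below shows the standard rectified-stereo triangulation relation Z = f·B/d and how disparity resolution propagates into range error at kilometre distances; the focal length, baseline, and disparity errors are hypothetical values, not those of the HRESS rig.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard rectified-stereo relation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def range_error(distance_m, focal_px, baseline_m, disparity_error_px):
    """First-order range error: dZ = Z^2 / (f * B) * dd."""
    return distance_m ** 2 / (focal_px * baseline_m) * disparity_error_px

# hypothetical rig: 8000 px focal length, 1.0 m baseline
f_px, B = 8000.0, 1.0
for dd in (1.0, 0.1):          # whole-pixel vs. sub-pixel disparity resolution
    err = range_error(1000.0, f_px, B, dd)
    print(f"disparity error {dd} px -> ~{err:.0f} m error at 1 km "
          f"({100 * err / 1000.0:.1f} %)")
```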
Procedia PDF Downloads 124
2212 Local Directional Encoded Derivative Binary Pattern Based Coral Image Classification Using Weighted Distance Gray Wolf Optimization Algorithm
Authors: Annalakshmi G., Sakthivel Murugan S.
Abstract:
This paper presents a local directional encoded derivative binary pattern (LDEDBP) feature extraction method that can be applied to the classification of submarine coral reef images. The classification of coral reef images using texture features is difficult due to the dissimilarities among class samples. In coral reef image classification, texture features are extracted using the proposed LDEDBP method. The proposed approach extracts the complete structural arrangement of the local region using the local binary pattern (LBP) and also extracts edge information using the local directional pattern (LDP) from the edge responses of a particular region, thereby achieving an extra discriminative feature value. Typically, the LDP extracts edge details in all eight directions. Integrating the edge responses with the local binary pattern yields a more robust texture descriptor than other descriptors used in texture feature extraction methods. Finally, the proposed technique is applied to an extreme learning machine (ELM) with a meta-heuristic algorithm known as the weighted distance grey wolf optimizer (GWO) to optimize the input weights and biases of the single-hidden-layer feed-forward neural network (SLFN). In the empirical results, ELM-WDGWO demonstrated better performance in terms of accuracy on all coral datasets, namely RSMAS, EILAT, EILAT2, and MLC, compared with other state-of-the-art algorithms. The proposed method achieves the highest overall classification accuracy of 94% compared to the other state-of-the-art methods.
Keywords: feature extraction, local directional pattern, ELM classifier, GWO optimization
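A short sketch of the basic 3×3 local binary pattern (LBP) operator on which the proposed LDEDBP descriptor builds is given below; this is only the textbook LBP, not the directional derivative encoding introduced in the paper.

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 LBP: threshold the 8 neighbours against the centre pixel."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=int)
    # neighbour offsets in clockwise order, each paired with its bit weight
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neighbour >= centre).astype(int) << bit
    return out

# usage: the texture descriptor is the histogram of LBP codes
codes = lbp_image(np.random.randint(0, 256, (64, 64)))
hist, _ = np.histogram(codes, bins=256, range=(0, 256), density=True)
```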
Procedia PDF Downloads 163
2211 Local Availability Influences Choice of Radical Treatment for Prostate Cancer
Authors: Jemini Vyas, Oluwatobi Adeyoe, Jenny Branagan, Chandran Tanabalan, Aakash Pai
Abstract:
Introduction: Radical prostatectomy and radiotherapy are both viable options for the treatment of localised prostate cancer. Over the years medicine has evolved towards a patient-centred approach. Patient decision-making is not motivated by clinical outcomes alone. Geographical location and ease of access to treating clinician are contributory factors. With the development of robotic surgery, prostatectomy has been centralised into tertiary centres. This has impacted on the distances that patients and their families are expected to travel. Methods: A single centre retrospective study was undertaken over a five-year period. All patients with localised prostate cancer, undergoing radical radiotherapy or prostatectomy were collected pre-centralisation. This was compared to the total number undergoing these treatments post centralisation. Results: Pre-centralisation, both radiotherapy and prostatectomy groups had to travel a median of less than five miles for treatment. Post-centralisation of pelvic surgery, prostatectomy patients had to travel a median of more than 40 miles, whilst travel distance for the radiotherapy group was unchanged. In the post centralisation cohort, there was a 63% decline in the number of patients undergoing radical prostatectomy per month from a mean of 5.1 to 1.9. The radical radiotherapy group had a concurrent 41% increase in patient numbers with a mean increase from 13.3 to 18.8 patients per month. Conclusion: Choice of radical treatment in localised prostate cancer is based on multiple factors. This study infers that local availability can influence choice of radical treatment. It is imperative that efforts are made to maintain accessibility to all viable options for prostate cancer patients, so that patient choice is not compromised.Keywords: prostate, prostatectomy, radiotherapy, centralisation
Procedia PDF Downloads 96
2210 The Design and Implementation of an Enhanced 2D Mesh Switch
Authors: Manel Langar, Riad Bourguiba, Jaouhar Mouine
Abstract:
In this paper, we propose the design and implementation of an enhanced wormhole virtual-channel on-chip router. It is the heart of a mesh NoC using the XY deterministic routing algorithm. It is characterized by a simple virtual channel allocation strategy that reduces the area and the complexity of connections without affecting performance. We implemented our router on a Tezzaron process to validate its performance. This router is a basic element that will later be used to design a 3D mesh NoC.
Keywords: NoC, mesh, router, 3D NoC
Procedia PDF Downloads 568
2209 Investigating the Algorithm to Maintain a Constant Speed in the Wankel Engine
Authors: Adam Majczak, Michał Bialy, Zbigniew Czyż, Zdzislaw Kaminski
Abstract:
Increasingly stringent emission standards for passenger cars require us to find alternative drives. The share of electric vehicles in the sale of new cars increases every year. However, their performance and, above all, range cannot be today successfully compared to those of cars with a traditional internal combustion engine. Battery recharging lasts hours, which can be hardly accepted due to the time needed to refill a fuel tank. Therefore, the ways to reduce the adverse features of cars equipped with electric motors only are searched for. One of the methods is a combination of an electric engine as a main source of power and a small internal combustion engine as an electricity generator. This type of drive enables an electric vehicle to achieve a radically increased range and low emissions of toxic substances. For several years, the leading automotive manufacturers like the Mazda and the Audi together with the best companies in the automotive industry, e.g., AVL have developed some electric drive systems capable of recharging themselves while driving, known as a range extender. An electricity generator is powered by a Wankel engine that has seemed to pass into history. This low weight and small engine with a rotating piston and a very low vibration level turned out to be an excellent source in such applications. Its operation as an energy source for a generator almost entirely eliminates its disadvantages like high fuel consumption, high emission of toxic substances, or short lifetime typical of its traditional application. The operation of the engine at a constant rotational speed enables a significant increase in its lifetime, and its small external dimensions enable us to make compact modules to drive even small urban cars like the Audi A1 or the Mazda 2. The algorithm to maintain a constant speed was investigated on the engine dynamometer with an eddy current brake and the necessary measuring apparatus. The research object was the Aixro XR50 rotary engine with the electronic power supply developed at the Lublin University of Technology. The load torque of the engine was altered during the research by means of the eddy current brake capable of giving any number of load cycles. The parameters recorded included speed and torque as well as a position of a throttle in an inlet system. Increasing and decreasing load did not significantly change engine speed, which means that control algorithm parameters are correctly selected. This work has been financed by the Polish Ministry of Science and Higher Education.Keywords: electric vehicle, power generator, range extender, Wankel engine
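The abstract does not detail the control law itself; as a hedged illustration, the sketch below shows one conventional way such a constant-speed governor can be structured, a simple PI loop that trims the throttle position against the speed error, with purely illustrative gains and limits rather than those used on the Aixro XR50 test stand.

```python
def pi_speed_governor(target_rpm, measured_rpm, state, kp=0.002, ki=0.0005,
                      dt=0.01, throttle_limits=(0.0, 1.0)):
    """One step of a PI governor returning a throttle position in [0, 1]."""
    error = target_rpm - measured_rpm
    state["integral"] += error * dt               # accumulate the speed error
    throttle = kp * error + ki * state["integral"]
    lo, hi = throttle_limits
    return min(max(throttle, lo), hi), state

# usage: one control step with the engine running below a 5000 rpm target
state = {"integral": 0.0}
throttle, state = pi_speed_governor(target_rpm=5000.0, measured_rpm=4850.0, state=state)
```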
Procedia PDF Downloads 157
2208 The Effect of Technology and Artificial Intelligence on Legal Securities and Privacy Issues
Authors: Kerolis Samoul Zaghloul Noaman
Abstract:
Space law is the newest entry in the basket of international law, emerging in the latter half of the 20th century. Over the last hundred and fifty years, courts and scholars developed a consensus that custom is an important source of international law. Article 38(1)(b) of the Statute of the International Court of Justice recognises international custom as a source of international law. State practices and usages have a major role to play in formulating customary international law. This paper examines those state practices which may qualify to become customary international law. Since 1979 (after the Moon Treaty), no hard law has been developed in the area of space exploration. The paper attempts to link state practices and custom in space exploration with the development of customary international law on space activities. It uses the doctrinal method of legal research to examine current questions of international law. The paper explores different international legal documents, including General Assembly resolutions, treaty principles, UN working papers, cases relating to customary international law, and the writings of jurists on space law and customary international law. It is argued that principles such as the common heritage of mankind, the non-militarised zone, sovereign equality, the nuclear-weapon-free zone, and the protection of the outer space environment, etc., evolved as state practices among the international community and are qualified to become customary international law.
Keywords: social networks privacy issues, social networks security issues, social networks privacy precautions measures, social networks security precautions measures
Procedia PDF Downloads 20
2207 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series
Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold
Abstract:
To address the global challenges of climate and environmental changes, there is a need for quantifying and reducing uncertainties in environmental data, including observations of carbon, water, and energy. Global eddy covariance flux tower networks (FLUXNET), and their regional counterparts (i.e., OzFlux, AmeriFlux, China Flux, etc.) were established in the late 1990s and early 2000s to address the demand. Despite the capability of eddy covariance in validating process modelling analyses, field surveys and remote sensing assessments, there are some serious concerns regarding the challenges associated with the technique, e.g. data gaps and uncertainties. To address these concerns, this research has developed an ensemble model to fill the data gaps of CO₂ flux to avoid the limitations of using a single algorithm, and therefore, provide less error and decline the uncertainties associated with the gap-filling process. In this study, the data of five towers in the OzFlux Network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNN) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former methods, FFNN, provided the primary estimations in the first layer, while the later, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and the XGB, while each of these two methods was used individually, overall RMSE: 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹ respectively (3.54 provided by the best FFNN). The most significant improvement happened to the estimation of the extreme diurnal values (during midday and sunrise), as well as nocturnal estimations, which is generally considered as one of the most challenging parts of CO₂ flux gap-filling. The towers, as well as seasonality, showed different levels of sensitivity to improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity compared to Calperum, where the differences between the Ensemble model on the one hand and the FFNNs and XGB, on the other hand, were the least of all 5 sites. Besides, the performance difference between the ensemble model and its components individually were more significant during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) compared to the cold season (Apr, May, Jun, Jul, Aug, and Sep) due to the higher amount of photosynthesis of plants, which led to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and robustness of the model. Therefore, using ensemble machine learning models is potentially capable of improving data estimation and regression outcome when it seems to be no more room for improvement while using a single algorithm.Keywords: carbon flux, Eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network
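A compact sketch of the two-layer structure described above follows, using scikit-learn MLPRegressor instances as first-layer feedforward networks and an XGBoost regressor as the second layer; the network layouts, hyperparameters, and synthetic data are placeholders, not the configuration tuned for the OzFlux towers.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor

def fit_ensemble(X, y, hidden_layouts=((32,), (64,), (32, 16), (64, 32), (128,))):
    """First layer: several FFNNs; second layer: XGB on their predictions."""
    ffnns = [MLPRegressor(hidden_layer_sizes=h, max_iter=500, random_state=i).fit(X, y)
             for i, h in enumerate(hidden_layouts)]
    # note: a stricter setup would use out-of-fold first-layer predictions here
    first_layer = np.column_stack([m.predict(X) for m in ffnns])
    xgb = XGBRegressor(n_estimators=200, max_depth=4).fit(first_layer, y)
    return ffnns, xgb

def predict_ensemble(ffnns, xgb, X):
    first_layer = np.column_stack([m.predict(X) for m in ffnns])
    return xgb.predict(first_layer)

# usage with synthetic data standing in for gap-free CO2 flux drivers
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 6)), rng.normal(size=500)
ffnns, xgb = fit_ensemble(X, y)
gap_filled = predict_ensemble(ffnns, xgb, X[:10])
```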
Procedia PDF Downloads 139
2206 Feature Evaluation Based on Random Subspace and Multiple-K Ensemble
Authors: Jaehong Yu, Seoung Bum Kim
Abstract:
Clustering analysis can facilitate the extraction of intrinsic patterns in a dataset and reveal its natural groupings without requiring class information. For effective clustering analysis in high dimensional datasets, unsupervised dimensionality reduction is an important task. Unsupervised dimensionality reduction can generally be achieved by feature extraction or feature selection. In many situations, feature selection methods are more appropriate than feature extraction methods because of their clear interpretation with respect to the original features. The unsupervised feature selection can be categorized as feature subset selection and feature ranking method, and we focused on unsupervised feature ranking methods which evaluate the features based on their importance scores. Recently, several unsupervised feature ranking methods were developed based on ensemble approaches to achieve their higher accuracy and stability. However, most of the ensemble-based feature ranking methods require the true number of clusters. Furthermore, these algorithms evaluate the feature importance depending on the ensemble clustering solution, and they produce undesirable evaluation results if the clustering solutions are inaccurate. To address these limitations, we proposed an ensemble-based feature ranking method with random subspace and multiple-k ensemble (FRRM). The proposed FRRM algorithm evaluates the importance of each feature with the random subspace ensemble, and all evaluation results are combined with the ensemble importance scores. Moreover, FRRM does not require the determination of the true number of clusters in advance through the use of the multiple-k ensemble idea. Experiments on various benchmark datasets were conducted to examine the properties of the proposed FRRM algorithm and to compare its performance with that of existing feature ranking methods. The experimental results demonstrated that the proposed FRRM outperformed the competitors.Keywords: clustering analysis, multiple-k ensemble, random subspace-based feature evaluation, unsupervised feature ranking
Procedia PDF Downloads 339
2205 Modified Weibull Approach for Bridge Deterioration Modelling
Authors: Niroshan K. Walgama Wellalage, Tieling Zhang, Richard Dwight
Abstract:
State-based Markov deterioration models (SMDM) sometimes fail to find accurate transition probability matrix (TPM) values, and hence lead to invalid future condition prediction or incorrect average deterioration rates mainly due to drawbacks of existing nonlinear optimization-based algorithms and/or subjective function types used for regression analysis. Furthermore, a set of separate functions for each condition state with age cannot be directly derived by using Markov model for a given bridge element group, which however is of interest to industrial partners. This paper presents a new approach for generating Homogeneous SMDM model output, namely, the Modified Weibull approach, which consists of a set of appropriate functions to describe the percentage condition prediction of bridge elements in each state. These functions are combined with Bayesian approach and Metropolis Hasting Algorithm (MHA) based Markov Chain Monte Carlo (MCMC) simulation technique for quantifying the uncertainty in model parameter estimates. In this study, factors contributing to rail bridge deterioration were identified. The inspection data for 1,000 Australian railway bridges over 15 years were reviewed and filtered accordingly based on the real operational experience. Network level deterioration model for a typical bridge element group was developed using the proposed Modified Weibull approach. The condition state predictions obtained from this method were validated using statistical hypothesis tests with a test data set. Results show that the proposed model is able to not only predict the conditions in network-level accurately but also capture the model uncertainties with given confidence interval.Keywords: bridge deterioration modelling, modified weibull approach, MCMC, metropolis-hasting algorithm, bayesian approach, Markov deterioration models
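The abstract combines a Weibull-type condition function with a Bayesian Metropolis-Hastings MCMC scheme; the sketch below shows a generic random-walk Metropolis-Hastings sampler of the kind such a scheme builds on, with a simple Weibull likelihood of ages-to-failure standing in for the actual bridge-deterioration posterior.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_samples=5000, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings over a parameter vector theta."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_samples):
        proposal = theta + rng.normal(0.0, step, size=theta.shape)
        lp_prop = log_post(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept with prob min(1, ratio)
            theta, lp = proposal, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# placeholder log-posterior: Weibull(shape k, scale lam) likelihood of ages-to-failure
ages = np.array([8.0, 12.0, 15.0, 20.0, 23.0])
def log_post(theta):
    k, lam = theta
    if k <= 0 or lam <= 0:
        return -np.inf
    return np.sum(np.log(k / lam) + (k - 1) * np.log(ages / lam) - (ages / lam) ** k)

chain = metropolis_hastings(log_post, theta0=[1.5, 15.0])
```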
Procedia PDF Downloads 727
2204 Graphical Theoretical Construction of Discrete Time Share Price Paths from Matroid
Authors: Min Wang, Sergey Utev
Abstract:
The lessons from the 2007-09 global financial crisis have driven scientific research, which considers the design of new methodologies and financial models in the global market. The quantum mechanics approach was introduced in the unpredictable stock market modeling. One famous quantum tool is Feynman path integral method, which was used to model insurance risk by Tamturk and Utev and adapted to formalize the path-dependent option pricing by Hao and Utev. The research is based on the path-dependent calculation method, which is motivated by the Feynman path integral method. The path calculation can be studied in two ways, one way is to label, and the other is computational. Labeling is a part of the representation of objects, and generating functions can provide many different ways of representing share price paths. In this paper, the recent works on graphical theoretical construction of individual share price path via matroid is presented. Firstly, a study is done on the knowledge of matroid, relationship between lattice path matroid and Tutte polynomials and ways to connect points in the lattice path matroid and Tutte polynomials is suggested. Secondly, It is found that a general binary tree can be validly constructed from a connected lattice path matroid rather than general lattice path matroid. Lastly, it is suggested that there is a way to represent share price paths via a general binary tree, and an algorithm is developed to construct share price paths from general binary trees. A relationship is also provided between lattice integer points and Tutte polynomials of a transversal matroid. Use this way of connection together with the algorithm, a share price path can be constructed from a given connected lattice path matroid.Keywords: combinatorial construction, graphical representation, matroid, path calculation, share price, Tutte polynomial
Procedia PDF Downloads 138
2203 Enhancing Traditional Saudi Designs Pattern Cutting to Integrate Them Into Current Clothing Offers
Authors: Faizah Almalki, Simeon Gill, Steve G. Hayes, Lisa Taylor
Abstract:
A core element of cultural identity is the traditional costumes that provide insight into the heritage that has been acquired over time. This heritage is apparent in the use of colour, the styles and the functions of the clothing and it also reflects the skills of those who created the items and the time taken to produce them. Modern flat pattern drafting methods for making garment patterns are simple in comparison to the relatively laborious traditional approaches that would require personal interaction with the wearer throughout the production process. The current study reflects on the main elements of the pattern cutting system and how this has evolved in Saudi Arabia to affect the design of the Sawan garment. Analysis of the traditional methods for constructing Sawan garments was undertaken through observation of the practice and the garments and consulting documented guidance. This provided a foundation through which to explore how modern technology can be applied to improve the process. In this research, modern methods are proposed for producing traditional Saudi garments more efficiently while retaining elements of the conventional style and design. The current study has documented the vital aspects of Sawan garment style. The result showed that the method had been used to take the body measurements and pattern making was elementary and offered simple geometric shape and the Sawan garment is composed of four pieces. Consequently, this research allows for classical pattern shapes to be embedded in garments now worn in Saudi Arabia and for the continuation of cultural heritage.Keywords: traditional Sawan garment technique, modern pattern cutting technique, the shape of the garment and software, Lectra Modaris
Procedia PDF Downloads 132
2202 Traits and Dilemma: Feminism and Multiple Demands in Young Chinese Female-Directed Films
Authors: Deng Qiaoshan
Abstract:
With the rise of feminism in the global film industry, feminist expressions in Chinese films have also evolved, reflecting societal focus on gender issues. This article focuses on young Chinese female directors such as Yang Lina, Teng Congcong, and Yang Mingming. Their films now present richer female perspectives and consciously incorporate unique female life experiences. They highlight women's real-life struggles, portraying ’struggling’ female identities—characters facing professional failures and desire identity issues, ultimately returning to family roles. These films commonly explore the ‘mother-daughter relationship’, with some using genre storytelling for commercial appeal and others deconstructing the ‘myth of motherhood’ to reflect reality, rewriting traditional maternal roles. The ‘struggling’ female identity in these directors' films shows an aesthetic of ‘pseudo-reality’, blending realistic situations with poetic, lyrical elements, reflecting their creative traits and internal conflicts. These contradictions are closely related to the unique creative context of Chinese cinema in which they operate. Emerging under China's strict film censorship system, film industrialization, consumerist culture, and internet environment, new-generation directors face multiple demands. How to ‘survive’ amidst complex commercial requirements while creating films with a clear feminist consciousness is the fundamental dilemma faced by young Chinese female directors.Keywords: female directors, feminism film, female dilemma, film censorship system
Procedia PDF Downloads 41
2201 Web Data Scraping Technology Using Term Frequency Inverse Document Frequency to Enhance the Big Data Quality on Sentiment Analysis
Authors: Sangita Pokhrel, Nalinda Somasiri, Rebecca Jeyavadhanam, Swathi Ganesan
Abstract:
Tourism is a booming industry with huge future potential for global wealth and employment. Countless data are generated over social media sites every day, creating numerous opportunities to bring more insights to decision-makers. The integration of big data technology into the tourism industry will allow companies to learn where their customers have been and what they like. This information can then be used by businesses, such as those managing visitor centers or hotels, and tourists can get a clear idea of places before visiting. From a technical perspective, natural language is processed by analysing the sentiment features of online reviews from tourists, and we then supply an enhanced long short-term memory (LSTM) framework for sentiment feature extraction from travel reviews. We have constructed a web review database using a crawler and web scraping techniques for experimental validation to evaluate the effectiveness of our methodology. The text of the reviews was first classified through the VADER and RoBERTa models to obtain the polarity of the reviews. In this paper, we study feature extraction methods, such as count vectorization and TF-IDF vectorization, and implement a convolutional neural network (CNN) classifier for sentiment analysis to decide whether a tourist’s attitude towards a destination is positive, negative, or simply neutral based on the review text posted online. The results demonstrate that, after pre-processing and cleaning the dataset, the CNN algorithm achieved an accuracy of 96.12% for positive and negative sentiment analysis.
Keywords: count vectorization, convolutional neural network, crawler, data technology, long short-term memory, web scraping, sentiment analysis
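To illustrate the count/TF-IDF vectorization step mentioned above, the sketch below builds a TF-IDF representation of review text and trains a simple linear classifier on it; the paper itself feeds the vectorized features to a CNN, and the tiny labelled reviews here are invented placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# invented placeholder reviews (1 = positive, 0 = negative)
reviews = ["lovely beach and friendly staff", "terrible food and rude service",
           "amazing views from the hotel", "dirty room, would not return"]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a linear classifier (the paper uses a CNN instead)
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)
print(model.predict(["friendly staff but dirty room"]))
```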
Procedia PDF Downloads 88
2200 The Professor’s Bayonet: An Educational Podcast Splicing the Literary with Social Commentary and Theology
Authors: Jason Dew
Abstract:
Podcasts are increasingly sources of intellectual content for many who desire to broaden their worldview. Topics range from sports to folklore, entertainment to spirituality. The list from which to choose is large, demonstrating the public’s interest in this medium. While traditional classrooms continue to serve the curious and upward bound, podcasts also satisfy intellectual cravings, especially for those on the go. The paper will explore how the podcast, The Professor’s Bayonet, attempts to scratch these itches by offering 4-5 minute commentaries on literary works, both classic and contemporary, through the dual lenses of current trends in society and theology. The reason for this approach is borne out of the direction many students take in exchanges of ideas. They have a sincere interest in how the books that are covered are relevant to their lives, and their questions are probing to the extent that dips into theology are helpful. Cursory examinations of whatever topic just won’t suffice. Those in Generation Z, especially, are parched for real and true answers. The paper, therefore, will share some excerpts from a selection of episodes, explaining the reasons behind why certain works were showcased. In an episode entitled “The Possibility of Evil,” for example, Shirley Jackson’s 1965 short story of the same name is explored, focusing on why the protagonist, Adela Strangeworth, leaves nasty little notes in the mailboxes of those in her small community she deems deserving of a good tongue-lashing. There is a negative result and the opportunity to make the connection to social media and how millions of individuals are guilty of the very same thing Adela Strangeworth is guilty of, making Jackson’s work somewhat prophetic. Reasons for this behavior are explored, namely what it says about how we as a society have evolved both interpersonally and spiritually.Keywords: podcast, social commentary, theology, literary
Procedia PDF Downloads 48
2199 Probabilistic Graphical Model for the Web
Authors: M. Nekri, A. Khelladi
Abstract:
The World Wide Web is a network with a complex topology whose main properties are a power-law degree distribution, a low clustering coefficient, and a small average distance. Modeling the web as a graph allows information to be located quickly and consequently helps in the construction of search engines. Here, we present a model based on existing probabilistic graphs that exhibits all of the aforesaid characteristics. This work consists of studying the web in order to understand its structure, which will enable us to model it more easily and to propose a possible algorithm for its exploration.
Keywords: clustering coefficient, preferential attachment, small world, web community
Procedia PDF Downloads 272
2198 Neural Network Based Control Algorithm for Inhabitable Spaces Applying Emotional Domotics
Authors: Sergio A. Navarro Tuch, Martin Rogelio Bustamante Bello, Leopoldo Julian Lechuga Lopez
Abstract:
In recent years, Mexico’s population has seen a rise of different physiological and mental negative states. Two main consequences of this problematic are deficient work performance and high levels of stress generating and important impact on a person’s physical, mental and emotional health. Several approaches, such as the use of audiovisual stimulus to induce emotions and modify a person’s emotional state, can be applied in an effort to decreases these negative effects. With the use of different non-invasive physiological sensors such as EEG, luminosity and face recognition we gather information of the subject’s current emotional state. In a controlled environment, a subject is shown a series of selected images from the International Affective Picture System (IAPS) in order to induce a specific set of emotions and obtain information from the sensors. The raw data obtained is statistically analyzed in order to filter only the specific groups of information that relate to a subject’s emotions and current values of the physical variables in the controlled environment such as, luminosity, RGB light color, temperature, oxygen level and noise. Finally, a neural network based control algorithm is given the data obtained in order to feedback the system and automate the modification of the environment variables and audiovisual content shown in an effort that these changes can positively alter the subject’s emotional state. During the research, it was found that the light color was directly related to the type of impact generated by the audiovisual content on the subject’s emotional state. Red illumination increased the impact of violent images and green illumination along with relaxing images decreased the subject’s levels of anxiety. Specific differences between men and women were found as to which type of images generated a greater impact in either gender. The population sample was mainly constituted by college students whose data analysis showed a decreased sensibility to violence towards humans. Despite the early stage of the control algorithm, the results obtained from the population sample give us a better insight into the possibilities of emotional domotics and the applications that can be created towards the improvement of performance in people’s lives. The objective of this research is to create a positive impact with the application of technology to everyday activities; nonetheless, an ethical problem arises since this can also be applied to control a person’s emotions and shift their decision making.Keywords: data analysis, emotional domotics, performance improvement, neural network
Procedia PDF Downloads 140
2197 Show Products or Show Endorsers: Immersive Visual Experience in Fashion Advertisements on Instagram
Authors: H. Haryati, A. Nor Azura
Abstract:
Since the turn of the century, the advertising landscape has evolved significantly from print media to digital media. In line with the shift towards the Fifth Industrial Revolution (IR5.0), in which advanced science and technology are dramatically reshaping societies, technological endeavours have increased exponentially, making user interaction with online advertising more engaging and intentionally steering it towards buying behaviour. Users have become accustomed to interactive content that responds to their actions; thus, the immersive experience has become a new form of engagement for centennials. The purpose of this paper is to investigate pleasure and arousal as the fundamental elements of consumer emotions and affective responses to marketing stimuli. A quasi-experimental procedure will be adopted in the research, involving 40 undergraduate students in Nilai, Malaysia. The study employs a 2 (celebrity endorser vs. social media influencer) x 2 (high vs. low visual complexity) factorial between-subjects design. Participants will be exposed to advertisements depicting a fashion product endorsed by a celebrity or a social media influencer, presented at high and low levels of visual complexity, while a questionnaire distributed during the lab session is used to capture their honest feedback and responses to the latest Instagram design and engagement. The research therefore aims to characterize the immersive experience on Instagram and the interaction between pleasure and arousal. An advertisement that evokes pleasure and arousal is likely to attract more attention from the target audience. This is one of the few studies comparing endorsers in Instagram advertising, and it extends existing knowledge about immersive visual complexity in the context of social media advertising.
Keywords: immersive visual experience, Instagram, pleasure, arousal
Procedia PDF Downloads 182
2196 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression
Authors: Anne M. Denton, Rahul Gomes, David W. Franzen
Abstract:
High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest. Any higher resolution is lost in this resampling. When the topographic features are computed through regression that is performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point. The number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance. Any doubling of window size in each direction only takes a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of the window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic to the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope. The relevant length scale is taken to be half of the window size of the window over which the minimum variance was achieved. The resulting process was evaluated for 1-meter DEM data and for artificial data that was constructed to have defined length scales and added noise. A comparison with ESRI ArcMap was performed and showed the potential of the proposed algorithm. The resolution of the resulting output is much higher and the slope and aspect much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within the region of the image. These benefits are gained without additional computational cost in comparison with resampling the DEM and computing the slope over 3x3 images in ESRI ArcMap for each resolution. In summary, the proposed approach extracts slope and aspect of DEMs at the lengths scales that are characteristic locally. The result is of higher resolution and less affected by noise than existing techniques.Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression
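The sketch below illustrates the core idea, fitting a plane to windows of increasing size around a point and reporting the slope from the window at which the residual variance is smallest; it is a per-point illustration only and omits the additive-statistics bookkeeping that gives the published algorithm its single logarithmic-cost pass.

```python
import numpy as np

def adaptive_slope(dem, row, col, cell_size=1.0, half_sizes=(1, 2, 4, 8)):
    """Slope at (row, col) from the window size with minimum residual variance."""
    best = None
    for h in half_sizes:
        win = dem[row - h:row + h + 1, col - h:col + h + 1]
        if win.shape != (2 * h + 1, 2 * h + 1):
            continue                         # window falls off the DEM edge
        ys, xs = np.mgrid[-h:h + 1, -h:h + 1] * cell_size
        A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(win.size)])
        coef, *_ = np.linalg.lstsq(A, win.ravel(), rcond=None)
        var = (win.ravel() - A @ coef).var()
        slope = np.hypot(coef[0], coef[1])   # gradient magnitude of the fitted plane
        if best is None or var < best[0]:
            best = (var, slope, 2 * h + 1)
    return best  # (residual variance, slope, window width in cells)

# usage on a synthetic 1 m DEM: a tilted plane plus noise
rng = np.random.default_rng(0)
y, x = np.mgrid[0:64, 0:64]
dem = 0.05 * x + 0.02 * y + rng.normal(0, 0.1, (64, 64))
print(adaptive_slope(dem, 32, 32))
```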
Procedia PDF Downloads 129
2195 Study of Efficiency of Flying Animal Using Computational Simulation
Authors: Ratih Julistina, M. Agoes Moelyadi
Abstract:
Innovation in aviation technology has evolved rapidly over time in pursuit of the most favourable utilization, usually expressed through an efficiency parameter. Nature has always been a source of inspiration, and in this sector many researchers have focused on studying the behaviour of flying animals, among them birds, to understand the fundamentals. Experimental testing has already been conducted by several researchers to calculate efficiency by placing the object in a wind tunnel. Hence, computational simulation is needed to confirm the results and give more visualization, based on the solution of the Reynolds-Averaged Navier-Stokes equations for the unsteady case in time-dependent viscous flow. Models are created by simplifying real birds as rigid bodies, namely a hawk, which has a low aspect ratio, and a swift, which has a high aspect ratio, and a multi-grid structured mesh is subsequently generated to capture and calculate the aerodynamic behaviour and characteristics. To mimic the downstroke and upstroke motion of bird flight, which produces both lift and thrust, a sinusoidal function is used. Simulations are carried out for several flapping frequencies within the upper and lower range of each bird's actual frequency, namely 1 Hz, 2.87 Hz, and 5 Hz for the hawk and 5 Hz, 8.9 Hz, and 13 Hz for the swift, to investigate how frequency affects the efficiency of aerodynamic force production, and the results are also compared across flight conditions with the morphology of each bird. The simulations show that the higher the flapping frequency used, the greater the aerodynamic coefficients obtained; the efficiency of thrust production, on the other hand, does not follow the same trend. The results are analysed using velocity and pressure contours and mesh movement to observe the behaviour.
Keywords: aerodynamic characteristics, efficiency, flapping frequency, flapping wing, unsteady simulation
Procedia PDF Downloads 245
2194 Social Networks Global Impact on Protest Movements and Human Rights Activism
Authors: Marcya Burden, Savonna Greer
Abstract:
In the wake of social unrest around the world, protest movements have been captured like never before. As protest movements have evolved, so too have their visibility and sources of coverage. Long gone are the days of print media as our only glimpse into the action surrounding a protest. Now, with social networks such as Facebook, Instagram and Snapchat, we have access to real-time video footage of protest movements and human rights activism that can reach millions of people within seconds. This research paper investigated various social media network platforms’ statistical usage data in the areas of human rights activism and protest movements, paralleling with other past forms of media coverage. This research demonstrates that social networks are extremely important to protest movements and human rights activism. With over 2.9 billion users across social media networks globally, these platforms are the heart of most recent protests and human rights activism. This research shows the paradigm shift from the Selma March of 1965 to the more recent protests of Ferguson in 2014, Ni Una Menos in 2015, and End Sars in 2018. The research findings demonstrate that today, almost anyone may use their social networks to protest movement leaders and human rights activists. From a student to an 80-year-old professor, the possibility of reaching billions of people all over the world is limitless. Findings show that 82% of the world’s internet population is on social networks 1 in every 5 minutes. Over 65% of Americans believe social media highlights important issues. Thus, there is no need to have a formalized group of people or even be known online. A person simply needs to be engaged on their respective social media networks (Facebook, Twitter, Instagram, Snapchat) regarding any cause they are passionate about. Information may be exchanged in real time around the world and a successful protest can begin.Keywords: activism, protests, human rights, networks
Procedia PDF Downloads 95
2193 Applying Multiplicative Weight Update to Skin Cancer Classifiers
Authors: Animish Jain
Abstract:
This study deals with using Multiplicative Weight Update within artificial intelligence and machine learning to create models that can diagnose skin cancer using microscopic images of cancer samples. In this study, the multiplicative weight update method is used to take the predictions of multiple models to try and acquire more accurate results. Logistic Regression, Convolutional Neural Network (CNN), and Support Vector Machine Classifier (SVMC) models are employed within the Multiplicative Weight Update system. These models are trained on pictures of skin cancer from the ISIC-Archive, to look for patterns to label unseen scans as either benign or malignant. These models are utilized in a multiplicative weight update algorithm which takes into account the precision and accuracy of each model through each successive guess to apply weights to their guess. These guesses and weights are then analyzed together to try and obtain the correct predictions. The research hypothesis for this study stated that there would be a significant difference in the accuracy of the three models and the Multiplicative Weight Update system. The SVMC model had an accuracy of 77.88%. The CNN model had an accuracy of 85.30%. The Logistic Regression model had an accuracy of 79.09%. Using Multiplicative Weight Update, the algorithm received an accuracy of 72.27%. The final conclusion that was drawn was that there was a significant difference in the accuracy of the three models and the Multiplicative Weight Update system. The conclusion was made that using a CNN model would be the best option for this problem rather than a Multiplicative Weight Update system. This is due to the possibility that Multiplicative Weight Update is not effective in a binary setting where there are only two possible classifications. In a categorical setting with multiple classes and groupings, a Multiplicative Weight Update system might become more proficient as it takes into account the strengths of multiple different models to classify images into multiple categories rather than only two categories, as shown in this study. This experimentation and computer science project can help to create better algorithms and models for the future of artificial intelligence in the medical imaging field.Keywords: artificial intelligence, machine learning, multiplicative weight update, skin cancer
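A minimal sketch of the multiplicative weight update rule applied to the outputs of several pre-trained classifiers follows; the learning rate and the penalty scheme are illustrative choices, since the abstract does not specify them.

```python
import numpy as np

def mwu_combine(predictions, truths, eta=0.5):
    """Sequentially reweight experts, penalising each wrong prediction.

    predictions: (n_models, n_samples) array of 0/1 class guesses
    truths:      (n_samples,) array of 0/1 true labels
    Returns the weighted-majority guesses and the final weights.
    """
    n_models, n_samples = predictions.shape
    weights = np.ones(n_models)
    combined = np.empty(n_samples, dtype=int)
    for t in range(n_samples):
        votes = predictions[:, t]
        combined[t] = int(np.dot(weights, votes) >= weights.sum() / 2)
        weights *= np.where(votes == truths[t], 1.0, 1.0 - eta)  # shrink wrong experts
    return combined, weights

# usage: three hypothetical model outputs (e.g. SVM, CNN, logistic regression)
preds = np.array([[1, 0, 1, 1], [1, 1, 1, 0], [0, 0, 1, 1]])
truth = np.array([1, 0, 1, 1])
guesses, w = mwu_combine(preds, truth)
```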
Procedia PDF Downloads 79
2192 Managing Pseudoangiomatous Stromal Hyperplasia Appropriately and Safely: A Retrospective Case Series Review
Authors: C. M. Williams, R. English, P. King, I. M. Brown
Abstract:
Introduction: Pseudoangiomatous Stromal Hyperplasia (PASH) is a benign fibrous proliferation of breast stroma affecting predominantly premenopausal women with no significant increased risk of breast cancer. Informal recommendations for management have continued to evolve over recent years from surgical excision to observation, although there are no specific national guidelines. This study assesses the safety of a non-surgical approach to PASH management by review of cases at a single centre. Methods: Retrospective case series review (January 2011 – August 2016) was conducted on consecutive PASH cases. Diagnostic classification (clinical, radiological and histological), management outcomes, and breast cancer incidence were recorded. Results: 43 patients were followed up for median of 25 months (3-64) with 75% symptomatic at presentation. 12% of cases (n=5) had a radiological score (BIRADS MMG or US) ≥ 4 of which 3 were confirmed malignant. One further malignancy was detected and proven radiologically occult and contralateral. No patients were diagnosed with a malignancy during follow-up. Treatment evolved from 67% surgical in 2011 to 33% in 2016. Conclusions: The management of PASH has transitioned in line with other published experience. The preliminary findings suggest this appears safe with no evidence of missed malignancies; however, longer follow up is required to confirm long-term safety. Recommendations: PASH with suspicious radiological findings ( ≥ U4/R4) warrants multidisciplinary discussion for excision. In the absence of histological or radiological suspicion of malignancy, PASH can be safely managed without surgery.Keywords: benign breast disease, conservative management, malignancy, pseudoangiomatous stromal hyperplasia, surgical excision
Procedia PDF Downloads 132
2191 Development of Wave-Dissipating Block Installation Simulation for Inexperienced Worker Training
Authors: Hao Min Chuah, Tatsuya Yamazaki, Ryosui Iwasawa, Tatsumi Suto
Abstract:
In recent years, with the advancement of digital technology, the movement to introduce so-called ICT (Information and Communication Technology), such as computer technology and network technology, to civil engineering construction sites and construction sites is accelerating. As part of this movement, attempts are being made in various situations to reproduce actual sites inside computers and use them for designing and construction planning, as well as for training inexperienced engineers. The installation of wave-dissipating blocks on coasts, etc., is a type of work that has been carried out by skilled workers based on their years of experience and is one of the tasks that is difficult for inexperienced workers to carry out on site. Wave-dissipating blocks are structures that are designed to protect coasts, beaches, and so on from erosion by reducing the energy of ocean waves. Wave-dissipating blocks usually weigh more than 1 t and are installed by being suspended by a crane, so it would be time-consuming and costly for inexperienced workers to train on-site. In this paper, therefore, a block installation simulator is developed based on Unity 3D, a game development engine. The simulator computes porosity. Porosity is defined as the ratio of the total volume of the wave breaker blocks inside the structure to the final shape of the ideal structure. Using the evaluation of porosity, the simulator can determine how well the user is able to install the blocks. The voxelization technique is used to calculate the porosity of the structure, simplifying the calculations. Other techniques, such as raycasting and box overlapping, are employed for accurate simulation. In the near future, the simulator will install an automatic block installation algorithm based on combinatorial optimization solutions and compare the user-demonstrated block installation and the appropriate installation solved by the algorithm.Keywords: 3D simulator, porosity, user interface, voxelization, wave-dissipating blocks
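A small sketch of the voxel-based fill computation described above: the placed blocks and the ideal target shape are rasterised onto the same voxel grid and the occupied fraction of the ideal shape is reported; the grid resolution and shapes are placeholder values, not those of the simulator.

```python
import numpy as np

def fill_ratio(block_occupancy, ideal_shape):
    """Fraction of the ideal structure's voxels occupied by placed blocks."""
    inside = block_occupancy & ideal_shape
    return inside.sum() / ideal_shape.sum()

# placeholder 50x50x50 voxel grid (1 voxel = 1 cell of the rasterised site)
grid_shape = (50, 50, 50)
ideal = np.zeros(grid_shape, dtype=bool)
ideal[:, :, :20] = True                      # ideal structure: a 20-voxel-high slab
blocks = np.zeros(grid_shape, dtype=bool)
blocks[10:40, 10:40, :15] = True             # user-placed blocks rasterised to voxels
print(f"occupied fraction of ideal shape: {fill_ratio(blocks, ideal):.2f}")
```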
Procedia PDF Downloads 103
2190 Analysis of an IncResU-Net Model for R-Peak Detection in ECG Signals
Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar
Abstract:
Cardiovascular Diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in the electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology. It is a non-invasive, pain free procedure that measures the heart’s electrical activity and that allows the detecting of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on ECG’s form alterations, but the human interpretation is subjective and it is contingent to error. Moreover, ECG records can be quite prolonged in time, which can further complicate visual diagnosis, and deeply retard disease detection. In this context, deep learning methods have risen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the computation of large sets of data and can provide early and precise diagnoses. Therefore, the cardiology field is one of the areas that can most benefit from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to achieve the detection of R-peaks in ECG signals. Its performance is further evaluated in ECG signals with different origins and features to test the model’s ability to generalize its outcomes. Performance of the model for detection of R-peaks for clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise, and when presented with data, it has not been trained. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences in the normal cardiac activity of their patients.Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks
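As a classical point of comparison for the deep-learning detector described above, the sketch below shows a conventional R-peak detector using band-pass filtering and prominence-based peak picking with SciPy; the filter band, refractory distance, and thresholds are generic assumptions, not values from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg, fs, band=(5.0, 15.0)):
    """Classical R-peak detection: band-pass filter, then prominence-based peak picking."""
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    min_distance = int(0.25 * fs)            # refractory period of roughly 250 ms
    peaks, _ = find_peaks(filtered, distance=min_distance,
                          prominence=0.5 * np.std(filtered))
    return peaks

# usage on a synthetic ECG-like spike train sampled at 360 Hz
fs = 360
t = np.arange(0, 10, 1 / fs)
ecg = np.abs(np.sin(2 * np.pi * 1.2 * t)) ** 63 + 0.05 * np.random.randn(t.size)
print(detect_r_peaks(ecg, fs)[:5])
```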
Procedia PDF Downloads 186
2189 Surgical Management of Distal Femur Fracture Using Locking Compression Plate: Our Experience in a Rural Tertiary Care Centre in India
Authors: Pagadaplly Girish, P. V. Manohar
Abstract:
Introduction: Management of distal femur fractures is challenging. Recently, treatment has evolved towards indirect reduction and minimally invasive techniques. Objectives: To assess the fracture union and functional outcome following open reduction and internal fixation of distal femur fractures with locking compression plate and to achieve restoration of the anatomical alignment of fracture fragments and stable internal fixation. Methodology: Patients with distal femur fracture treated by locking compression during Oct 2011 to April 2013 were assessed prospectively. Patients below 18 years and those with neuro-vascular deficits were excluded. Age, sex of the patient, type of fracture, mechanism of injury, type of implant used, operative time and postoperative complications were analysed. The Neer’s scale was used to assess the outcome of the patients. Results: The total number of patients was 30; 28 males and 2 females; mean age was 41.53 years. Road traffic accidents were the major causes of injury followed by falls. The average duration of hospital stay was 21.3 days. The overall complication rate note was 23.33%. The mean range of movement around the knee joint after 6 months of follow-up was 114.330. The average time for the radiological union was 14 weeks. Excellent to good results were noted in 26 patients (86.6%) and average to poor results were observed in 4 (13.33%) patients. Conclusions: The locking compression plate gives a rigid fixation for the fracture. It also provides a good purchase in osteoporotic bones. LCP is simple and a reliable implant appropriate for fixation of femoral fractures with promising results.Keywords: distal femur fractures, locking compression plate, Neer’s criteria, neuro-vascular deficits
Procedia PDF Downloads 2502188 Numerical Simulation of Filtration Gas Combustion: Front Propagation Velocity
Authors: Yuri Laevsky, Tatyana Nosova
Abstract:
The phenomenon of filtration gas combustion (FGC) was discovered experimentally in the early 1980s. It has a number of important applications in areas such as chemical technology, fire and explosion safety, energy-saving technologies, and oil production. From a physical point of view, FGC may be defined as the propagation of a region of gaseous exothermic reaction through a chemically inert porous medium, as the gaseous reactants seep into the region of chemical transformation. The movement of the combustion front exhibits different modes, and this investigation focuses on the low-velocity regime. The main characteristic of the process is the velocity of combustion front propagation. Computation of this characteristic encounters substantial difficulties because of the strong heterogeneity of the process. The mathematical model of FGC is formed by the energy conservation laws for the temperature of the porous medium and the temperature of the gas, and the mass conservation law for the relative concentration of the reacting component of the gas mixture. The model is homogenized using a two-temperature approach, in which the solid and gas phases are specified at each point of the continuous medium, with Newtonian heat exchange between them. The computational scheme is constructed on the principles of the mixed finite element method on a regular mesh. The approximation in time is performed by an explicit–implicit difference scheme. Special attention is given to the determination of the combustion front propagation velocity. Direct computation of the velocity as a grid derivative leads to an extremely unstable algorithm. It is worth noting that the term ‘front propagation velocity’ is meaningful only for settled motion, when certain analytical formulae linking velocity and equilibrium temperature hold. A numerical implementation of one such formula, leading to stable computation of the instantaneous front velocity, is proposed. The resulting algorithm has been applied in a subsequent numerical investigation of the FGC process, in which the dependence of the main characteristics of the process on various physical parameters was studied. In particular, the influence of the combustible gas mixture consumption on the front propagation velocity was investigated. It was also reaffirmed numerically that there is an interval of critical values of the interfacial heat transfer coefficient at which a breakdown occurs from slow to rapid combustion front propagation. Approximate boundaries of this interval were calculated for some specific parameter sets. All results obtained are in full agreement with both experimental and theoretical data, confirming the adequacy of the model and of the algorithm constructed. The availability of stable techniques for calculating the instantaneous velocity of the combustion wave makes it possible to consider a semi-Lagrangian approach to the solution of the problem.Keywords: filtration gas combustion, low-velocity regime, mixed finite element method, numerical simulation
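The abstract's key point is that a grid-derivative estimate of the front velocity is unstable and that a formula linking velocity and equilibrium temperature is used instead; that formula is not reproduced in the abstract. As an illustration of the underlying bookkeeping only, the NumPy sketch below tracks a temperature isotherm in a computed profile and fits its position over time. The isotherm value, the synthetic travelling profile and the least-squares fit are illustrative assumptions, not the authors' stabilized scheme.

```python
# Isotherm-tracking sketch for estimating front position and velocity from a
# computed solid-temperature profile. Isotherm value, profile and the fit are
# illustrative assumptions; the authors' stabilized formula is not shown here.
import numpy as np

def front_position(x, T, T_iso):
    """Rightmost point where T drops through T_iso (front moving in +x)."""
    idx = np.where((T[:-1] >= T_iso) & (T[1:] < T_iso))[0]
    if idx.size == 0:
        return None
    i = idx[-1]
    w = (T[i] - T_iso) / (T[i] - T[i + 1])      # linear interpolation weight
    return x[i] + w * (x[i + 1] - x[i])

def front_velocity(times, positions):
    """Least-squares slope of position vs. time, smoother than the raw grid
    derivative that the abstract notes is numerically unstable."""
    return np.polyfit(np.asarray(times), np.asarray(positions), 1)[0]

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 501)
    times = np.linspace(0.0, 10.0, 11)
    v_true = 0.02
    # Synthetic travelling temperature profiles standing in for the FEM output.
    profiles = [1500.0 / (1.0 + np.exp((x - 0.2 - v_true * t) / 0.01)) + 300.0
                for t in times]
    positions = [front_position(x, T, T_iso=900.0) for T in profiles]
    print(front_velocity(times, positions))     # ≈ 0.02
```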
Procedia PDF Downloads 3012187 The Location-Routing Problem with Pickup Facilities and Heterogeneous Demand: Formulation and Heuristics Approach
Authors: Mao Zhaofang, Xu Yida, Fang Kan, Fu Enyuan, Zhao Zhao
Abstract:
Nowadays, last-mile distribution plays an increasingly important role in the overall delivery chain and accounts for a large proportion of total distribution cost. Upgrading logistics networks and improving the layout of final distribution points has become one of the trends in the development of modern logistics. Because customer demand is discrete and heterogeneous in both its nature and its spatial distribution, which leads to higher delivery failure rates and lower vehicle utilization, last-mile delivery has become a time-consuming and uncertain process. As a result, courier companies have introduced a range of innovative parcel storage facilities, including pick-up points and lockers. The introduction of pick-up points and lockers has not only improved the user experience but has also helped logistics and courier companies achieve economies of scale. Against the backdrop of the COVID-19 pandemic, contactless delivery has become a new hotspot, which has also created new opportunities for the development of collection services. Therefore, a key issue for logistics companies is how to design or redesign their last-mile distribution networks to create integrated logistics and distribution networks that include pick-up points and lockers. This paper focuses on the introduction of self-pickup facilities in new logistics and distribution scenarios and on the heterogeneous demands of customers. We consider two types of demand, ordinary products and refrigerated products, together with the corresponding transportation vehicles. We take into account the constraints associated with self-pickup points and lockers and address the location-routing problem with self-pickup facilities and heterogeneous demands (LRP-PFHD). To solve this challenging problem, we propose a mixed integer linear programming (MILP) model that minimizes the total cost, which includes the facility opening cost, the variable transport cost, and the fixed transport cost. Due to the NP-hardness of the problem, we propose a hybrid adaptive large-neighbourhood search algorithm to solve LRP-PFHD. We evaluate the effectiveness and efficiency of the proposed algorithm on instances generated from benchmark instances. The results demonstrate that the hybrid adaptive large-neighbourhood search algorithm is more efficient than MILP solvers such as Gurobi for LRP-PFHD, especially for large-scale instances. In addition, we conducted a comprehensive analysis of some important parameters (e.g., facility opening cost and transportation cost) to explore their impact on the results and derived helpful managerial insights for courier companies.Keywords: city logistics, last-mile delivery, location-routing, adaptive large neighborhood search
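The abstract names a hybrid adaptive large-neighbourhood search but does not spell out its destroy/repair operators or acceptance rule, so the sketch below is only a generic ALNS skeleton on a toy routing objective. The operators, the simulated-annealing-style acceptance test and the scoring constants are illustrative assumptions, not the authors' algorithm for LRP-PFHD.

```python
# Generic ALNS skeleton: adaptive operator weights over destroy/repair pairs.
# Operators, acceptance rule and objective below are illustrative assumptions.
import math
import random

def alns(initial, cost, destroy_ops, repair_ops, iters=500, reaction=0.2):
    best = current = initial
    w_destroy = [1.0] * len(destroy_ops)
    w_repair = [1.0] * len(repair_ops)
    temp = 1.0
    for _ in range(iters):
        di = random.choices(range(len(destroy_ops)), weights=w_destroy)[0]
        ri = random.choices(range(len(repair_ops)), weights=w_repair)[0]
        candidate = repair_ops[ri](destroy_ops[di](current))
        delta = cost(candidate) - cost(current)
        if cost(candidate) < cost(best):        # new global best
            best, current, score = candidate, candidate, 5.0
        elif delta < 0 or random.random() < math.exp(-delta / temp):
            current, score = candidate, 2.0     # accepted move
        else:
            score = 0.5                         # rejected move
        # Adaptive step: reward operators in proportion to the move quality.
        w_destroy[di] = (1 - reaction) * w_destroy[di] + reaction * score
        w_repair[ri] = (1 - reaction) * w_repair[ri] + reaction * score
        temp *= 0.995
    return best

if __name__ == "__main__":
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(20)]

    def tour_len(order):                        # toy stand-in for the LRP-PFHD cost
        return sum(math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    def random_removal(order):                  # destroy: drop a few customers
        kept = list(order)
        for _ in range(3):
            kept.pop(random.randrange(len(kept)))
        return kept

    def greedy_insert(partial):                 # repair: cheapest re-insertion
        order = list(partial)
        for m in (i for i in range(len(pts)) if i not in partial):
            pos = min(range(len(order) + 1),
                      key=lambda p: tour_len(order[:p] + [m] + order[p:]))
            order.insert(pos, m)
        return order

    best = alns(list(range(20)), tour_len, [random_removal], [greedy_insert])
    print(round(tour_len(best), 3))
```

In the full LRP-PFHD setting, the cost function would additionally carry the facility opening cost and the fixed and variable transport costs of the MILP objective, and the operators would also open and close self-pickup facilities rather than only reordering customer visits.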
Procedia PDF Downloads 782186 A Tomb Structure in Pursuit of Tradition in 2oth Century Turkey and Its Story; the Tomb of Haci Hâkim Kemal Onsun and His Wife
Authors: Yavuz Arat, Ugur Tuztasi, Mehmet Uysal
Abstract:
Anatolia has hosted many civilizations and is a site where the architectural structures of many cultural layers have been interpreted. Most significantly, the Turks, who had settled in Central Asia, brought their architectural dynamics and cultural accumulation to Anatolia after the 12th century. The tomb structures first observed in Central Asia under the influence of the Islamic faith and Turkish cultural heritage blossomed under the Great Seljuk Empire, and under the Anatolian Seljuk Empire these tombs changed in both size and form, with rich and beautiful examples from Ahlat to Sivas, Kayseri and Konya. This tomb tradition, which started during the 13th century, continued during the Ottoman period with some alterations in form and evolved into the rarely observed mausoleum-type tombs. The Ottoman practice of building tombs inside mosque gardens, and the forms of these tombs, offer clues to an important burial tradition. However, this understanding was abandoned in 20th-century Turkey owing to legal regulations and public health considerations. This study investigates the vestiges of this tradition and its spatial reflections through a single example. The example is representative of a tradition that started in the 1970s; the case of building tombs inside mosque gardens is illustrated through the tomb of Hacı Kemal Onsun and his wife, located in Konya, the capital of the Anatolian Seljuks. The building process of this tomb is evaluated with regard to burial traditions and architectural stylization.Keywords: tomb, language of architectural form, Anatolian Seljuk tombs, Ottoman tombs
Procedia PDF Downloads 404