Search results for: gold mining
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1596

336 Spreading Japan's National Image through China during the Era of Mass Tourism: The Japan National Tourism Organization’s Use of Sina Weibo

Authors: Abigail Qian Zhou

Abstract:

Since China has entered an era of mass tourism, there has been a fundamental change in the way Chinese people approach and perceive the image of other countries. With the advent of the new media era, social networking sites such as Sina Weibo have become a tool for many foreign governmental organizations to spread and promote their national image. Among them, the Japan National Tourism Organization (JNTO) was one of the first foreign official tourism agencies to register with Sina Weibo and actively implement communication activities. Due to historical and political reasons, Chinese perceptions of Japan's national image have always been complicated and contradictory. However, since 2015, China has become the largest source of tourists visiting Japan. This clearly indicates that the dissemination of Japan's national image in China has been effective and offers a useful reference for promoting a positive Chinese perception of Japan and encouraging tourism to Japan. Within this context, and applying content analysis from media studies with the support of content mining software, this study analyzed how JNTO’s Sina Weibo accounts have constructed and spread Japan's national image. This study also summarized the characteristics of its content and form, and finally revealed the strategy of JNTO in building its international image. The findings of this study not only add a tourism-based perspective to traditional national image communications research, but also provide a reference for the effective international dissemination of national images in the future.

Keywords: national image, international communication, tourism, Japan, China

Procedia PDF Downloads 122
335 Relationship between the Ability of Accruals and Non-Systematic Risk of Shares for Companies Listed in Stock Exchange: Case Study, Tehran

Authors: Lina Najafian, Hamidreza Vakilifard

Abstract:

The present study focused on the relationship between the quality of accruals and non-systematic risk. The independent study variables included the ability of accruals, the information content of accruals, and the amount of discretionary accruals, considered as accruals quality measures. The dependent variable was non-systematic risk based on the Fama and French Three Factor model (FFTFM) and the capital asset pricing model (CAPM). The control variables were firm size, financial leverage, stock return, cash flow fluctuations, and book-to-market ratio. The data collection method was based on library research and document mining, including financial statements. Multiple regression analysis was used to analyze the data. The study results showed that there is a significant direct relationship between financial leverage and discretionary accruals and non-systematic risk based on FFTFM and CAPM. There is also a significant direct relationship between the ability of accruals, information content of accruals, firm size, and stock return and non-systematic risk based on both models. It was also found that there is no relationship between book-to-market ratio and cash flow fluctuations and non-systematic risk.

Keywords: accruals quality, non-systematic risk, CAPM, FFTFM

Procedia PDF Downloads 158
334 Characterization of Particle Charge from Aerosol Generation Process: Impact on Infrared Signatures and Material Reactivity

Authors: Erin M. Durke, Monica L. McEntee, Meilu He, Suresh Dhaniyala

Abstract:

Aerosols are one of the most important and significant surfaces in the atmosphere. They can influence weather, absorption, and reflection of light, and reactivity of atmospheric constituents. A notable feature of aerosol particles is the presence of a surface charge, a characteristic imparted via the aerosolization process. The existence of charge can complicate the interrogation of aerosol particles, so many researchers remove or neutralize aerosol particles before characterization. However, the charge is present in real-world samples, and likely has an effect on the physical and chemical properties of an aerosolized material. In our studies, we aerosolized different materials in an attempt to characterize the charge imparted via the aerosolization process and determine what impact it has on the aerosolized materials’ properties. The metal oxides, TiO₂ and SiO₂, were aerosolized expulsively and then characterized, using several different techniques, in an effort to determine the surface charge imparted upon the particles via the aerosolization process. Particle charge distribution measurements were conducted via the employment of a custom scanning mobility particle sizer. The results of the charge distribution measurements indicated that expulsive generation of 0.2 µm SiO₂ particles produced aerosols with upwards of 30+ charges on the surface of the particle. Determination of the degree of surface charging led to the use of non-traditional techniques to explore the impact of additional surface charge on the overall reactivity of the metal oxides, specifically TiO₂. TiO₂ was aerosolized, again expulsively, onto a gold-coated tungsten mesh, which was then evaluated with transmission infrared spectroscopy in an ultra-high vacuum environment. The TiO₂ aerosols were exposed to O₂, H₂, and CO, respectively. Exposure to O₂ resulted in a decrease in the overall baseline of the aerosol spectrum, suggesting O₂ removed some of the surface charge imparted during aerosolization. Upon exposure to H₂, there was no observable rise in the baseline of the IR spectrum, as is typically seen for TiO₂, due to the population of electrons into the shallow trapped states and subsequent promotion of the electrons into the conduction band. This result suggests that the additional charge imparted via aerosolization fills the trapped states, therefore no rise is seen upon exposure to H₂. Dosing the TiO₂ aerosols with CO showed no adsorption of CO on the surface, even at lower temperatures (~100 K), indicating the additional charge on the aerosol surface prevents the CO molecules from adsorbing to the TiO₂ surface. The results observed during exposure suggest that the additional charge imparted via aerosolization impacts the interaction with each probe gas.

Keywords: aerosols, charge, reactivity, infrared

Procedia PDF Downloads 120
333 Ecotourism Sites in Central Visayas, Philippines: A Green Business Profile

Authors: Ivy Jumao-As, Randy Lupango, Clifford Villaflores, Marites Khanser

Abstract:

Alongside inadequate implementation of ecotourism standards and other pressing issues in sustainable development is the lack of business plans and formal business structures at various ecotourism sites in Central Visayas, Philippines, and other parts of the country. Addressing these issues plays a key role in boosting ecotourism, a sustainability tool for the country’s economic development. A three-phase research project was designed to investigate the green business practices of selected ecotourism sites in the region in order to propose a business model for ecotourism destinations within the region and beyond. This paper reports the initial phase of the study, which profiled the sites and their operators at the following selected destinations: Cebu City Protected Landscape and Olango Island Wildlife Bird Sanctuary in Cebu, and Rajah Sikatuna Protected Landscape in Bohol. Interviews, self-administered questionnaires with key informants, and data mining were employed in the data collection. Findings highlighted similarities and differences in terms of ecotourism products, type and number of visitors, manpower composition, cultural and natural resources, complementary services and products, awards and accreditation, and peak and off-peak seasons, among others. Recommendations based on common issues initially identified in this study are also highlighted.

Keywords: ecotourism, ecotourism sites, green business, sustainability

Procedia PDF Downloads 263
332 The Role of Dynamic Ankle Foot Orthosis on Temporo-Spatial Parameters of Gait and Balance in Patients with Hereditary Spastic Paraparesis: Six-Months Follow Up

Authors: Suat Erel, Gozde Gur

Abstract:

Background: Recently, a supramalleolar type of dynamic ankle foot orthosis (DAFO) has been increasingly used to support all of the dynamic arches of the foot and to redistribute the pressure under the plantar surface of the foot in order to reduce muscle tone. DAFO helps to maintain balance and postural control by providing stability and proprioceptive feedback in children with conditions such as cerebral palsy, muscular dystrophies, Down syndrome, and congenital hypotonia. Aim: The aim of this study was to investigate the role of a dynamic ankle foot orthosis (DAFO) on temporo-spatial parameters of gait and balance in three children with hereditary spastic paraparesis (HSP). Material and Method: Three children with HSP, aged 13, 14, and 8 years, were included in the study. To correct weight bearing and to improve gait, DAFOs were made. Lower extremity spasticity (including the gastrocnemius, hamstring and hip adductor muscles) was evaluated using the modified Ashworth Scale (MAS) (0-5), together with the temporo-spatial gait parameters (walking speed, cadence, base of support, step length) and the Timed Up & Go test (TUG). All gait assessments were compared between the with-DAFO (DAFO and shoes) and without-DAFO (shoes only) conditions. After a six-month follow-up period, the assessments were repeated by the same physical therapist. Results: MAS scores for the lower extremity were 2-3 for the first child, 0-2 for the second child and 1-2 for the third child. TUG scores (sec) decreased from 20.2 to 18 for case one, from 9.4 to 9 for case two and from 12.4 to 12 for case three in the shoes-only condition, and from 15.2 to 14 for case one, from 7.2 to 7.1 for case two and from 10 to 7.3 for case three in the DAFO-and-shoes condition. Gait speed (m/sec) while wearing shoes only was similar, but while wearing DAFO and shoes it increased from 0.4 to 0.5 for case one, from 1.5 to 1.6 for case two and from 1.0 to 1.2 for case three. Base of support (cm) while wearing shoes only decreased from 18.5 to 14 for case one and from 13 to 12 for case three, and remained similar at 11 for case two. While wearing DAFO and shoes, base of support decreased from 10 to 9 for case one and from 11.5 to 10 for case three, and remained similar at 8 for case two. Conclusion: The use of a DAFO in patients with HSP normalized the temporo-spatial gait parameters and improved balance. Walking speed is a gold standard for evaluating gait quality, and with the use of the DAFO, walking speed increased in these three children with HSP. Better TUG scores with the DAFO show that functional ambulation improved. The reduction in base of support and more symmetrical step lengths with the DAFO indicate better balance. These encouraging results warrant further study in wider series.

Keywords: dynamic ankle foot orthosis, gait, hereditary spastic paraparesis, balance in patient

Procedia PDF Downloads 351
331 Nanoliposomes in Photothermal Therapy: Advancements and Applications

Authors: Mehrnaz Mostafavi

Abstract:

Nanoliposomes, minute lipid-based vesicles at the nano-scale, show promise in the realm of photothermal therapy (PTT). This study presents an extensive overview of nanoliposomes in PTT, exploring their distinct attributes and the significant progress in this therapeutic methodology. The research delves into the fundamental traits of nanoliposomes, emphasizing their adaptability, compatibility with biological systems, and their capacity to encapsulate diverse therapeutic substances. Specifically, it examines the integration of light-absorbing materials, like gold nanoparticles or organic dyes, into nanoliposomal formulations, enabling their efficacy as proficient agents for photothermal treatment. Additionally, this paper elucidates the mechanisms involved in nanoliposome-mediated PTT, highlighting their capability to convert light energy into localized heat, facilitating the precise targeting of diseased cells or tissues. This precise regulation of light absorption and heat generation by nanoliposomes presents a non-invasive and precisely focused therapeutic approach, particularly in conditions like cancer. The study explores advancements in nanoliposomal formulations aimed at optimizing PTT outcomes. These advancements include strategies for improved stability, enhanced drug loading, and the targeted delivery of therapeutic agents to specific cells or tissues. Furthermore, the paper discusses multifunctional nanoliposomal systems, integrating imaging components or targeting elements for real-time monitoring and improved accuracy in PTT. Moreover, the review highlights recent preclinical and clinical trials showcasing the effectiveness and safety of nanoliposome-based PTT across various disease models. It also addresses challenges in clinical implementation, such as scalability, regulatory considerations, and long-term safety assessments. In conclusion, this paper underscores the substantial potential of nanoliposomes in advancing PTT as a promising therapeutic approach. Their distinctive characteristics, combined with their precise ability to convert light into heat, offer a tailored and efficient method for treating targeted diseases. The encouraging outcomes from preclinical studies pave the way for further exploration and potential clinical applications of nanoliposome-based PTT.

Keywords: nanoliposomes, photothermal therapy, light absorption, heat conversion, therapeutic agents, targeted delivery, cancer therapy

Procedia PDF Downloads 100
330 An Investigation of Sentiment and Themes from Twitter for Brexit in 2016

Authors: Anas Alsuhaibani

Abstract:

Observing debate and discussion over social media has been found to be a promising way to investigate different types of opinion. On 23 June 2016, voters in the UK decided to depart from the EU, with 51.9% voting to leave. On Twitter, there was a massive debate in this context, and the hashtag Brexit was ranked sixth among the most tweeted hashtags across the globe in 2016. The study aimed to investigate the sentiment and themes expressed in a sample of tweets during a political event (Brexit) in 2016. A sentiment and thematic analysis was conducted on 1304 randomly selected tweets tagged with the hashtag Brexit on Twitter for the period from 10 June 2016 to 7 July 2016. The data were coded manually into two code frames, sentiment and thematic, and the reliability of coding was assessed for both codes. The sentiment analysis of the selected sample found that 45.63% of tweets conveyed negative emotions while only 10.43% conveyed positive emotions. Surprisingly, 29.37% were factual tweets, where the tweeter expressed no sentiment and the tweet conveyed a fact. For the thematic analysis, the economic theme dominated at 23.41%, and almost half of its discussion was related to business within the UK and to the UK and global stock markets. The study found that the current UK government and the campaign were the most negative themes. Both sentiment and thematic analyses found that tweets with more than one opinion or theme were rare, at 8.29% and 6.13%, respectively.
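A minimal sketch (not from the paper) of how the inter-coder reliability of such manual code frames could be checked with Cohen's kappa; the two coders' labels below are invented placeholders:

```python
# Hypothetical illustration: inter-coder reliability for a sentiment code frame.
# The label lists are invented for the example, not taken from the study.
from sklearn.metrics import cohen_kappa_score

coder_a = ["negative", "negative", "factual", "positive", "negative", "factual"]
coder_b = ["negative", "factual", "factual", "positive", "negative", "factual"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa between coders: {kappa:.2f}")
```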

Keywords: Brexit, political opinion mining, social media, twitter

Procedia PDF Downloads 208
329 Charting Sentiments with Naive Bayes and Logistic Regression

Authors: Jummalla Aashrith, N. L. Shiva Sai, K. Bhavya Sri

Abstract:

The swift progress of web technology has not only amassed a vast reservoir of internet data but also triggered a substantial surge in data generation. The internet has metamorphosed into one of the dynamic hubs for online education, idea dissemination, as well as opinion-sharing. Notably, the widely utilized social networking platform Twitter is experiencing considerable expansion, providing users with the ability to share viewpoints, participate in discussions spanning diverse communities, and broadcast messages on a global scale. The upswing in online engagement has sparked a significant curiosity in subjective analysis, particularly when it comes to Twitter data. This research is committed to delving into sentiment analysis, focusing specifically on the realm of Twitter. It aims to offer valuable insights into deciphering information within tweets, where opinions manifest in a highly unstructured and diverse manner, spanning a spectrum from positivity to negativity, occasionally punctuated by neutrality expressions. Within this document, we offer a comprehensive exploration and comparative assessment of modern approaches to opinion mining. Employing a range of machine learning algorithms such as Naive Bayes and Logistic Regression, our investigation plunges into the domain of Twitter data streams. We delve into overarching challenges and applications inherent in the realm of subjectivity analysis over Twitter.
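A minimal sketch of the kind of comparison described above, using scikit-learn's Multinomial Naive Bayes and Logistic Regression on a TF-IDF representation of tweets; the tiny corpus and labels are placeholders, not the authors' data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Placeholder tweets and sentiment labels (0 = negative, 1 = positive).
tweets = ["love this product", "worst service ever", "absolutely fantastic day",
          "this is terrible", "really happy with the results", "so disappointed"]
labels = [1, 0, 1, 0, 1, 0]

for name, clf in [("Naive Bayes", MultinomialNB()),
                  ("Logistic Regression", LogisticRegression(max_iter=1000))]:
    model = make_pipeline(TfidfVectorizer(), clf)     # vectorize, then classify
    scores = cross_val_score(model, tweets, labels, cv=3)
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```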

Keywords: machine learning, sentiment analysis, visualisation, python

Procedia PDF Downloads 47
328 Ribotaxa: Combined Approaches for Taxonomic Resolution Down to the Species Level from Metagenomics Data Revealing Novelties

Authors: Oshma Chakoory, Sophie Comtet-Marre, Pierre Peyret

Abstract:

Metagenomic classifiers are widely used for the taxonomic profiling of metagenomic data and estimation of taxa relative abundance. Small subunit rRNA genes are nowadays a gold standard for the phylogenetic resolution of complex microbial communities, although the full power of this marker is only realized when it is used at full length. We benchmarked the performance and accuracy of rRNA-specialized versus general-purpose read mappers, reference-targeted assemblers and taxonomic classifiers. We then built a pipeline called RiboTaxa to generate a highly sensitive and specific metataxonomic approach. Using metagenomics data, RiboTaxa gave the best results compared to other tools (Kraken2, Centrifuge (1), METAXA2 (2), PhyloFlash (3)), with precise taxonomic identification and relative abundance description and no false positive detections. Using real datasets from various environments (ocean, soil, human gut) and from different approaches (metagenomics and gene capture by hybridization), RiboTaxa revealed microbial novelties not seen by current bioinformatics analyses, opening new biological perspectives in human and environmental health. In a study focused on corals’ health involving 20 metagenomic samples (4), the affiliation of prokaryotes was limited to the family level, with Endozoicomonadaceae characterising healthy octocoral tissue. RiboTaxa highlighted 2 species of uncultured Endozoicomonas which were dominant in the healthy tissue. Both species belonged to a genus not yet described, opening new research perspectives on corals’ health. Applied to metagenomics data from a study on the human gut and extreme longevity (5), RiboTaxa detected the presence of an uncultured archaeon in semi-supercentenarians (aged 105 to 109 years), highlighting an archaeal genus not yet described, and 3 uncultured species belonging to the Enorma genus that could be species of interest participating in the longevity process. RiboTaxa is user-friendly and rapid, allows description of microbiota structure from any environment, and its results can be easily interpreted. This software is freely available at https://github.com/oschakoory/RiboTaxa under the GNU Affero General Public License 3.0.

Keywords: metagenomics profiling, microbial diversity, SSU rRNA genes, full-length phylogenetic marker

Procedia PDF Downloads 113
327 Parkinson’s Disease Detection Analysis through Machine Learning Approaches

Authors: Muhtasim Shafi Kader, Fizar Ahmed, Annesha Acharjee

Abstract:

Machine learning and data mining are crucial in health care, as well as in medical information and detection. Machine learning approaches are now being utilized to improve awareness of a variety of critical health issues, including diabetes detection, neuron cell tumor diagnosis, COVID-19 identification, and so on. Parkinson’s disease mainly affects senior citizens in Bangladesh. Its indications are typically progressive and get worse with time; as the condition advances, patients have trouble walking and communicating. Patients can also have psychological and social changes, sleep problems, hopelessness, memory loss, and weariness. Parkinson's disease can happen in both men and women, though the proportion of affected women is around half that of men. In this research, we aim to identify the most accurate ML algorithm for detecting the disease from a given dataset using models built with the following machine learning classifiers. Nine ML classifiers are used in this comparative study: Naive Bayes, Adaptive Boosting, Bagging Classifier, Decision Tree Classifier, Random Forest Classifier, XGB Classifier, K Nearest Neighbor Classifier, Support Vector Machine Classifier, and Gradient Boosting Classifier.
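A sketch of how such a multi-classifier comparison could be run with scikit-learn and cross-validation; the feature matrix is synthetic, and the XGB classifier is represented here by scikit-learn's gradient boosting so that the snippet stays self-contained:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier, GradientBoostingClassifier)
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for a clinical/voice feature dataset (not the study's data).
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

classifiers = {
    "Naive Bayes": GaussianNB(),
    "Adaptive Boosting": AdaBoostClassifier(),
    "Bagging": BaggingClassifier(),
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(),
    "Gradient Boosting": GradientBoostingClassifier(),  # XGBoost itself would need the xgboost package
    "K Nearest Neighbors": KNeighborsClassifier(),
    "Support Vector Machine": SVC(),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)            # 5-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```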

Keywords: naive bayes, adaptive boosting, bagging classifier, decision tree classifier, random forest classifier, XGB classifier, k nearest neighbor classifier, support vector classifier, gradient boosting classifier

Procedia PDF Downloads 124
326 Weighted-Distance Sliding Windows and Cooccurrence Graphs for Supporting Entity-Relationship Discovery in Unstructured Text

Authors: Paolo Fantozzi, Luigi Laura, Umberto Nanni

Abstract:

The problem of entity relation discovery in unstructured data, a well-covered topic in the literature, consists in searching within unstructured sources (typically, text) in order to find connections among entities. These can be a whole dictionary, or a specific collection of named items. In many cases machine learning and/or text mining techniques are used for this goal. These approaches might be unfeasible in computationally challenging problems, such as processing massive data streams. A faster approach consists in collecting the cooccurrences of any two words (entities) in order to create a graph of relations - a cooccurrence graph. Indeed, each cooccurrence highlights some degree of semantic correlation between the words, because related words are more likely to appear close to each other than at opposite ends of the text. Some authors have used sliding windows for this problem: they count all the cooccurrences within a sliding window running over the whole text. In this paper we generalise this technique into a Weighted-Distance Sliding Window, where each cooccurrence of two named items within the window is counted with a weight depending on the distance between the items: a closer distance implies stronger evidence of a relationship. We develop an experiment in order to support this intuition, by applying this technique to a data set consisting of the text of the Bible, split into verses.
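A minimal sketch of the weighted-distance sliding window idea, assuming a simple inverse-distance weight (the actual weighting function and window size used by the authors are not specified here):

```python
from collections import defaultdict

def weighted_cooccurrence(tokens, window=5):
    """Accumulate cooccurrence weights: every pair of tokens appearing within
    `window` positions contributes a weight that decays with their distance."""
    graph = defaultdict(float)
    for i, w1 in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            w2 = tokens[j]
            if w1 == w2:
                continue
            weight = 1.0 / (j - i)                 # assumed inverse-distance weighting
            graph[tuple(sorted((w1, w2)))] += weight
    return graph

verse = "in the beginning god created the heaven and the earth".split()
graph = weighted_cooccurrence(verse)
for pair, weight in sorted(graph.items(), key=lambda kv: -kv[1])[:5]:
    print(pair, round(weight, 2))
```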

Keywords: cooccurrence graph, entity relation graph, unstructured text, weighted distance

Procedia PDF Downloads 146
325 A Dynamic Solution Approach for Heart Disease Prediction

Authors: Walid Moudani

Abstract:

The healthcare environment is generally perceived as being information rich yet knowledge poor, and there is a lack of effective analysis tools to discover hidden relationships and trends in data. In fact, valuable knowledge can be discovered from the application of data mining techniques in the healthcare system. In this study, a proficient methodology for the extraction of significant patterns from coronary heart disease data warehouses for heart attack prediction, which unfortunately continues to be a leading cause of mortality worldwide, has been presented. For this purpose, we propose to enumerate dynamically the optimal subsets of the reduced features of high interest by using the rough sets technique combined with dynamic programming. We then propose to validate the classification using a Random Forest (RF) decision tree to identify the risky heart disease cases. This work is based on a large amount of data collected from several clinical institutions, based on the medical profiles of patients. Moreover, the experts’ knowledge in this field has been taken into consideration in order to define the disease, its risk factors, and to establish significant knowledge relationships among the medical factors. A computer-aided system is developed for this purpose based on a population of 525 adults. The performance of the proposed model is analyzed and evaluated based on a set of benchmark techniques applied to this classification problem.
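The rough-sets-with-dynamic-programming enumeration is specific to the paper; as a simplified stand-in, the sketch below exhaustively scores small feature subsets with a cross-validated Random Forest on synthetic data, which conveys the idea of searching for an optimal reduced feature set:

```python
from itertools import combinations
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for clinical risk-factor data (not the 525-patient cohort).
X, y = make_classification(n_samples=300, n_features=8, n_informative=3, random_state=0)

best_score, best_subset = 0.0, None
for k in (2, 3):                                     # search small feature subsets
    for subset in combinations(range(X.shape[1]), k):
        rf = RandomForestClassifier(n_estimators=50, random_state=0)
        score = cross_val_score(rf, X[:, list(subset)], y, cv=5).mean()
        if score > best_score:
            best_score, best_subset = score, subset

print(f"Best feature subset {best_subset} with cross-validated accuracy {best_score:.3f}")
```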

Keywords: multi-classifier decisions tree, features reduction, dynamic programming, rough sets

Procedia PDF Downloads 406
324 Closed Mitral Valvotomy: A Safe and Promising Procedure

Authors: Sushil Kumar Singh, Kumar Rahul, Vivek Tewarson, Sarvesh Kumar, Shobhit Kumar

Abstract:

Objective: Rheumatic mitral stenosis continues to be a major public health problem in developing countries. When the left atrium (LA) is unable to fill the left ventricle (LV) at normal LA pressures due to impaired relaxation and impaired compliance, diastolic dysfunction occurs. The assessment of left ventricular (LV) diastolic function and filling pressures is of clinical importance to identify underlying cardiac disease, its treatment, and to assess prognosis. 2D echocardiography can detect diastolic dysfunction with excellent sensitivity and minimal risk when compared to the gold standard of invasive pressure-volume measurements. Material and Method: This was a one-year study consisting of twenty-nine patients with isolated severe rheumatic mitral stenosis. Data were analyzed preoperatively and postoperatively (at one-month follow-up). The transthoracic 2D echocardiographic parameters of diastolic function are transmitral flow, pulmonary venous flow, mitral annular tissue doppler, and color M-mode doppler. In our study, mitral valve orifice area, ejection fraction, deceleration time, E/A-wave, E/E’-wave, myocardial performance index of the left ventricle (Tei index), and mitral inflow propagation velocity were included for echocardiographic evaluation. The statistical analysis was performed on SPSS Version 15.0 statistical analysis software. Result: Twenty-nine patients underwent successful closed mitral commissurotomy for isolated mitral stenosis. The outcome measures were observed pre-operatively and at one-month follow-up. The majority of patients were in NYHA grade III (69.0%) in the preoperative period, which improved to NYHA grade I (48.3%) after closed mitral commissurotomy. Post-surgery, mitral valve area increased from 0.77 ± 0.13 to 2.32 ± 0.26 cm², and ejection fraction increased from 61.38 ± 4.61 to 64.79 ± 3.22. There was a decrease in deceleration time from 231.55 ± 49.31 to 168.28 ± 14.30 ms, E/A ratio from 1.70 ± 0.54 to 0.89 ± 0.39, and E/E’ ratio from 14.59 ± 3.34 to 8.86 ± 3.03. In addition, there was improvement in the Tei index from 0.50 ± 0.03 to 0.39 ± 0.06 and mitral inflow propagation velocity from 47.28 ± 3.71 to 57.86 ± 3.19 cm/sec. In the peri-operative and follow-up periods, there was no incidence of severe mitral regurgitation (MR). There was no thromboembolic incident and no mortality.

Keywords: closed mitral valvotomy, mitral stenosis, open mitral commissurotomy, balloon mitral valvotomy

Procedia PDF Downloads 83
323 Multi-Level Air Quality Classification in China Using Information Gain and Support Vector Machine

Authors: Bingchun Liu, Pei-Chann Chang, Natasha Huang, Dun Li

Abstract:

Machine Learning and Data Mining are two important tools for extracting useful information and knowledge from large datasets. In machine learning, classification is a widely used technique to predict qualitative variables and is generally preferred over regression from an operational point of view. Due to the enormous increase in air pollution in various countries, especially China, Air Quality Classification has become one of the most important topics in air quality research and modelling. This study aims at introducing a hybrid classification model based on information theory and Support Vector Machine (SVM) using the air quality data of four cities in China, namely Beijing, Guangzhou, Shanghai and Tianjin, from Jan 1, 2014 to April 30, 2016. China's Ministry of Environmental Protection has classified the daily air quality into 6 levels, namely Serious Pollution, Severe Pollution, Moderate Pollution, Light Pollution, Good and Excellent, based on their respective Air Quality Index (AQI) values. Using information theory, information gain (IG) is calculated and feature selection is done for both categorical features and continuous numeric features. Then the SVM machine learning algorithm is implemented on the selected features with cross-validation. The final evaluation reveals that the IG and SVM hybrid model performs better than SVM (alone), Artificial Neural Network (ANN) and K-Nearest Neighbours (KNN) models in terms of accuracy as well as complexity.
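A compact sketch of the information-gain-plus-SVM pipeline described above, using scikit-learn's mutual information estimator as the information gain measure; the data are synthetic, not the Chinese AQI records:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for pollutant/meteorological features and air quality levels.
X, y = make_classification(n_samples=300, n_features=15, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

model = make_pipeline(
    SelectKBest(score_func=mutual_info_classif, k=6),  # information-gain style feature ranking
    SVC(kernel="rbf"),
)
scores = cross_val_score(model, X, y, cv=5)
print(f"IG + SVM cross-validated accuracy: {scores.mean():.3f}")
```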

Keywords: machine learning, air quality classification, air quality index, information gain, support vector machine, cross-validation

Procedia PDF Downloads 232
322 Breast Cancer Survivability Prediction via Classifier Ensemble

Authors: Mohamed Al-Badrashiny, Abdelghani Bellaachia

Abstract:

This paper presents a classifier ensemble approach for predicting the survivability of breast cancer patients using the latest database version of the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute. The system consists of two main components: a feature selection component and a classifier ensemble component. The feature selection component divides the features in the SEER database into four groups. After that, it tries to find the most important features among the four groups that maximize the weighted average F-score of a certain classification algorithm. The ensemble component uses three different classifiers, each of which models a different set of features from SEER through the feature selection module. On top of them, another classifier is used to give the final decision based on the output decisions and confidence scores from each of the underlying classifiers. Different classification algorithms have been examined; the best setup found uses the decision tree, Bayesian network, and Naïve Bayes algorithms for the underlying classifiers and Naïve Bayes for the classifier ensemble step. The system outperforms all published systems to date when evaluated against the exact same data of SEER (period of 1973-2002). It gives an 87.39% weighted average F-score compared to 85.82% and 81.34% for the other published systems. By increasing the data size to cover the whole database (period of 1973-2014), the overall weighted average F-score jumps to 92.4% on the held-out unseen test set.
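A rough sketch of the stacked-ensemble idea with scikit-learn; a Bayesian network classifier is not available in scikit-learn, so a Gaussian Naive Bayes model stands in for it here, and the feature-group selection step is omitted:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB, BernoulliNB
from sklearn.model_selection import cross_val_score

# Synthetic placeholder for SEER-style patient features and a survivability label.
X, y = make_classification(n_samples=500, n_features=12, random_state=1)

base_learners = [
    ("decision_tree", DecisionTreeClassifier(max_depth=5)),
    ("bayes_net_substitute", GaussianNB()),            # stand-in for a Bayesian network
    ("naive_bayes", BernoulliNB()),
]
ensemble = StackingClassifier(estimators=base_learners,
                              final_estimator=GaussianNB())  # meta-level Naive Bayes
scores = cross_val_score(ensemble, X, y, cv=5, scoring="f1_weighted")
print(f"Cross-validated weighted F-score: {scores.mean():.3f}")
```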

Keywords: classifier ensemble, breast cancer survivability, data mining, SEER

Procedia PDF Downloads 320
321 Calibration of the Discrete Element Method Using a Large Shear Box

Authors: C. J. Coetzee, E. Horn

Abstract:

One of the main challenges in using the Discrete Element Method (DEM) is to specify the correct input parameter values. In general, the models are sensitive to the input parameter values and accurate results can only be achieved if the correct values are specified. For the linear contact model, micro-parameters such as the particle density, stiffness, coefficient of friction, as well as the particle size and shape distributions are required. There is a need for a procedure to accurately calibrate these parameters before any attempt can be made to accurately model a complete bulk materials handling system. Since DEM is often used to model applications in the mining and quarrying industries, a calibration procedure was developed for materials that consist of relatively large (up to 40 mm in size) particles. A coarse crushed aggregate was used as the test material. Using a specially designed large shear box with a diameter of 590 mm, the confined Young’s modulus (bulk stiffness) and internal friction angle of the material were measured by means of the confined compression test and the direct shear test respectively. DEM models of the experimental setup were developed and the input parameter values were varied iteratively until a close correlation between the experimental and numerical results was achieved. The calibration process was validated by modelling the pull-out of an anchor from a bed of material. The model results compared well with experimental measurement.

Keywords: Discrete Element Method (DEM), calibration, shear box, anchor pull-out

Procedia PDF Downloads 290
320 Comparative Study of Seismic Isolation as Retrofit Method for Historical Constructions

Authors: Carlos H. Cuadra

Abstract:

Seismic isolation can be used as a retrofit method for historical buildings with the advantage that minimal intervention on the superstructure is required. However, the selection of isolation devices depends on the weight and stiffness of the upper structure. In this study, two buildings are considered for analysis to evaluate the applicability of this retrofitting methodology. Both buildings are located in Akita prefecture in the northern part of Japan. One building is a wooden structure that corresponds to the old council meeting hall of Noshiro city. The second building is a brick masonry structure that was used as the house of a foreign mining engineer and is located in Ani town. Ambient vibration measurements were performed on both buildings to estimate their dynamic characteristics. Then, a target period of vibration of 3 seconds is selected for the isolated systems to estimate the required stiffness of the isolation devices. For the wooden structure, which is a light construction, it was found that natural rubber isolators in combination with friction bearings are suitable for seismic isolation. In the case of the masonry building, elastomeric isolators can be used for its seismic isolation. Lumped mass systems are used for seismic response analysis, and it is verified in both cases that seismic isolation can be used as a retrofitting method for historical constructions. However, in the case of the light building, most of the weight corresponds to the reinforced concrete slab that is required to install the isolation devices.

Keywords: historical building, finite element method, masonry structure, seismic isolation, wooden structure

Procedia PDF Downloads 151
319 Foreign Exchange Volatilities and Stock Prices: Evidence from London Stock Exchange

Authors: Mahdi Karazmodeh, Pooyan Jafari

Abstract:

One of the most interesting topics in finance is the relation between stock prices and exchange rates. During the past decades, different stock markets in different countries have been the subject of study for researchers. The volatilities of exchange rates and their effect on stock prices during the past 10 years have continued to be an attractive research topic. The subject of this study is one of the most important indices, the FTSE 100. Twenty firms with the highest market capitalization in 5 different industries are chosen; the firms belong to the oil and gas, mining, pharmaceuticals, banking and food-related industries. Five different criteria have been introduced to evaluate the relationship between stock markets and exchange rates. The return of the market portfolio and returns on a broad index of Sterling are also introduced. The results state that not all firms are sensitive to changes in exchange rates. Furthermore, a Granger Causality test has been run to observe the direction of changes between stock prices and foreign exchange rates. The results are consistent, to some extent, with previous studies; however, since the number of firms is not large, it is suggested that a larger number of firms be used to achieve the best results. The results also showed that not all firms are affected by foreign exchange rate changes. After testing Granger Causality, this study found that in some industries (oil and gas, pharmaceuticals), changes in the foreign exchange rate do not cause changes in stock prices (or vice versa); however, in the banking sector, the situation was different, and this industry showed more reaction to these changes. The results are similar to those of Richards and Noel, where a variety of firms in different industries were evaluated.
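A minimal sketch of a Granger causality test with statsmodels on synthetic return series; the lag order and the simulated data are illustrative only, not the FTSE 100 sample used in the study:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 250
fx_returns = rng.normal(size=n)                               # placeholder exchange-rate returns
stock_returns = 0.4 * np.roll(fx_returns, 1) + rng.normal(scale=0.5, size=n)

# Test whether the series in the second column Granger-causes the one in the first.
data = np.column_stack([stock_returns, fx_returns])
results = grangercausalitytests(data, maxlag=2)
```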

Keywords: stock prices, foreign exchange rate, exchange rate exposure, Granger Causality

Procedia PDF Downloads 441
318 Information Communication Technology Based Road Traffic Accidents’ Identification, and Related Smart Solution Utilizing Big Data

Authors: Ghulam Haider Haidaree, Nsenda Lukumwena

Abstract:

Today the world of research enjoys abundant data, available in virtually any field: technology, science, business, politics, etc. This is commonly referred to as big data. It offers a great deal of precision and accuracy, supportive of an in-depth look at any decision-making process. When well used, big data affords its users the opportunity to produce substantially well-supported results. This paper leans extensively on big data to investigate possible smart solutions to urban mobility and related issues, namely road traffic accidents and their casualties and fatalities, based on multiple factors including age, gender, location of accident occurrences, etc. Multiple technologies were used in combination to produce an Information Communication Technology (ICT) based solution with embedded technology. Those technologies principally include Geographic Information Systems (GIS), the Orange data mining software, and Bayesian statistics, to name a few. The study uses the Leeds 2016 accident data to illustrate the thinking process and extracts from it a model that can be tested, evaluated, and replicated. The authors optimistically believe that the proposed model will significantly and smartly help to flatten the curve of road traffic accidents in fast-growing, densely populated areas, where motor-based mobility is increasing considerably.

Keywords: accident factors, geographic information system, information communication technology, mobility

Procedia PDF Downloads 206
317 Improved Classification Procedure for Imbalanced and Overlapped Situations

Authors: Hankyu Lee, Seoung Bum Kim

Abstract:

The issue of imbalance and overlap in the class distribution is important in various applications of data mining. An imbalanced dataset is a special case in classification problems in which the number of observations of one class (i.e., the major class) heavily exceeds the number of observations of the other class (i.e., the minor class). An overlapped dataset is the case where many observations are shared between the two classes. Imbalanced and overlapped data can frequently be found in many real examples, including fraud and abuse detection in healthcare, quality prediction in manufacturing, text classification, oil spill detection, remote sensing, and so on. The class imbalance and overlap problem is a challenging issue because it degrades the performance of most standard classification algorithms. In this study, we propose a classification procedure that can effectively handle imbalanced and overlapped datasets by splitting the data space into three parts: non-overlapping, lightly overlapping, and severely overlapping, and applying a classification algorithm in each part. These three parts were determined based on the Hausdorff distance and the margin of the modified support vector machine. An experimental study was conducted to examine the properties of the proposed method and compare it with other classification algorithms. The results showed that the proposed method outperformed the competitors under various imbalanced and overlapped situations. Moreover, the applicability of the proposed method was demonstrated through an experiment with real data.
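A simplified sketch of the data-space splitting idea: the margin of a preliminary SVM is used to separate observations in the overlap region from the rest, and a separate classifier is fitted to each part. The Hausdorff-distance step, the three-way split, and the exact thresholds used by the authors are not reproduced here:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# Imbalanced, overlapping synthetic data (illustrative only).
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], class_sep=0.5,
                           random_state=0)

svm = SVC(kernel="linear").fit(X, y)
margin = np.abs(svm.decision_function(X))

overlap = margin < 1.0            # assumed threshold: points near the decision boundary
clear = ~overlap

clf_clear = RandomForestClassifier(random_state=0).fit(X[clear], y[clear])
clf_overlap = RandomForestClassifier(class_weight="balanced",
                                     random_state=0).fit(X[overlap], y[overlap])

print(f"{overlap.sum()} observations fall in the overlap region")
```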

Keywords: classification, imbalanced data with class overlap, split data space, support vector machine

Procedia PDF Downloads 305
316 Laboratory Scale Experimental Studies on CO₂ Based Underground Coal Gasification in Context of Clean Coal Technology

Authors: Geeta Kumari, Prabu Vairakannu

Abstract:

Coal is the largest fossil fuel resource. In India, around 37% of coal resources are found at a depth of more than 300 meters, and more than 70% of electricity production depends on coal. Coal combustion produces greenhouse and pollutant gases such as CO₂, SOₓ, NOₓ, and H₂S. Underground coal gasification (UCG) is an efficient and economic in-situ clean coal technology, which converts these unmineable coals into valuable calorific gases. The UCG syngas (mainly H₂, CO, CH₄ and some lighter hydrocarbons) can be utilized for the production of electricity and the manufacturing of various useful chemical feedstocks. It is an inherently clean coal technology as it avoids ash disposal, mining, transportation and storage problems. Gasification of underground coal using steam as a gasifying medium is not an easy process, because sending superheated steam to deep underground coal leads to major transportation difficulties and is not cost effective. Therefore, to reduce this problem, we have used CO₂, a major greenhouse gas, as the gasifying medium. This paper focuses on a laboratory-scale underground coal gasification experiment on a coal block using CO₂ as the gasifying medium. In the present experiment, oxygen was first injected for combustion for 1 hour; when the temperature of the zones reached more than 1000 ºC, the supply of CO₂ as the gasifying medium was started. The gasification experiment was performed at atmospheric pressure of CO₂, and it was found that the amount of CO produced due to the Boudouard reaction (C + CO₂ → 2CO) is around 35%. The experiment was conducted for almost 5 hours. The maximum gas composition observed was 35% CO, 22% H₂, and 11% CH₄, with an LHV of 248.1 kJ/mol at a CO₂/O₂ ratio of 0.4 by volume.

Keywords: underground coal gasification, clean coal technology, calorific value, syngas

Procedia PDF Downloads 225
315 Spatial Information and Urbanizing Futures

Authors: Mohammad Talei, Neda Ranjbar Nosheri, Reza Kazemi Gorzadini

Abstract:

Today, municipalities are searching for new tools for increasing public participation at different levels of urban planning. This approach to urban planning involves the community in the planning process using participatory approaches instead of the traditional top-down planning methods. These tools can be used to obtain the particular problems of urban furniture from the residents’ point of view. One of the tools designed with this goal is public participation GIS (PPGIS), which enables citizens to record and follow up on their perceptions and spatial knowledge regarding the main problems of the city, specifically urban furniture, in the form of maps. However, despite the good intentions of PPGIS, its practical implementation in developing countries faces many problems, including the lack of basic supporting infrastructure and services and the unavailability of sophisticated public participatory models. In this research, we develop a PPGIS using Web 2.0 to collect volunteered geodata and to perform spatial analysis based on Spatial OnLine Analytical Processing (SOLAP) and Spatial Data Mining (SDM). These tools provide urban planners with proper information regarding the type, spatial distribution and clusters of reported problems. The system is implemented in a case study area in Tehran, Iran, and the challenges of making it applicable and its potential for real urban planning have been evaluated. It helps decision makers to better understand, plan and allocate scarce resources for providing the most requested urban furniture.

Keywords: PPGIS, spatial information, urbanizing futures, urban planning

Procedia PDF Downloads 721
314 Tumor Size and Lymph Node Metastasis Detection in Colon Cancer Patients Using MR Images

Authors: Mohammadreza Hedyehzadeh, Mahdi Yousefi

Abstract:

Colon cancer is one of the most common cancers, and its prevalence is predicted to increase due to poor eating habits. Nowadays, due to the busyness of people, the use of fast food is increasing, and therefore, diagnosis of this disease and its treatment are of particular importance. To determine the best treatment approach for each specific colon cancer patient, the oncologist needs to know the stage of the tumor. The most common method to determine the tumor stage is the TNM staging system. In this system, M indicates the presence of metastasis, N indicates the extent of spread to the lymph nodes, and T indicates the size of the tumor. It is clear that in order to determine all three of these parameters, an imaging method must be used, and the gold standard imaging protocols for this purpose are CT and PET/CT. In CT imaging, due to the use of X-rays, the risk of cancer and the absorbed dose of the patient are high, while in the PET/CT method, there is a lack of access to the device due to its high cost. Therefore, in this study, we aimed to estimate the tumor size and the extent of its spread to the lymph nodes using MR images. More than 1300 MR images were collected from the TCIA portal; in the first (pre-processing) step, histogram equalization was applied to improve image quality and the images were resized to the same size. Two expert radiologists, who have worked for more than 21 years on colon cancer cases, segmented the images and extracted the tumor region. The next step is feature extraction from the segmented images and then classification of the data into three classes: T0N0, T3N1 and T3N2. In this article, the VGG-16 convolutional neural network has been used to perform both of the above-mentioned tasks, i.e., feature extraction and classification. This network has 13 convolution layers for feature extraction and three fully connected layers with the softmax activation function for classification. In order to validate the proposed method, the 10-fold cross-validation method was used in such a way that the data were randomly divided into three parts: training (70% of the data), validation (10% of the data) and the rest for testing. This is repeated 10 times; each time, the accuracy, sensitivity and specificity of the model are calculated, and the average of the ten repetitions is reported as the result. The accuracy, specificity and sensitivity of the proposed method on the testing dataset were 89.09%, 95.8% and 96.4%. Compared to previous studies, the use of a safe imaging technique (MRI) and the non-use of predefined hand-crafted imaging features to determine the stage of colon cancer patients are some of the study's advantages.
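A schematic sketch of using VGG-16 for the three TNM classes with Keras: the pretrained convolutional base provides the 13 feature-extraction layers, and three dense layers form the classification head. Input size, layer freezing and training settings are illustrative assumptions, not the paper's exact configuration:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

NUM_CLASSES = 3  # T0N0, T3N1, T3N2

# Convolutional base of VGG-16 acts as the feature extractor (13 conv layers).
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False            # assumption: keep the pretrained filters frozen

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(4096, activation="relu"),
    layers.Dense(4096, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),   # classification head
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels), epochs=10)
```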

Keywords: colon cancer, VGG-16, magnetic resonance imaging, tumor size, lymph node metastasis

Procedia PDF Downloads 52
313 Educational Leadership and Artificial Intelligence

Authors: Sultan Ghaleb Aldaihani

Abstract:

- The environment in which educational leadership takes place is becoming increasingly complex due to factors like globalization and rapid technological change.
- This is creating a "leadership gap" where the complexity of the environment outpaces the ability of leaders to effectively respond.
- Educational leadership involves guiding teachers and the broader school system towards improved student learning and achievement.
2. Implications of Artificial Intelligence (AI) in Educational Leadership:
- AI has great potential to enhance education, such as through intelligent tutoring systems and automating routine tasks to free up teachers.
- AI can also have significant implications for educational leadership by providing better information and data-driven decision-making capabilities.
- Computer-adaptive testing can provide detailed, individualized data on student learning that leaders can use for instructional decisions and accountability.
3. Enhancing Decision-Making Processes:
- Statistical models and data mining techniques can help identify at-risk students earlier, allowing for targeted interventions.
- Probability-based models can diagnose students likely to drop out, enabling proactive support.
- These data-driven approaches can make resource allocation and decision-making more effective.
4. Improving Efficiency and Productivity:
- AI systems can automate tasks and change processes to improve the efficiency of educational leadership and administration.
- Integrating AI can free up leaders to focus more on their role's human, interactive elements.

Keywords: Education, Leadership, Technology, Artificial Intelligence

Procedia PDF Downloads 31
312 The Use of Five Times Sit-To-Stand Test in Ambulatory People with Spinal Cord Injury When Tested with or without Hands

Authors: Lalita Khuna, Sugalya Amatachaya, Pipatana Amatachaya, Thiwabhorn Thaweewannakij, Pattra Wattanapan

Abstract:

The five times sit-to-stand test (FTSST) has been widely used to quantify lower extremity motor strength (LEMS), dynamic balance ability, and risk of falls in many individuals. Recently, it has been used in ambulatory patients with spinal cord injury (SCI), but variously performed with or without hands according to patients' ability. This difference might affect the validity of the test in these individuals. Thus, this study assessed the concurrent validity of the FTSST in ambulatory individuals with SCI, separately for those who could complete the test with or without hands, using LEMS and standard functional measures as gold standards. Moreover, the data from those who completed the FTSST with and without hands were compared. A total of 56 ambulatory participants with SCI who could complete sit-to-stand with or without hands were assessed for the time to complete the FTSST according to their ability. Then they were assessed for their LEMS scores and functional abilities, including the 10-meter walk test (10MWT), the walking index for spinal cord injury II (WISCI II), the timed up and go test (TUGT), and the 6-minute walk test (6MWT). The Mann-Whitney U test was used to compare the findings between the participants who performed the FTSST with and without hands. The Spearman rank correlation coefficient (ρ) was applied to analyze the levels of correlation between the FTSST and the standard tests (LEMS scores and functional measures). There were significant differences in the data between the participants who performed the test with and without hands (p < 0.01). The time to complete the FTSST of the participants who performed the test without hands showed moderate to strong correlation with total LEMS scores and all functional measures (ρ = -0.71 to 0.69, p < 0.001). On the contrary, the FTSST data of those who performed the test with hands were significantly correlated only with the 10MWT, TUGT, and 6MWT (ρ = -0.47 to 0.57, p < 0.01). The present findings confirm the concurrent validity of the FTSST when performed without hands for LEMS and the functional mobility necessary for the independence and safety of ambulatory individuals with SCI. However, performing the test with hands distorts the ability of the outcomes to reflect LEMS and the WISCI II, which reflect lower limb functions. By contrast, the 10MWT, TUGT, and 6MWT allowed upper limb contribution in the tests; therefore, the outcomes of these tests showed a significant correlation with the outcomes of the FTSST when assessed using hands. Consequently, the use of the FTSST with or without hands needs to consider the clinical application of the outcomes, i.e., whether they are meant to reflect lower limb functions or the mobility of the patients.
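A small sketch of the two statistical steps described above using SciPy, on invented values: a Mann-Whitney U test comparing the with-hands and without-hands groups, and a Spearman correlation between FTSST times and one standard measure:

```python
from scipy.stats import mannwhitneyu, spearmanr

# Invented example values, not the study's measurements.
ftsst_without_hands = [12.1, 14.3, 9.8, 11.5, 13.0]     # seconds
ftsst_with_hands = [16.2, 18.9, 15.4, 17.1, 19.3]        # seconds
walk_speed_10mwt = [1.1, 0.8, 1.3, 1.0, 0.9]             # m/s, matched to the first group

u_stat, p_value = mannwhitneyu(ftsst_without_hands, ftsst_with_hands)
print(f"Mann-Whitney U = {u_stat}, p = {p_value:.3f}")

rho, p_rho = spearmanr(ftsst_without_hands, walk_speed_10mwt)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")
```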

Keywords: mobility, lower limb muscle strength, clinical test, rehabilitation

Procedia PDF Downloads 147
311 A Low-Cost Memristor Based on Hybrid Structures of Metal-Oxide Quantum Dots and Thin Films

Authors: Amir Shariffar, Haider Salman, Tanveer Siddique, Omar Manasreh

Abstract:

According to recent studies on metal-oxide memristors, researchers tend to improve the stability, endurance, and uniformity of resistive switching (RS) behavior in memristors. Specifically, the main challenge is to prevent abrupt ruptures in the memristor’s filament during the RS process. To address this problem, we are proposing a low-cost hybrid structure of metal oxide quantum dots (QDs) and thin films to control the formation of filaments in memristors. We aim to use metal oxide quantum dots because of their unique electronic properties and quantum confinement, which may improve the resistive switching behavior. QDs have discrete energy spectra due to electron confinement in three-dimensional space. Because of Coulomb repulsion between electrons, only a few free electrons are contained in a quantum dot. This fact might guide the growth direction for the conducting filaments in the metal oxide memristor. As a result, it is expected that QDs can improve the endurance and uniformity of RS behavior in memristors. Moreover, we use a hybrid structure of intrinsic n-type quantum dots and p-type thin films to introduce a potential barrier at the junction that can smooth the transition between high and low resistance states. A bottom-up approach is used for fabricating the proposed memristor using different types of metal-oxide QDs and thin films. We synthesize QDs including zinc oxide, molybdenum trioxide, and nickel oxide, combined with spin-coated thin films of titanium dioxide, copper oxide, and hafnium dioxide. We employ fluorine-doped tin oxide (FTO) coated glass as the substrate for deposition and as the bottom electrode. Then, the active layer, composed of one type of quantum dots and the opposite type of thin film, is spin-coated onto the FTO. Lastly, circular gold electrodes are deposited with a shadow mask by using electron-beam (e-beam) evaporation at room temperature. The fabricated devices are characterized using a probe station with a semiconductor parameter analyzer. The current-voltage (I-V) characterization is analyzed for each device to determine the conduction mechanism. We evaluate the memristor’s performance in terms of stability, endurance, and retention time to identify the optimal memristive structure. Finally, we assess the proposed hypothesis before we proceed to the optimization process for fabricating the memristor.

Keywords: memristor, quantum dot, resistive switching, thin film

Procedia PDF Downloads 118
310 Preliminary Study of the Hydrothermal Polymetallic Ore Deposit at the Karancs Mountain, North-East Hungary

Authors: Eszter Kulcsar, Agnes Takacs, Gabriella B. Kiss, Peter Prakfalvi

Abstract:

The Karancs Mountain is part of the Miocene Inner Carpathian Volcanic Belt and is located in N-NE Hungary, along the Hungarian-Slovakian border. The 14 Ma old andesitic-dacitic units are surrounded by Oligocene sedimentary units (sandstone, siltstone). The host rocks of the mineralisation are siliceous and/or argillaceous volcanic units, quartz veins, hydrothermal breccia, and strongly silicified vuggy rocks, found in the various altered volcanic units. The hydrothermal breccia consists of highly silicified vuggy quartz clasts in a quartz matrix. The hydrothermal alteration of the host units shows structural control at the deeper levels. The main ore minerals are galena, pyrite, marcasite, sphalerite, hematite, magnetite, arsenopyrite, anglesite and argentite. The mineralisation was first mentioned in 1944, and the first exploration took place between 1961 and 1962 in the area. The first ore geological studies were performed between 1984 and 1985. The exploration programme was limited only to surface sampling; no drilling programme was performed. Petrographical and preliminary fluid inclusion studies were performed on calcite samples from a galena-bearing vein. Despite the early discovery of the mineralisation, no detailed description is available, thus its size, characteristics, and origin have remained unknown. The aim of this study is to examine the mineralisation, describe its characteristics in detail, and test the possible gold content of the various quartz veins and breccias. Finally, we also investigate the potential relation of the hydrothermal mineralisation to the surrounding similar mineralisations with similar ages (e.g. W-Mátra Mountains in Hungary, Banska Bystrica, Banska Stiavnica in Slovakia) in order to place the mineralisation within the volcanic-hydrothermal evolution of the Miocene Inner Carpathian Belt. As first steps, the study includes field mapping, traditional petrological and ore microscopy; X-ray diffraction analysis; SEM-EDS and EMPA studies on ore minerals, to obtain mineral chemical information. Fluid inclusion petrography and microthermometry and micro-Raman-spectroscopy studies are also planned on quartz-hosted inclusions to investigate the physical and chemical properties of the ore-forming fluid.

Keywords: epithermal, Karancs Mountain, Hungary, Miocene Inner Carpathian volcanic belt, polymetallic ore deposit

Procedia PDF Downloads 128
309 The “Bright Side” of COVID-19: Effects of Livestream Affordances on Consumer Purchase Willingness: Explicit IT Affordances Perspective

Authors: Isaac Owusu Asante, Yushi Jiang, Hailin Tao

Abstract:

Live streaming marketing, a new element of electronic commerce, became an additional marketing channel following the COVID-19 pandemic, and many sellers have leveraged its features to increase sales. Previous studies on live streaming have focused on gaming and on consumers’ loyalty to brands, typically relying on interviews and questionnaires. This study, in contrast, measures real-time, observable interactions between consumers and sellers. Based on affordance theory, it conceptualizes constructs representing the interactive features of live streaming and examines how they drive consumers’ purchase willingness, using 1238 records collected from Amazon Live through manual observation of transaction records. Structural equation modeling with ordinary least squares regression suggests that live viewers, new followers, live chats, and likes positively affect purchase willingness. Sobel and Monte Carlo tests show that new followers, live chats, and likes significantly mediate the relationship between live viewers and purchase willingness. The study introduces a new way of measuring interactions in live streaming commerce and proposes a way to gather consumer behavior data manually from live streaming platforms when the platform’s application programming interface (API) does not support data mining algorithms.
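As a rough illustration of the mediation test described above, the sketch below runs the two OLS regressions of a simple mediation model and computes a Sobel z statistic. The variable names and the data file are hypothetical, not the study’s actual dataset or model specification.

```python
# Sketch of a simple mediation analysis with OLS and a Sobel test.
# Column names and the CSV file are hypothetical placeholders; the
# actual study uses its own variables and model specification.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

df = pd.read_csv("amazon_live_sessions.csv")   # hypothetical data file
x = df["live_viewers"]                         # predictor
m = df["live_chats"]                           # candidate mediator
y = df["purchase_willingness"]                 # outcome

# Path a: predictor -> mediator
model_a = sm.OLS(m, sm.add_constant(x)).fit()
a, se_a = model_a.params["live_viewers"], model_a.bse["live_viewers"]

# Path b: mediator -> outcome, controlling for the predictor
model_b = sm.OLS(y, sm.add_constant(df[["live_viewers", "live_chats"]])).fit()
b, se_b = model_b.params["live_chats"], model_b.bse["live_chats"]

# Sobel test for the indirect effect a*b
sobel_z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
p_value = 2 * stats.norm.sf(abs(sobel_z))
print(f"indirect effect = {a * b:.3f}, Sobel z = {sobel_z:.2f}, p = {p_value:.4f}")
```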

Keywords: livestreaming marketing, live chats, live viewers, likes, new followers, purchase willingness

Procedia PDF Downloads 76
308 A Supervised Approach for Detection of Singleton Spam Reviews

Authors: Atefeh Heydari, Mohammadali Tavakoli, Naomie Salim

Abstract:

In recent years, online reviews have become one of the most important sources of customer opinion. They are increasingly used by individuals and organisations to make purchase and business decisions. Unfortunately, for profit or fame, fraudsters produce deceptive reviews to mislead potential customers. Their activities not only mislead potential customers making purchasing decisions and organisations reshaping their business, but also undermine opinion mining techniques by preventing them from reaching accurate results. Spam reviews can be divided into two main groups: multiple and singleton spam reviews. Detecting a singleton spam review, i.e. the only review written by a given user ID, is extremely challenging because there are few clues to work with. Singleton spam reviews are highly harmful, and many of the features and kinds of evidence used to detect multiple spam reviews are not applicable in this case. The current research proposes a novel supervised technique to detect singleton spam reviews. To achieve this, several new features are proposed in this study and combined with the most appropriate features extracted from the literature, then employed in a classifier. To compare the performance of different classifiers, SVM and naive Bayes classification algorithms were used for model building. The results revealed that SVM was more accurate than naive Bayes and that the proposed technique can detect singleton spam reviews effectively.
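A minimal sketch of the classifier comparison described above is given below; synthetic data stands in for the engineered review features, so the feature matrix, class balance, and scores are illustrative only and not the study’s results.

```python
# Sketch: compare SVM and naive Bayes classifiers on a feature matrix,
# as in the study's model-building step. Synthetic data stands in for the
# engineered singleton-review features; the numbers are illustrative only.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder for the real feature matrix (one row per review, one column
# per engineered feature) and spam / non-spam labels.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2],
                           random_state=42)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "Naive Bayes": GaussianNB(),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f} (+/- {scores.std():.3f})")
```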

Keywords: classification algorithms, Naïve Bayes, opinion review spam detection, singleton review spam detection, support vector machine

Procedia PDF Downloads 304
307 Developing a Place-Name Gazetteer for Singapore by Mining Historical Planning Archives and Selective Crowd-Sourcing

Authors: Kevin F. Hsu, Alvin Chua, Sarah X. Lin

Abstract:

In Singapore’s multilingual society, names for different parts of the city have changed over time. Residents have included Indigenous Malays, dialect-speakers from China, European settler-colonists, and Tamil-speakers from South India, and each group named locations in its own language. Today, as ancestral tongues are increasingly supplanted by English, contemporary Singaporeans’ understanding of once-common place names is disappearing. After demolition or redevelopment, some urban places will exist only in archival records or in human memory. United Nations conferences on the standardization of geographical names have called attention to how place names relate to identity, well-being, and a sense of belonging. The Singapore Place-Naming Project responds to these imperatives by capturing past and present place names through digitizing historical maps, mining archival records, and applying selective crowd-sourcing to trace the evolution of place names throughout the city. The project ensures that both formal and vernacular geographical names remain accessible to historians, city planners, and the public. The project is compiling a gazetteer, a geospatial archive of place names, covering streets, buildings, landmarks, and other points of interest (POIs) appearing in the historical maps and planning documents of Singapore, currently held by the National Archives of Singapore, the National Library Board, university departments, and the Urban Redevelopment Authority. To create a spatial layer of information, the project links each place name to a geo-referenced point, line segment, or polygon, along with the original source material in which the name appears. This record is supplemented by crowd-sourced contributions from civil service officers and heritage specialists, who draw on their collective memory to (1) define the geospatial boundaries of historic places that appear in past documents but may be unfamiliar to users today, and (2) identify and record vernacular place names not captured in formal planning documents. An intuitive interface allows participants to demarcate feature classes, vernacular phrasings, time periods, and other knowledge related to historical or forgotten spaces. Participants are stratified by age band and ethnicity to improve representativeness, and future iterations could allow additional public contributions. Names reveal the meanings that communities assign to each place. While existing historical maps of Singapore allow users to toggle between present-day and historical raster files, this project goes a step further by adding layers of social understanding and planning documents. Tracking place names illuminates linguistic, cultural, commercial, and demographic shifts in Singapore in the context of transformations of the urban environment. The project also demonstrates how a moderated, selectively crowd-sourced effort can solicit useful geospatial data at scale, sourced from different generations and at higher granularity than traditional surveys, while mitigating the negative impacts of unmoderated crowd-sourcing.
Stakeholder agencies believe the project will achieve several objectives, including supporting heritage conservation and public education; safeguarding intangible cultural heritage; providing historical context for street, place, or development-renaming requests; enhancing place-making with deeper historical knowledge; facilitating emergency and social services by tagging legal addresses to vernacular place names; and encouraging public engagement with heritage by eliciting multi-stakeholder input.
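To make the gazetteer record structure described above concrete, the sketch below builds one entry as a GeoJSON-style feature linking a place name to a geometry, source document, language, and time period. All field names and values are hypothetical illustrations, not the project’s actual schema.

```python
# Sketch: one gazetteer record as a GeoJSON-style feature linking a place
# name to a geometry, its source document, language, and time period.
# All field names and values are hypothetical, not the project's schema.
import json

record = {
    "type": "Feature",
    "geometry": {
        # A point is shown here; line segments and polygons would use the
        # "LineString" and "Polygon" geometry types instead.
        "type": "Point",
        "coordinates": [103.8198, 1.3521],  # longitude, latitude
    },
    "properties": {
        "name": "Example Kampong",           # hypothetical place name
        "name_language": "Malay",
        "name_variants": ["Example Village"],
        "feature_class": "settlement",
        "period": {"from": 1920, "to": 1975},
        "source": "hypothetical 1950s planning map, archival reference",
        "contributor": "archival digitization",  # or "crowd-sourced"
    },
}

print(json.dumps(record, indent=2, ensure_ascii=False))
```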

Keywords: collective memory, crowd-sourced, digital heritage, geospatial, geographical names, linguistic heritage, place-naming, Singapore, Southeast Asia

Procedia PDF Downloads 122