Search results for: urban data model
35618 Comparison of Wake Oscillator Models to Predict Vortex-Induced Vibration of Tall Chimneys
Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta
Abstract:
The present study compares semi-empirical wake-oscillator models used to predict vortex-induced vibration of structures, namely the models proposed by Facchinetti, by Farshidian and Dolatabadi, and by Skop and Griffin. These models couple a wake oscillator resembling the Van der Pol oscillator with a single-degree-of-freedom (SDOF) oscillation model. To use these models for estimating the top displacement of chimneys, only the first-mode vibration of the chimneys is considered; the modal equation of the chimney constitutes the SDOF model. The equations of the wake oscillator and the SDOF model are solved simultaneously using an iterative procedure, carried out with the ODE solver of MATLAB. The empirical parameters of the wake-oscillator models are estimated using a newly developed approach, and the resulting response compares well with experimental data. For the comparative study, a tall concrete chimney of height 210 m has been chosen, with a base diameter of 28 m, a top diameter of 20 m, and a wall thickness of 0.3 m. The responses of the chimney are also determined using the linear model proposed by E. Simiu and the deterministic model given in the Eurocode. The comparative study shows that the responses predicted by the Facchinetti model and the Skop and Griffin model are nearly the same, while the Farshidian and Dolatabadi model predicts a higher response. The linear model, which does not account for the aero-elastic phenomenon, predicts a lower response than the non-linear models. Further, for large damping, the Eurocode prediction compares relatively well with those of the non-linear models.
Keywords: chimney, deterministic model, Van der Pol, vortex-induced vibration
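The coupled wake-oscillator/SDOF system described can be sketched numerically. The study used MATLAB's ODE solver; the minimal Python sketch below integrates a Facchinetti-type system with assumed, illustrative parameter values (not the chimney's calibrated empirical parameters).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Facchinetti-type coupled system (illustrative parameters, not the study's):
#   structure: x'' + 2*zeta*wn*x' + wn^2*x = M*q       (modal SDOF of the chimney)
#   wake:      q'' + eps*ws*(q^2 - 1)*q' + ws^2*q = A*x''  (Van der Pol oscillator)
zeta, wn, ws = 0.005, 1.0, 1.0   # damping ratio, structural and shedding frequencies
eps, A, M = 0.3, 12.0, 0.05      # empirical wake parameters (assumed values)

def rhs(t, y):
    x, xd, q, qd = y
    xdd = -2 * zeta * wn * xd - wn**2 * x + M * q
    qdd = -eps * ws * (q**2 - 1) * qd - ws**2 * q + A * xdd
    return [xd, xdd, qd, qdd]

sol = solve_ivp(rhs, (0, 400), [0.0, 0.0, 0.1, 0.0], max_step=0.05)
# steady-state (lock-in) amplitude estimate from the tail of the response
amp = np.max(np.abs(sol.y[0][sol.t > 300]))
print(f"approximate lock-in amplitude: {amp:.4f}")
```

With the shedding and structural frequencies coincident, the wake variable settles onto a Van der Pol limit cycle and the structure locks in at a finite amplitude.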
Procedia PDF Downloads 221
35617 A Gastro-Intestinal Model for a Rational Design of in vitro Systems to Study Drugs Bioavailability
Authors: Pompa Marcello, Mauro Capocelli, Vincenzo Piemonte
Abstract:
This work focuses on a mathematical model able to describe gastro-intestinal physiology and to provide a rational tool for the design of an artificial gastro-intestinal system. The latter is mainly devoted to analysing the absorption and bioavailability of drugs and nutrients through in vitro tests, in order to overcome (or at least partially replace) in vivo trials. The model provides a link, with extended prediction capability, between in vivo tests and the mechanical-laboratory models emulating the human body. On this basis, no empirical equations controlling gastric emptying are implemented in this model, as is frequent in the cited literature; all the sub-units and their related systems of equations are physiologically based. In more detail, the model structure consists of six compartments (stomach, duodenum, jejunum, ileum, colon, and blood) interconnected through pipes and valves. Paracetamol, ketoprofen, irbesartan, and ketoconazole are considered and analysed in this work as reference drugs. The mathematical model has been validated against in vivo literature data. The results show very good model reliability and highlight the possibility of realizing tailored simulations for different patient-drug pairs, including food absorption dynamics.
Keywords: gastro-intestinal model, drugs bioavailability, paracetamol, ketoprofen
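The compartmental structure described can be illustrated with a much-reduced sketch. The study's model has six physiologically based compartments with pipe/valve dynamics; the Python example below collapses this to a three-compartment transit/absorption chain (stomach, intestine, blood) with assumed first-order rate constants, purely for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal three-compartment sketch (stomach -> intestine -> blood) of the kind of
# transit/absorption model described; the study's model has six compartments and
# valve/pipe dynamics, so all rate constants here are assumed, not fitted.
k_empty = 1.5   # gastric emptying rate (1/h), assumed
k_abs = 1.0     # intestinal absorption rate (1/h), assumed
k_elim = 0.3    # elimination from blood (1/h), assumed

def rhs(t, y):
    stomach, intestine, blood = y
    return [
        -k_empty * stomach,                     # drug leaving the stomach
        k_empty * stomach - k_abs * intestine,  # transit in, absorption out
        k_abs * intestine - k_elim * blood,     # absorption in, elimination out
    ]

dose = 500.0  # mg, e.g. a paracetamol tablet
sol = solve_ivp(rhs, (0, 12), [dose, 0.0, 0.0], dense_output=True)
t = np.linspace(0, 12, 200)
blood = sol.sol(t)[2]
t_max = t[np.argmax(blood)]
print(f"peak blood amount {blood.max():.1f} mg at t = {t_max:.2f} h")
```

Such a chain reproduces the characteristic rise-and-decay blood curve whose peak time and height are what in vitro bioavailability tests aim to match.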
Procedia PDF Downloads 169
35616 Adsorption of Malachite Green Dye on Graphene Oxide Nanosheets from Aqueous Solution: Kinetics and Thermodynamics Studies
Authors: Abeer S. Elsherbiny, Ali H. Gemeay
Abstract:
In this study, graphene oxide (GO) nanosheets were synthesized and characterized using different tools such as X-ray diffraction (XRD), Fourier-transform infrared (FT-IR) spectroscopy, BET specific surface area analysis, and transmission electron microscopy (TEM). The prepared GO was investigated for the removal of malachite green, a cationic dye, from aqueous solution via an adsorption process; GO nanosheets are expected to be a good adsorbent for cationic species. The adsorption of malachite green onto the GO nanosheets was carried out under different experimental conditions, varying the contact time, adsorbate concentration, pH, and temperature. The kinetic data were analyzed using four models, the pseudo-first-order model, the pseudo-second-order model, intraparticle diffusion, and the Boyd model, to understand the adsorption behavior of malachite green onto the GO nanosheets and the mechanism of adsorption. The adsorption isotherm was investigated at 25, 35, and 45 °C, and the equilibrium data were fitted well by the Langmuir model. Thermodynamic parameters such as the Gibbs free energy (ΔG°), enthalpy (ΔH°), and entropy (ΔS°) changes were also evaluated. The interaction of malachite green with the GO nanosheets was investigated by FT-IR spectroscopy.
Keywords: adsorption, graphene oxide, kinetics, malachite green
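The kinetic-model fitting described can be sketched as a nonlinear least-squares fit. The uptake data q(t) below are synthetic, not the study's measurements; the two model forms are the standard pseudo-first-order and pseudo-second-order expressions named in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative fit of the two named kinetic models to synthetic uptake data (mg/g).
def pfo(t, qe, k1):            # pseudo-first-order: q = qe*(1 - exp(-k1*t))
    return qe * (1 - np.exp(-k1 * t))

def pso(t, qe, k2):            # pseudo-second-order: q = qe^2*k2*t / (1 + qe*k2*t)
    return qe**2 * k2 * t / (1 + qe * k2 * t)

t = np.array([5, 10, 20, 30, 60, 90, 120.0])    # contact time, min
q = np.array([35, 52, 68, 75, 84, 87, 88.0])    # adsorbed amount, mg/g (synthetic)

p1, _ = curve_fit(pfo, t, q, p0=[90, 0.05])
p2, _ = curve_fit(pso, t, q, p0=[90, 0.001])
for name, f, p in [("PFO", pfo, p1), ("PSO", pso, p2)]:
    ss_res = np.sum((q - f(t, *p)) ** 2)
    r2 = 1 - ss_res / np.sum((q - q.mean()) ** 2)
    print(f"{name}: qe = {p[0]:.1f} mg/g, R^2 = {r2:.4f}")
```

Comparing the R² (or chi-square) of the two fits is the usual basis for deciding which kinetic regime governs the adsorption.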
Procedia PDF Downloads 411
35615 A Fuzzy Structural Equation Model for Development of a Safety Performance Index Assessment Tool in Construction Sites
Authors: Murat Gunduz, Mustafa Ozdemir
Abstract:
In this research, a framework is proposed to model safety performance in construction sites. Determinants of safety performance are to be defined through an extensive literature review, and a multidimensional safety performance model is to be developed. In this context, a questionnaire is to be administered to construction companies with active sites. The collected questionnaire data, including linguistic terms, are then to be defuzzified into concrete numbers using fuzzy set theory, which provides strong instruments for measuring ambiguity and makes it possible to meaningfully represent concepts expressed in natural language. The validity of the proposed safety performance model and the relationships between the determinants of safety performance are to be analyzed using structural equation modeling (SEM), a powerful multivariate analysis technique that enables the evaluation of latent structures. After validation of the model, a software-based safety performance index assessment tool is to be proposed. The proposed assessment tool will be based on the empirically validated theoretical model.
Keywords: fuzzy set theory, safety performance assessment, safety index, structural equation modeling (SEM), construction sites
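The defuzzification step described can be sketched directly: linguistic survey answers are mapped to triangular fuzzy numbers (a, b, c) and reduced to crisp scores via the centroid (a + b + c) / 3. The scale values below are common textbook choices, assumed for illustration, not the study's calibration.

```python
# Centroid defuzzification of triangular fuzzy numbers for linguistic survey terms.
# The (a, b, c) membership triangles below are an assumed five-point scale.
linguistic_scale = {
    "very low":  (0.0, 0.0, 0.25),
    "low":       (0.0, 0.25, 0.5),
    "medium":    (0.25, 0.5, 0.75),
    "high":      (0.5, 0.75, 1.0),
    "very high": (0.75, 1.0, 1.0),
}

def defuzzify(term: str) -> float:
    a, b, c = linguistic_scale[term]
    return (a + b + c) / 3  # centroid of a triangular fuzzy number

answers = ["high", "medium", "very high", "high"]   # one respondent, four items
crisp = [defuzzify(t) for t in answers]
print([round(v, 3) for v in crisp])                 # [0.75, 0.5, 0.917, 0.75]
```

The resulting crisp scores are what would feed the SEM estimation in place of the raw linguistic answers.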
Procedia PDF Downloads 522
35614 Determining the Spatial Vulnerability Levels and Typologies of Coastal Cities to Climate Change: Case of Turkey
Authors: Mediha B. Sılaydın Aydın, Emine D. Kahraman
Abstract:
One of the important impacts of climate change is sea level rise. Turkey is a peninsula, so the coastal areas of the country are threatened by sea level rise, and its urbanized coastal areas are highly vulnerable to climate change. With the aim of enhancing the spatial resilience of urbanized areas, this question arises: what should be the priority intervention subject in the urban planning process for a given city? To answer this question, focusing on the problem of sea level rise, this study aims to determine the spatial vulnerability typologies and levels of Turkish coastal cities based on morphological, physical, and social characteristics. Methodologically, the spatial vulnerability of coastal cities is determined in two steps: level and type. First, physical, morphological, and social structures were examined to determine spatial vulnerability levels; the most vulnerable areas were thereby revealed as priorities for adaptation studies. Second, all parameters were also used to determine spatial typologies, derived for coastal cities to serve as a base for urban planning studies. Adaptation to climate change is crucial for developing countries like Turkey, so this methodology and the resulting typologies could guide urban planners as spatial directors and serve as an example for other developing countries in the context of adaptation to climate change. The results demonstrate that the urban settlements located on the coasts of the Marmara Sea, the Aegean Sea, and the Mediterranean, in that order, are more vulnerable to sea level rise than the cities located on the Black Sea coast.
Keywords: climate change, coastal cities, vulnerability, urban land use planning
Procedia PDF Downloads 327
35613 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices
Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu
Abstract:
Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing its complications. This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD; models that work without it will enable affordable and effective screening even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazards regression analyses were performed to determine the variables with high prognostic value for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory data, laboratory data, and metabolic indices. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). The resulting models can dynamically discriminate individuals at risk of developing CKD, and all performed well with or without laboratory data. Using only non-laboratory data (such as age, sex, body mass index (BMI), and waist circumference), both models predicted chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models thus demonstrate CKD prediction from different categories of diagnostic data, with or without laboratory data. They are simple to use and flexible because they work even with incomplete data and can be applied in any clinical setting, including settings where laboratory data are difficult to obtain.
Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction
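The non-laboratory-only setup described can be sketched as follows. The cohort is not public, so the data below are synthetic with an assumed age/adiposity risk pattern; the model choice (random forest) matches one of the two named algorithms.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Sketch of a risk model using only non-laboratory features (age, sex, BMI,
# waist circumference). Data are synthetic; the risk pattern is an assumption.
rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(20, 80, n)
sex = rng.integers(0, 2, n).astype(float)
bmi = rng.normal(26, 4, n)
waist = bmi * 2.8 + rng.normal(0, 5, n)
# synthetic outcome: risk rises with age and BMI (demo assumption, not clinical fact)
logit = 0.06 * (age - 50) + 0.10 * (bmi - 26) - 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([age, sex, bmi, waist])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC with non-laboratory features only: {auc:.3f}")
```

The point mirrored here is the study's key claim: useful discrimination is achievable from variables obtainable without a blood draw.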
Procedia PDF Downloads 105
35612 A Cross-Sectional Assessment of Maternal Food Insecurity in Urban Settings
Authors: Theresia F. Mrema, Innocent Semali
Abstract:
Food insecurity among pregnant women seriously impedes efforts to reduce maternal mortality in resource-poor countries. This study was carried out to assess the determinants of food insecurity among pregnant women in urban areas. A cross-sectional study design was used to collect data over a period of two weeks. A structured questionnaire with both closed and open-ended questions was used to interview a total of 225 randomly selected pregnant women attending three randomly selected antenatal care (ANC) clinics in Temeke Municipal Council. Food insecurity was measured using a modified version of the USDA’s core food security module, which consists of 15 questions. Logistic regression analysis was used to obtain the strength of association between dependent and independent variables. Among the 225 pregnant women interviewed, 55.1% were food insecure. Food insecurity declined with increasing household wealth; it was also significantly lower among women with fewer than three children than among those with more. A low level of food insecurity was associated with secondary education (adjusted OR=0.24; 95% CI, 0.12–0.48), college education (OR=0.156; 95% CI, 0.05–0.46), paid employment (OR=0.322; 95% CI, 0.11–0.96), and high income (OR=0.031; 95% CI, 0.01–0.07), as well as with having a head of household with secondary education (OR=0.51; 95% CI, 0.07–0.32), college education (OR=0.04; 95% CI, 0.01–0.13), or paid employment (OR=0.225; 95% CI, 0.12–0.42). Food insecurity is a significant problem among pregnant women in Temeke Municipality, one that might significantly affect the health of the pregnant woman and the foetus through maternal malnutrition, which increases the risk of miscarriage, maternal and infant mortality, and poor pregnancy outcomes. The study suggests a multi-sectoral approach to address this problem.
Keywords: food security, nutrition, pregnant women, urban settings
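The kind of association measure reported (odds ratios with 95% confidence intervals) can be illustrated with a simple 2×2-table computation. The counts below are invented for illustration; the study fitted a full multivariable logistic regression, which adjusts for the other covariates.

```python
import math

# Odds ratio with a 95% Wald confidence interval from an invented 2x2 table
# (exposure = secondary education, outcome = food insecurity).
#                    insecure  secure
# secondary edu           20      60
# no secondary           104      41
a, b, c, d = 20, 60, 104, 41

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

An OR below 1 with a CI excluding 1, as in the reported estimates, indicates a protective association between the exposure and food insecurity.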
Procedia PDF Downloads 356
35611 Shedding Light on the Black Box: Explaining Deep Neural Network Prediction of Clinical Outcome
Authors: Yijun Shao, Yan Cheng, Rashmee U. Shah, Charlene R. Weir, Bruce E. Bray, Qing Zeng-Treitler
Abstract:
Deep neural network (DNN) models are being explored in the clinical domain, following their recent success in other domains such as image recognition. For clinical adoption, outcome prediction models require explanation, but due to their multiple non-linear inner transformations, DNN models are viewed by many as a black box. In this study, we developed a deep neural network model for predicting 1-year mortality of patients who underwent major cardiovascular procedures (MCVPs), using a temporal image representation of past medical history as input. The dataset was obtained from the electronic medical data warehouse administered by the Veterans Affairs Informatics and Computing Infrastructure (VINCI). We identified 21,355 veterans who had their first MCVP in 2014. Features for prediction included demographics, diagnoses, procedures, medication orders, hospitalizations, and frailty measures extracted from clinical notes. Temporal variables were created from the patient history data in the 2-year window prior to the index MCVP, and a temporal image was created from these variables for each individual patient. To generate an explanation for the DNN model, we defined a new concept called the impact score, based on the impact of a clinical condition’s presence or value on the predicted outcome. Like the log odds ratios reported by a logistic regression (LR) model, impact scores are continuous variables intended to shed light on the black box model. For comparison, a logistic regression model was fitted on the same dataset. In our cohort, about 6.8% of patients died within one year. The DNN model achieved an area under the curve (AUC) of 78.5%, while the LR model achieved an AUC of 74.6%. A strong but not perfect correlation was found between the aggregated impact scores and the log odds ratios (Spearman’s rho = 0.74), which helped validate our explanation.
Keywords: deep neural network, temporal data, prediction, frailty, logistic regression model
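One plausible reading of the impact-score idea, sketched under stated assumptions: score a binary clinical condition by the mean change in predicted risk when that condition is counterfactually switched off for patients who have it. The "model" below is a hand-written stand-in scoring function, not the paper's DNN, and the exact impact-score definition used in the study may differ.

```python
import numpy as np

# Counterfactual-toggle impact score for binary conditions (illustrative sketch).
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(500, 4)).astype(float)   # 4 binary conditions

def predict_risk(X):
    # stand-in risk model with assumed weights (the trained DNN would go here)
    w = np.array([1.2, 0.4, -0.3, 0.0])
    return 1 / (1 + np.exp(-(X @ w - 1.0)))

def impact_score(X, feature):
    has = X[:, feature] == 1
    X_off = X[has].copy()
    X_off[:, feature] = 0.0            # counterfactual: condition absent
    return float(np.mean(predict_risk(X[has]) - predict_risk(X_off)))

scores = [impact_score(X, j) for j in range(4)]
print([round(s, 3) for s in scores])
```

On this stand-in model, the scores recover the sign and ordering of the assumed weights, which is the behaviour the paper validates against log odds ratios.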
Procedia PDF Downloads 153
35610 Social Enterprise Concept in Sustaining Agro-Industry Development in Indonesia: Case Study of Yourgood Social Business
Authors: Koko Iwan Agus Kurniawan, Dwi Purnomo, Anas Bunyamin, Arif Rahman Jaya
Abstract:
The Fruters model is a concept of technopreneurship based on empowerment, in which technology research results are designed into high value-added products and implemented as a locomotive of collaborative empowerment, so that the impact spreads widely. This model still needs to be inventoried and validated with respect to the variables that influence the business growth process. Model validation, accompanied by mapping, was required so that the model could be applied to small and medium enterprise (SME) agro-industries based on sustainable social business and existing real cases. This research describes the empowerment model of Yourgood, an SME that emphasizes empowering farmers and breeders in rural areas (Cipageran, Cimahi) as well as housewives in urban areas (Bandung), West Java, Indonesia. The research reviewed the literature on agro-industrial development associated with empowerment and the social business process, and derived a distinctive business model picture built on the social business platform. The mapped business model offers several advantages, such as technology acquisition, independence, capital generation, good investment growth, strengthened collaboration, and improved social impacts that can be replicated in other businesses. This research used an analytical-descriptive method combining qualitative analysis based on a design thinking approach with quantitative analysis based on the AHP (Analytic Hierarchy Process). Based on the results, the development of the enterprise’s process was most strongly affected by supplying farmers, with a score of 0.248 out of 1, the most valuable factor for the existence of the enterprise. This was followed by the university (0.178), supplying farmers (0.153), business actors (0.128), government (0.100), distributors (0.092), the techno-preneurship laboratory (0.069), banking (0.033), and non-governmental organizations (NGOs) (0.031).
Keywords: agro-industry, small medium enterprises, empowerment, design thinking, AHP, business model canvas, social business
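The AHP step behind weights like those reported can be sketched as follows: a pairwise comparison matrix is reduced to a priority vector via its principal eigenvector, with a consistency check. The matrix below is an invented 3-actor example, not the study's full comparison among its nine stakeholders.

```python
import numpy as np

# AHP priority vector from a pairwise comparison matrix (invented 3x3 example).
A = np.array([
    [1.0, 3.0, 5.0],   # e.g. supplying farmers vs. university vs. government (assumed)
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                            # normalized priority vector (sums to 1)

lam_max = eigvals[k].real
ci = (lam_max - len(A)) / (len(A) - 1)  # consistency index
cr = ci / 0.58                          # random index for n = 3 (Saaty's table)
print("priorities:", np.round(w, 3), "CR =", round(cr, 3))
```

A consistency ratio below 0.1 is the usual threshold for accepting the judgments; the normalized weights are then directly comparable to scores such as 0.248, 0.178, etc.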
Procedia PDF Downloads 169
35609 The Use of Remotely Sensed Data to Model Habitat Selections of Pileated Woodpeckers (Dryocopus pileatus) in Fragmented Landscapes
Authors: Ruijia Hu, Susanna T.Y. Tong
Abstract:
Light detection and ranging (LiDAR) and four-channel red, green, blue, and near-infrared (RGBI) remotely sensed imagery allow accurate quantification and contiguous measurement of vegetation characteristics and forest structure. This information facilitates the generation of habitat structure variables for forest species distribution modelling. However, applications of remote sensing data, especially the combination of structural and spectral information, to support evidence-based decisions in forest management and conservation practice at the local scale are not widely adopted. In this study, we examined the habitat requirements of the pileated woodpecker (Dryocopus pileatus) (PW) in Hamilton County, Ohio, using ecologically relevant forest structural and vegetation characteristics derived from LiDAR and RGBI data. We hypothesized that PW habitat is shaped by vegetation characteristics directly associated with the availability of food, hiding, and nesting resources, the spatial arrangement of habitat patches within the home range, and proximity to water sources. We used 186 PW presence or absence locations to model occurrence with a generalized additive model (GAM) at two scales, representing foraging range and home range size, respectively. The results confirm PW’s preference for tall and large mature stands with structural complexity, typical of late-successional or old-growth forests. In addition, the crown size of dead trees shows a positive relationship with PW occurrence, indicating the importance of declining living trees or early-stage dead trees within the PW home range; these trees are preferred by PW for nest cavity excavation as the bird balances ease of excavation against tree security. We also found that PW can adjust its travel distance to the nearest water resource, suggesting that habitat fragmentation has certain impacts on PW.
Based on our findings, we recommend that forest managers use different priorities when managing nesting, roosting, and feeding habitats. In particular, when devising forest management and hazard tree removal plans, one needs to consider retaining enough cavity trees within high-quality PW habitat. By mapping PW habitat suitability for the study area, we highlight the importance of riparian corridors in helping PW adjust to the fragmented urban landscape. Indeed, habitat improvement for PW in the study area could be achieved by conserving riparian corridors and promoting riparian forest succession along the major rivers of Hamilton County.
Keywords: deadwood detection, generalized additive model, individual tree crown delineation, LiDAR, pileated woodpecker, RGBI aerial imagery, species distribution models
Procedia PDF Downloads 53
35608 Machine Learning Model to Predict TB Bacteria-Resistant Drugs from TB Isolates
Authors: Rosa Tsegaye Aga, Xuan Jiang, Pavel Vazquez Faci, Siqing Liu, Simon Rayner, Endalkachew Alemu, Markos Abebe
Abstract:
Tuberculosis (TB) is a major cause of disease globally. In most cases, TB is treatable and curable, but only with the proper treatment. Drug-resistant TB arises when the bacteria become resistant to the drugs used to treat the disease. Current strategies to identify drug-resistant TB bacteria are laboratory-based and take a long time to identify the resistant bacteria and treat the patient accordingly, but machine learning (ML) and data science can offer new approaches to the problem. In this study, we propose an ML-based model that predicts the antibiotic resistance phenotypes of TB isolates in minutes, so that the right treatment can be given to the patient immediately. The study used whole-genome sequences (WGS) of TB isolates, extracted from the NCBI repository, as training data. Samples from different countries were included in order to generalize over TB isolates from different regions of the world; this exposes the model to different behaviors of the TB bacteria and makes it robust. Model training considered three types of information extracted from the WGS data: all variants found within the candidate genes (F1), predetermined resistance-associated variants (F2), and resistance-associated gene information for the particular drug. Two major datasets were constructed from this information: F1 and F2 were treated as two independent datasets, and the third type of information was used as the class label for both. Five machine learning algorithms were considered to train the models: Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Gradient Boosting, and AdaBoost.
The models were trained on the datasets F1, F2, and F1F2, i.e., the F1 and F2 datasets merged. Additionally, an ensemble approach was used: the F1 and F2 datasets were each run through the gradient boosting algorithm, the outputs were combined into a single dataset called the F1F2 ensemble dataset, and models were trained on this dataset with the five algorithms. As the experiments show, the ensemble-approach model built on the gradient boosting outputs outperformed the rest of the models. In conclusion, this study suggests the ensemble approach, that is, the RF + gradient boosting model, to predict the antibiotic resistance phenotypes of TB isolates, as it outperformed the rest of the models.
Keywords: machine learning, MTB, WGS, drug-resistant TB
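The described ensemble can be sketched as a two-stage stack: gradient boosting is run separately on the F1 and F2 feature sets, its class probabilities are concatenated into an "F1F2 ensemble" dataset, and a random forest is trained on that. Synthetic features stand in for the WGS variant data, and for simplicity the base models score all rows (a proper pipeline would use out-of-fold predictions to avoid leakage).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the variant features; two groups play the roles of F1 and F2.
X, y = make_classification(n_samples=1200, n_features=40, n_informative=12,
                           random_state=0)
F1, F2 = X[:, :20], X[:, 20:]
idx_tr, idx_te = train_test_split(np.arange(len(y)), test_size=0.3, random_state=0)

stacked = []
for F in (F1, F2):
    gb = GradientBoostingClassifier(random_state=0).fit(F[idx_tr], y[idx_tr])
    stacked.append(gb.predict_proba(F))   # base-model outputs become new features
Z = np.hstack(stacked)                    # the "F1F2 ensemble" dataset

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(Z[idx_tr], y[idx_tr])
acc = accuracy_score(y[idx_te], rf.predict(Z[idx_te]))
print(f"RF + gradient boosting ensemble accuracy: {acc:.3f}")
```

This mirrors the study's best configuration in structure only; its actual features, labels, and performance come from the WGS data.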
Procedia PDF Downloads 52
35607 A Pattern Recognition Neural Network Model for Detection and Classification of SQL Injection Attacks
Authors: Naghmeh Moradpoor Sheykhkanloo
Abstract:
A Structured Query Language Injection (SQLI) attack is a code injection technique in which malicious SQL statements are inserted into a given SQL database simply by using a web browser. Losing data, disclosing confidential information, or even changing the value of data are among the severe damages that an SQLI attack can cause to a database. SQLI has also been rated the number-one attack among the top ten web application threats by the Open Web Application Security Project (OWASP), an open community dedicated to enabling organisations to conceive, develop, acquire, operate, and maintain applications that can be trusted. In this paper, we propose an effective pattern recognition neural network model for the detection and classification of SQLI attacks. The proposed model is built from three main elements: a Uniform Resource Locator (URL) generator, which generates thousands of malicious and benign URLs; a URL classifier, which (1) classifies each generated URL as either benign or malicious and (2) classifies the malicious URLs into different SQLI attack categories; and an NN model, which (1) detects whether a given URL is malicious or benign and (2) identifies the type of SQLI attack for each malicious URL. The model is first trained and then evaluated on thousands of benign and malicious URLs. The results of the experiments are presented to demonstrate the effectiveness of the proposed approach.
Keywords: neural networks, pattern recognition, SQL injection attacks, SQL injection attack classification, SQL injection attack detection
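The detection element can be sketched with a small neural network over character n-gram URL features. The URLs below are a handful of toy examples with assumed feature choices (TF-IDF character bigrams/trigrams), whereas the paper generates thousands of URLs and also classifies attack subtypes.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Toy benign/malicious URLs; real training would use a URL generator at scale.
benign = [
    "http://shop.example.com/item?id=42",
    "http://shop.example.com/item?id=7&sort=price",
    "http://example.com/profile?user=alice",
    "http://example.com/search?q=shoes",
]
malicious = [
    "http://shop.example.com/item?id=42' OR '1'='1",
    "http://example.com/profile?user=a'; DROP TABLE users;--",
    "http://example.com/search?q=1 UNION SELECT password FROM users",
    "http://shop.example.com/item?id=7' AND SLEEP(5)--",
]
urls = benign + malicious
labels = [0] * len(benign) + [1] * len(malicious)

# Character n-grams capture injection tokens (quotes, OR, UNION) regardless of position.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 3)),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(urls, labels)
print(model.predict(["http://example.com/item?id=9 OR 1=1--",
                     "http://example.com/item?id=9"]))
```

With so few samples this only illustrates the pipeline shape; the paper's evaluation on thousands of generated URLs is what supports the reported effectiveness.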
Procedia PDF Downloads 469
35606 A Structuring and Classification Method for Assigning Application Areas to Suitable Digital Factory Models
Authors: R. Hellmuth
Abstract:
The methods of factory planning have changed considerably, especially with regard to planning the factory building itself. Factory planning has the task of designing the products, plants, processes, organization, areas, and building of a factory. Regular restructuring is becoming more important in order to maintain the competitiveness of a factory. Restrictions on new areas, shorter life cycles of products and production technology, and a VUCA world (volatility, uncertainty, complexity, and ambiguity) lead to more frequent restructuring measures within a factory. A digital factory model is the planning basis for rebuilding measures and is becoming an indispensable tool; digital building models are also increasingly being used in factories to support facility management and manufacturing processes. The main research question of this paper is therefore: what kind of digital factory model is suitable for the different areas of application during the operation of a factory? First, different types of digital factory models are investigated, and their properties and usability for various use cases are analysed. The investigation covers point cloud models, building information models, and photogrammetry models, as well as versions of these enriched with sensor data; it examines which digital models allow simple integration of sensor data and where the differences lie. Subsequently, possible application areas of digital factory models are determined by means of a survey, and the respective digital factory models are assigned to the application areas. Finally, an application case from maintenance is selected and implemented with the help of the appropriate digital factory model, showing how a completely digitalized maintenance process can be supported by a digital factory model through the provision of information. Among other purposes, the digital factory model is used for indoor navigation, information provision, and the display of sensor data.
In summary, the paper presents a structuring of digital factory models that concentrates on the geometric representation of a factory building and its technical facilities. A practical application case is shown and implemented, and the systematic selection of digital factory models with the corresponding application cases is evaluated.
Keywords: building information modeling, digital factory model, factory planning, maintenance
Procedia PDF Downloads 110
35605 'Call Drop': A Problem for Handover Minimizing the Call Drop Probability Using Analytical and Statistical Method
Authors: Anshul Gupta, T. Shankar
Abstract:
In this paper, we analyze call drop in order to provide a good quality of service to the user. By minimizing it, we can increase the coverage area and reduce the interference and congestion created in a network. Handover is the transfer of a call from one cell site to another during the call. Here we analyzed the whole network using two methods: a statistical model and an analytical model. In the statistical model, we collected all the data of a network during the busy hour and over a normal 24-hour period; in the analytical model, we derived the equation through which the call drop probability is found. By avoiding unnecessary handovers, we can increase the number of calls per hour. The most important parameter is the coefficient of variation, around which the whole paper is framed.
Keywords: coefficient of variation, mean, standard deviation, call drop probability, handover
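The central statistic named, the coefficient of variation (CV = standard deviation / mean), can be sketched directly on hourly dropped-call counts. The counts below are invented; the study used busy-hour and 24-hour network measurements.

```python
import numpy as np

# Coefficient of variation of dropped-call counts (invented hourly measurements).
busy_hour_drops = np.array([12, 15, 11, 18, 14, 16, 13])  # per busy hour, assumed
off_peak_drops = np.array([3, 2, 4, 2, 3, 3, 2])          # per off-peak hour, assumed

def cv(x):
    """Sample coefficient of variation: std (ddof=1) over mean."""
    return x.std(ddof=1) / x.mean()

print(f"busy-hour CV = {cv(busy_hour_drops):.3f}, "
      f"off-peak CV = {cv(off_peak_drops):.3f}")
```

Being dimensionless, the CV lets drop-rate variability be compared between the busy hour and the 24-hour baseline even though their mean levels differ.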
Procedia PDF Downloads 491
35604 A Meta-Analysis towards an Integrated Framework for Sustainable Urban Transportation within the Concept of Sustainable Cities
Authors: Hande Aladağ, Gökçe Aydın
Abstract:
The world’s population is increasing continuously and rapidly, and there are further problems such as the decline of natural energy resources, global warming, and environmental pollution. These facts have made sustainability an important and primary topic from a future planning perspective, and building sustainable cities and communities can be considered one of the key issues among the sustainable development goals. The concept of sustainable cities can be evaluated under three headings: green/sustainable buildings, self-contained cities, and sustainable transportation. This study concentrates only on how to form and support a sustainable urban transportation system that contributes to sustainable urbanization. An urban transportation system inevitably requires many engineering projects of various sizes. Engineering projects generally have four phases, in the following order: planning, design, construction, and operation. The order holds, but every phase feeds back into the phases upstream of it; in this regard, engineering projects are iterative processes. Sustainability is an integrated and comprehensive concept; thus, it should be among the primary concerns in every phase of transportation projects. In this study, a meta-analysis will be performed on the related studies in the literature. Based on the findings of this meta-analysis, a framework listing principles and actions for sustainable transport will be formed. The meta-analysis will point out and clarify sustainability approaches in every phase of the related engineering projects, while also paying attention to the iterative nature of the process and the relative contribution of each action to the outcomes of the sustainable transportation system. However, the analysis will not be limited to engineering projects; non-engineering solutions will also be included.
The most important contribution of this study is the determination of the outcomes of a sustainable urban transportation system in terms of energy efficiency, resource preservation, and related social, environmental, and economic factors. The study is also important because it will shed light on the engineering and management approaches needed to achieve these outcomes.
Keywords: meta-analysis, sustainability, sustainable cities, sustainable urban transportation, urban transportation
Procedia PDF Downloads 332
35603 Re-Development and Lost Industrial History: Darling Harbour of Sydney
Authors: Ece Kaya
Abstract:
Urban waterfront re-development has been a well-established phenomenon internationally since the 1960s. In cities throughout the world, old industrial waterfront land is being redeveloped into luxury housing, offices, tourist attractions, cultural amenities and shopping centres. These developments are intended to attract high-income residents, tourists and investors to the city. As urban waterfronts are iconic places for cities and catalysts for further development, they are often referred to as flagship projects. In Sydney, the re-development of the industrial waterfront began in the 1980s with the Darling Harbour project. The Darling Harbour waterfront was the main arrival and landing place for commercial and industrial shipping until the 1970s. Its urban development has continued since the establishment of the city. It was developed as a major industrial and goods-handling precinct in 1812, a use that continued until the mid-1970s. After becoming a redundant industrial waterfront, the area was ripe for re-development by 1984. Darling Harbour is now one of the world’s fascinating waterfront leisure and entertainment destinations, and its transformation has been considered a success story, a claim this paper contests. Data collection was carried out using an extensive archival document analysis. The data were obtained from the Australian Institute of Architects, City of Sydney Council Archive, Parramatta Heritage Office, Historic Houses Trust, National Trust, University of Sydney libraries, State Archive, State Library and Sydney Harbour Foreshore Authority Archives. Public documents, primarily newspaper articles and design plans, were analysed to identify possible differences in motives and to determine the process of implementation of the waterfront redevelopments. It was also important to obtain historical photographs and descriptions to understand how the waterfront had been altered.
Site maps from different time periods were examined to understand what kinds of changes happened to the urban landscape and how the developments affected the areas. Newspaper articles and editorials were examined in order to discover what aspects of the projects reflected the history and heritage. The thematic analysis of the archival data helped determine that Darling Harbour is a historically important place, as it represented a focal point of Sydney's industrial growth and the cradle of industrial development in European Australia. It was found that the development area was designated to be transformed into a place for tourism, education, recreational, entertainment, cultural and commercial activities, and as a result little evidence remains of its industrial past. This paper aims to discuss the industrial significance of Darling Harbour and to explain the changes to its industrial landscape. What is absent now is the layer of its history that gives layers of meaning to the place, so its historic industrial identity is effectively lost.
Keywords: historical significance, industrial heritage, industrial waterfront, re-development
35602 Indeterminacy: An Urban Design Tool to Measure Resilience to Climate Change, a Caribbean Case Study
Authors: Tapan Kumar Dhar
Abstract:
How well are our city forms designed to adapt to climate change and its resulting uncertainty? What urban design tools can be used to measure and improve resilience to climate change, and how would they do so? In addressing these questions, this paper considers indeterminacy, a concept that originated in the resilience literature, to measure the resilience of built environments. In the realm of urban design, ‘indeterminacy’ refers to the built-in design capabilities of an urban system to serve different purposes that are not necessarily predetermined. An urban system with a higher degree of indeterminacy can be reorganized and changed to accommodate new or unknown functions while coping with uncertainty over time. Underlying principles of this concept have long been discussed in the urban design and planning literature, including open architecture, landscape urbanism, and flexible housing. This paper argues that the concept of indeterminacy holds the potential to reduce the impacts of climate change incrementally and proactively. With regard to sustainable development, both the planning and climate change literatures highly recommend proactive adaptation, as it involves less cost, effort, and energy than last-minute emergency or reactive actions. Nevertheless, the concept still remains isolated from resilience and climate change adaptation discourses, even though those discourses advocate the incremental transformation of a system to cope with climatic uncertainty. This paper uses indeterminacy, as an urban design tool, to measure and increase the resilience (and adaptive capacity) of Long Bay’s coastal settlements in Negril, Jamaica. Negril is one of the most popular tourism destinations in the Caribbean and is highly vulnerable to sea-level rise and its associated impacts. This paper employs empirical information obtained from direct observation and informal interviews with local people.
While testing the tool, this paper deploys an urban morphology study, which includes land use patterns and the physical characteristics of urban form, including street networks, block patterns, and building footprints. The results reveal that most resorts in Long Bay are designed for predetermined purposes and offer little potential to be used differently if needed. Additionally, Negril’s street networks are found to be rigid and to have limited accessibility to different points of interest. This rigidity can further expose the entire infrastructure to extreme climatic events and also impede recovery actions after a disaster. However, Long Bay still has room for future resilient developments in other, relatively less vulnerable areas. In adapting to climate change, indeterminacy can be reached through design that achieves a balance between the degree of vulnerability and the degree of indeterminacy: the more vulnerable a place is, the more indeterminacy is useful. This paper concludes with a set of urban design typologies to increase the resilience of coastal settlements.
Keywords: climate change adaptation, resilience, sea-level rise, urban form
35601 Exploring the Energy Model of Cumulative Grief
Authors: Masica Jordan Alston, Angela N. Bullock, Angela S. Henderson, Stephanie Strianse, Sade Dunn, Joseph Hackett, Alaysia Black Hackett, Marcus Mason
Abstract:
The Energy Model of Cumulative Grief was created in 2018. It builds on historic stage theories of grief and is additionally unique in its focus on cultural responsiveness. The Energy Model of Cumulative Grief helps to train practitioners who work with clients dealing with grief and loss. This paper introduces this innovative model and explores how it positively impacted a convenience sample of 140 practitioners and individuals experiencing grief and loss. Respondents participated in webinars provided by the National Grief and Loss Center of America (NGLCA). Participants in this cross-sectional research design study completed one of three Grief and Loss Surveys created by the Grief and Loss Centers of America. Data analysis for this study was conducted via SPSS and Survey Hero to examine survey results for respondents. Results indicate that the Energy Model of Cumulative Grief was an effective resource for participants in addressing grief and loss. The majority of participants found the webinars to be helpful and a conduit to higher levels of hope. The findings suggest that using the Energy Model of Cumulative Grief is effective in providing culturally responsive grief and loss resources to practitioners and clients. There are far-reaching implications for the use of technology, through the Energy Model of Cumulative Grief, to provide hope to those suffering from grief and loss worldwide.
Keywords: grief, loss, grief energy, grieving brain
35600 Multiscale Modelling of Citrus Black Spot Transmission Dynamics along the Pre-Harvest Supply Chain
Authors: Muleya Nqobile, Winston Garira
Abstract:
We present a compartmental deterministic multi-scale model that encompasses the internal plant defensive mechanism and pathogen interaction, and then nest this within-host model into the epidemiological model. The objective was to improve our understanding of the within-host and between-host transmission dynamics of Guignardia citricarpa Kiely. The inflow of the infected class was scaled down to the individual level, while the outflow was scaled up to the average population level. A conceptual model and a mathematical model were constructed to provide a theoretical framework that can be used for predicting or identifying disease patterns.
Keywords: epidemiological model, mathematical modelling, multi-scale modelling, immunological model
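The abstract does not give the model equations; as a rough illustration of the nesting idea, the sketch below couples a logistic within-host pathogen load to a between-host susceptible-infected model, with the transmission rate scaled by the current pathogen load. All symbols and parameter values are illustrative assumptions, not the authors' model:

```python
def simulate(days=200.0, dt=0.1, r=0.4, K=1.0,
             beta0=0.3, gamma=0.05, S0=0.99, I0=0.01, P0=0.01):
    """Euler integration of a toy linked within-/between-host model.
    P: mean within-host pathogen load (logistic growth towards K);
    S, I: susceptible and infected host fractions, where the
    transmission rate beta is scaled by the current pathogen load."""
    S, I, P = S0, I0, P0
    for _ in range(int(days / dt)):
        beta = beta0 * P / K                 # load-dependent transmissibility
        dP = r * P * (1.0 - P / K)           # within-host (immunological) scale
        dS = -beta * S * I                   # between-host (epidemiological) scale
        dI = beta * S * I - gamma * I
        P += dP * dt
        S += dS * dt
        I += dI * dt
    return S, I, P
```

Because the within-host load saturates before the epidemic peaks, the between-host outbreak is delayed relative to a constant-rate model, which is the kind of cross-scale effect such nesting is meant to capture.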
35599 Forced Immigration to Turkey: The Socio-Spatial Impacts of Syrian Immigrants on Turkish Cities
Authors: Tolga Levent
Abstract:
Throughout the past few decades, forced immigration has been a significant problem for many developing countries. Turkey is one of those countries, having experienced many forced immigration waves in the Republican era. However, the ongoing forced immigration wave of Syrians, which started with the Syrian Civil War in 2011, is strikingly influential due to its intensity. In six years, approximately 3.4 million Syrians have entered Turkey and become highly spatially concentrated in certain cities proximate to the Syrian border. These concentrations make Syrians and their problems relatively visible, especially in those cities. The problems of Syrians in Turkish cities touch all dimensions of daily life. Within the economic dimension, high rates of Syrian unemployment push them into informal jobs offering very low wages. The financial aid they continuously demand from public authorities triggers anti-Syrian behaviour in local communities. Moreover, their relatively limited social adaptation capacities increase integration problems within the social dimension day by day. There are even problems in the public health dimension, such as the reappearance of certain childhood illnesses due to insufficient vaccination of Syrian children. These problems are significant but relatively easy to prevent using different types of management strategies and structural policies. However, there are other types of problems, urban problems, emerging with the socio-spatial impacts of Syrians on Turkish cities in a very short period of time. There is a relatively limited number of studies about these impacts, since they are difficult to comprehend. The aim of the study, in this respect, is to understand these rapidly emerging impacts and the urban problems resulting from this massive immigration influx, and to discuss the new qualities of urban planning needed to face them. In the first part, there is a brief historical consideration of forced immigration waves in Turkey.
These waves are important for making comparisons with the ongoing immigration wave and understanding its significance. The second part presents quantitative and qualitative analyses of the spatial existence of Syrian immigrants in the city of Mersin, an example of a city where Syrians are highly concentrated. Using official data from public authorities, quantitative statistical analyses are made to detect spatial concentrations of Syrians at the neighborhood level. As methods of qualitative research, observations and in-depth interviews are used to define the socio-spatial impacts of Syrians. The main results show that 'cities in cities' emerge through sharp socio-spatial segregations, which change density surfaces, produce unforeseen land-use patterns, result in inadequacies of public services, and create degradations/deteriorations of the urban environments occupied by Syrians. All these problems are significant; however, the Turkish planning system does not have the capacity to cope with them. In the final part, there is a discussion about the new qualities of urban planning needed to face these impacts and urban problems. The main point of discussion is the possibility of resilient urban planning under the conditions of uncertainty and unpredictability fostered by the immigration crisis. Such a resilient planning approach might provide an option for countries aiming to cope with the negative socio-spatial impacts of massive immigration influxes.
Keywords: cities, forced immigration, Syrians, urban planning
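The abstract does not name the statistic used to detect neighborhood-level concentration; one common, simple measure (an assumption here, not necessarily the authors' choice) is the location quotient, which compares a group's share of a neighborhood with its share of the whole city:

```python
def location_quotient(group_in_area, pop_in_area, group_in_city, pop_in_city):
    """LQ = (neighborhood share of the group) / (citywide share).
    Values well above 1 indicate spatial concentration of the group
    in that neighborhood relative to the city as a whole."""
    return (group_in_area / pop_in_area) / (group_in_city / pop_in_city)

# Hypothetical figures: 5,000 members of a group among 20,000 residents
# in one neighborhood, versus 50,000 among 1,000,000 citywide.
lq = location_quotient(5_000, 20_000, 50_000, 1_000_000)
```

Mapping LQ values across neighborhoods is a straightforward way to visualize the 'cities in cities' pattern the study describes.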
35598 Air Pollution: The Journey from Single Particle Characterization to in vitro Fate
Authors: S. Potgieter-Vermaak, N. Bain, A. Brown, K. Shaw
Abstract:
It is well known from the public news media that air pollution is a health hazard and is responsible for early deaths. Quantifying the relationship between air quality and health is a probing question not easily answered. It is known that airborne particulate matter (APM) <2.5 µm deposits in the tracheal and alveolar zones, and our research probes the possibility of quantifying pulmonary injury by linking reactive oxygen species (ROS) in these particles to DNA damage. Currently, APM mass concentration is linked to early deaths, and only limited studies probe the influence of other properties on human health. To predict the full extent and type of impact, particles need to be characterised for chemical composition and structure. APMs are routinely analysed for their bulk composition, but of late, analyses at the micro level probing single-particle character, using micro-analytical techniques, have been considered. The latter, single particle analysis (SPA), permits one to obtain detailed information on chemical character from nano- to micron-sized particles. This paper aims to provide a snapshot of studies using data obtained from chemical characterisation and its link with in-vitro studies to inform on personal health risks. For this purpose, two studies will be compared, namely, the bioaccessibility of the inhalable fraction of urban road dust versus total suspended particulates (TSP) collected in the same urban environment. The significant influence of metals such as Cu and Fe in TSP on DNA damage is illustrated. The speciation of Hg (determined by SPA) in different urban environments proved to dictate its bioaccessibility in artificial lung fluids, rather than its concentration.
Keywords: air pollution, human health, in-vitro studies, particulate matter
35597 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing
Authors: Rowan P. Martnishn
Abstract:
During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables: historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours’ worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to both clean and format all the data. First, a basic spelling and grammar check was applied, as well as a Python script for normalized formatting, which used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, where words often incorrectly rendered during transcription were recorded and replaced throughout all other documents. Then, to remove all banter and side comments, the transcripts were spliced into paragraphs (separated by change in speaker) and all paragraphs with fewer than 300 characters were removed. Secondly, a keyword extractor, a form of natural language processing in which significant words in a document are selected, was run on each paragraph of every interview. Every proper noun was put into a data structure corresponding to that respective interview. From there, a Bidirectional and Auto-Regressive Transformers (BART) summary model was applied to each paragraph that included any of the proper nouns selected from the interview. At this stage, the information to review had been reduced from about 60 hours’ worth of data to 20. The data was further processed through light manual observation: any summaries that proved to fit the criteria of the proposed deliverable were selected, as well as their locations within the document. This narrowed the data down to about 5 hours’ worth of processing.
The qualitative researchers were then able to find 8 more connections in addition to our previous 4, exceeding our minimum quota of 3 to satisfy the grant. The major findings of the study and the subsequent curation of this methodology raised a conceptual point crucial to working with qualitative data of this magnitude. In the use of artificial intelligence, there is a general trade-off in a model between breadth of knowledge and specificity. If the model has too much knowledge, the user risks leaving out important data (too general). If the tool is too specific, it has not seen enough data to be useful. Thus, this methodology proposes a solution to this trade-off. The data is never altered outside of grammatical and spelling checks. Instead, the important information is marked, creating an indicator of where the significant data is without compromising its purity. Secondly, the data is chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data is harmed, and qualitative experts can go over the raw data instead of using highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form.
Keywords: BART model, keyword extractor, natural language processing, qualitative coding
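The splicing and filtering steps described above can be sketched minimally as follows, with a crude capitalization heuristic standing in for the keyword extractor; the actual extractor and the BART summarizer are not specified in enough detail to reproduce, so treat this as an illustration of the pipeline's shape, not the authors' code:

```python
import re

def split_paragraphs(transcript, min_chars=300):
    """Splice a transcript into speaker-turn paragraphs (blank-line
    separated) and drop banter shorter than min_chars characters,
    mirroring the 300-character filter described above."""
    paragraphs = [p.strip() for p in re.split(r"\n\s*\n", transcript)]
    return [p for p in paragraphs if len(p) >= min_chars]

def proper_nouns(paragraph):
    """Crude stand-in for a keyword extractor: collect capitalized
    tokens that do not begin a sentence."""
    nouns = set()
    for sentence in re.split(r"[.!?]\s+", paragraph):
        for token in sentence.split()[1:]:      # skip sentence-initial word
            if re.fullmatch(r"[A-Z][a-z]+", token):
                nouns.add(token)
    return nouns
```

Paragraphs whose proper-noun sets are non-empty would then be queued for summarization, with their positions retained so coders can always return to the raw transcript.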
35596 An Odyssey to Sustainability: The Urban Archipelago of India
Authors: B. Sudhakara Reddy
Abstract:
This study provides a snapshot of the sustainability of selected Indian cities by employing 70 indicators in four dimensions to develop an overall city sustainability index. In recent years, the concept of ‘urban sustainability’ has become prominent due to its complexity. Urban areas propel growth and at the same time pose many ecological, social and infrastructural problems and risks. In developing countries, high population density and continuous in-migration create the highest risk of natural and man-made disasters. These issues, combined with the inability of policy makers to provide basic services, make cities unsustainable. To assess whether any given policy is moving towards or against urban sustainability, it is necessary to consider the relationships among its various dimensions. Hence, in recent years, an integral approach involving indicators of different dimensions such as ‘economic’, ‘environmental’ and ‘social’ has been used in preparing sustainability indices. It is also important for urban planners, social analysts and other related institutions to identify and understand the relationships in this complex system. The objective of the paper is to develop a city performance index (CPI) to measure and evaluate urban regions in terms of sustainable performance. The objectives include: i) objective assessment of a city’s performance; ii) setting achievable goals; iii) prioritising relevant indicators for improvement; iv) learning from leaders; v) assessment of the effectiveness of programmes that result in high indicator values; vi) strengthening of stakeholder participation. Using the benchmark approach, a conceptual framework is developed for evaluating 25 Indian cities. We develop a City Sustainability Index (CSI) in order to rank cities according to their level of sustainability. The CSI is composed of four dimensions: economic, environmental, social, and institutional.
Each dimension is further composed of multiple indicators: (1) economic, which considers growth, access to electricity, and telephone availability; (2) environmental, which includes waste water treatment and carbon emissions; (3) social, which includes equity and infant mortality; and (4) institutional, which includes voting share of the population and urban regeneration policies. The CSI's four dimensions disaggregate into 12 categories and ultimately into 70 indicators. The data are obtained from public and non-governmental organizations, as well as from city officials and experts. By ranking a sample of diverse cities on a set of specific dimensions, the study can serve as a baseline of current conditions and a marker for referencing future results. The benchmarks and indices presented in the study provide a unique resource for the government and the city authorities to learn about the positive and negative attributes of a city and prepare plans for sustainable urban development. As a result of our conceptual framework, the set of criteria we suggest is somewhat different from any already in the literature. The scope of our analysis is intended to be broad. Although illustrated with specific examples, it should be apparent that the principles identified are relevant to any monitoring used to inform decisions involving decision variables. These indicators are policy-relevant and hence a useful tool for decision-makers and researchers.
Keywords: benchmark, city, indicator, performance, sustainability
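The abstract does not state the aggregation rule behind the CSI; a common construction for such composite indices (an assumption here, not necessarily the study's method) is min-max normalization of each dimension across cities followed by a weighted sum:

```python
def min_max(values):
    """Rescale raw scores to [0, 1]; constant columns map to 0."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def sustainability_ranking(cities, weights):
    """cities: {name: {dimension: raw score}}; weights: {dimension: weight}.
    Returns (name, score) pairs sorted from most to least sustainable."""
    names = list(cities)
    normalized = {d: min_max([cities[c][d] for c in names]) for d in weights}
    scores = {
        c: sum(weights[d] * normalized[d][i] for d in weights)
        for i, c in enumerate(names)
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Normalization matters because the 70 indicators mix units (emissions, mortality rates, vote shares); without it, the largest-magnitude indicator would dominate the ranking.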
35595 Multivariate Control Chart to Determine Efficiency Measurements in Industrial Processes
Authors: J. J. Vargas, N. Prieto, L. A. Toro
Abstract:
Control charts are commonly used to monitor processes involving either variable or attribute quality characteristics, and determining the control limits is a critical task for quality engineers seeking to improve processes. Nonetheless, in some applications it is necessary to include an estimation of efficiency. In this paper, the ability to determine the efficiency of an industrial process was added to a control chart by incorporating a data envelopment analysis (DEA) approach. In depth, a Bayesian estimation was performed to calculate the posterior probability distribution of parameters such as the means and the variance-covariance matrix. This technique allows the data set to be analysed without assuming the hypothetical large sample implied in the problem, serving instead as an approximation to the finite-sample distribution. A rejection simulation method was carried out to generate random variables from the parameter functions. Each resulting vector was used by the stochastic DEA model over several cycles to establish the distribution of the efficiency measure for each DMU (decision-making unit). A control limit was calculated with the model obtained; if a DMU exhibits low efficiency, the system efficiency is deemed out of control. The efficiency calculation reached a global optimum, which ensures model reliability.
Keywords: data envelopment analysis, DEA, multivariate control chart, rejection simulation method
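The stochastic, multivariate DEA formulation is beyond the abstract's detail; for intuition, in the special case of a single input and a single output, CCR-type DEA efficiency reduces to each DMU's productivity ratio relative to the best performer, and a lower control limit then flags low-efficiency DMUs. This is a simplified sketch of that special case, not the authors' model:

```python
def dea_efficiency(inputs, outputs):
    """Single-input, single-output efficiency: each DMU's
    output/input ratio divided by the best ratio, so the
    best-practice DMU scores 1.0."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

def out_of_control(efficiencies, lower_limit):
    """Indices of DMUs whose efficiency falls below the control limit."""
    return [k for k, e in enumerate(efficiencies) if e < lower_limit]
```

In the full method, each DMU's efficiency is a distribution (one value per simulated parameter vector), and the control limit is set on that distribution rather than on a single point estimate.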
35594 A Study of Variables Affecting on a Quality Assessment of Mathematics Subject in Thailand by Using Value Added Analysis on TIMSS 2011
Authors: Ruangdech Sirikit
Abstract:
The purposes of this research were to study the variables affecting the quality assessment of mathematics in Thailand by using value-added analysis on TIMSS 2011. The data used in this research are secondary data from the 2011 Trends in International Mathematics and Science Study (TIMSS), collected from 6,124 students in 172 schools in Thailand, covering only the mathematics assessment. The data were based on 14 assessment tests of knowledge in mathematics. There were three steps of data analysis: 1) analysis of descriptive statistics; 2) estimation of student competency from the mathematics proficiency assessment using the MULTILOG program; 3) value-added analysis in the quality assessment model using a Value-Added Model with Hierarchical Linear Modeling (HLM) at two levels of analysis. The research results were as follows: 1. Student-level variables that had significant effects on student competency at the .01 level were parental care, resources at home, enjoyment of learning mathematics, and extrinsic motivation in learning mathematics. Variables that had significant effects at the .05 level were parents' education and self-confidence in learning mathematics. 2. The school-level variable that had a significant effect on student competency at the .01 level was extra-large school size. The variable that had a significant effect at the .05 level was medium school size.
Keywords: quality assessment, value-added model, TIMSS, mathematics, Thailand
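The HLM estimates themselves require the TIMSS data, but the core idea of the two-level model, partitioning score variance into within-school and between-school components, can be sketched with an intraclass correlation computed from school-grouped scores. This is a simplified ANOVA-style estimator assuming balanced groups, not the authors' exact procedure:

```python
def intraclass_correlation(schools):
    """schools: {school id: [student scores]}. Returns the share of
    total variance that lies between schools (the ICC), using a
    simple balanced-groups variance decomposition."""
    scores = [s for group in schools.values() for s in group]
    grand_mean = sum(scores) / len(scores)
    means = {k: sum(v) / len(v) for k, v in schools.items()}
    # variance of school means around the grand mean (between-school)
    between = sum((m - grand_mean) ** 2 for m in means.values()) / len(schools)
    # pooled variance of students around their school mean (within-school)
    within = sum((s - means[k]) ** 2
                 for k, v in schools.items() for s in v) / len(scores)
    return between / (between + within)
```

A non-trivial ICC is what justifies the two-level specification: if scores clustered entirely within schools (ICC near 0), single-level regression would suffice.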
35593 Uncertainty of the Brazilian Earth System Model for Solar Radiation
Authors: Elison Eduardo Jardim Bierhals, Claudineia Brazil, Deivid Pires, Rafael Haag, Elton Gimenez Rossini
Abstract:
This study evaluated the uncertainties involved in the solar radiation projections generated by the Brazilian Earth System Model (BESM) of the Center for Weather Forecasting and Climate Studies (CPTEC), which belongs to the Coupled Model Intercomparison Project Phase 5 (CMIP5), with the aim of assessing the model's efficiency in projecting solar radiation and thereby establishing the viability of its use. Two different scenarios elaborated by the Intergovernmental Panel on Climate Change (IPCC) were evaluated: RCP 4.5 (with more optimistic boundary conditions) and RCP 8.5 (with more pessimistic initial conditions). The methods used to verify the accuracy of the model were the Nash coefficient and the statistical bias, as they better represent these atmospheric patterns. BESM showed a tendency to overestimate solar radiation in most regions of the state of Rio Grande do Sul and, under the validation methods adopted in this study, did not achieve satisfactory accuracy.
Keywords: climate changes, projections, solar radiation, uncertainty
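For reference, the two validation statistics named above can be sketched as follows; the Nash coefficient is taken here to be the Nash-Sutcliffe efficiency (NSE), which equals 1 for a perfect fit and 0 when the model predicts no better than the observed mean:

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    mean_obs = sum(observed) / len(observed)
    residual = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    variance = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - residual / variance

def statistical_bias(observed, simulated):
    """Mean error; positive values indicate overestimation,
    the tendency reported for BESM."""
    return sum(s - o for o, s in zip(observed, simulated)) / len(observed)
```

A model that systematically overestimates radiation shows up as a positive bias and, if the offset is large relative to observed variability, a low or even negative NSE.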
35592 An Empirical Investigation of Mobile Banking Services Adoption in Pakistan
Authors: Aijaz A. Shaikh, Richard Glavee-Geo, Heikki Karjaluoto
Abstract:
Adoption of information systems (IS) is receiving increasing attention, such that its implications have been closely monitored and studied by the IS management community, industry and professional gatekeepers. Building on previous research regarding the adoption of technology, this paper develops and validates an integrated model of the adoption of mobile banking. The model originates from the Technology Acceptance Model (TAM) and the Theory of Planned Behaviour (TPB). This paper intends to offer a preliminary scrutiny of the antecedents of the adoption of mobile banking services in the context of a developing country. Data was collected from Pakistan. The findings showed that an integrated TAM and TPB model greatly explains the adoption intention of mobile banking, and that perceived behavioural control and its antecedents play a significant role in predicting adoption. Theoretical and managerial implications of the findings are presented and discussed.
Keywords: developing country, mobile banking service adoption, technology acceptance model, theory of planned behavior
35591 Thin-Layer Drying Characteristics and Modelling of Instant Coffee Solution
Authors: Apolinar Picado, Ronald Solís, Rafael Gamero
Abstract:
The thin-layer drying characteristics of instant coffee solution were investigated in a laboratory tunnel dryer. Drying experiments were carried out at three temperatures (80, 100 and 120 °C) and an air velocity of 1.2 m/s. The experimental drying data were fitted to six thin-layer drying models using non-linear least squares regression analysis. The acceptability of a thin-layer drying model was based on a correlation coefficient close to one and low values of the root mean square error (RMSE) and chi-square (χ²). According to this evaluation, the most suitable model for describing the thin-layer drying process of instant coffee solution is the Page model. Further, the effective moisture diffusivity and the activation energy were computed from the experimental drying data. The effective moisture diffusivity values varied from 1.6133 × 10⁻⁹ to 1.6224 × 10⁻⁹ m²/s over the temperature range studied, and the activation energy was estimated to be 162.62 J/mol.
Keywords: activation energy, diffusivity, instant coffee, thin-layer models
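The Page model is MR = exp(-k·tⁿ), where MR is the dimensionless moisture ratio and t the drying time. The paper fits it by non-linear least squares; a lighter sketch (an assumption, not the authors' procedure) linearizes the model as ln(-ln MR) = ln k + n·ln t and solves by ordinary least squares:

```python
import math

def fit_page(times, moisture_ratios):
    """Estimate (k, n) for the Page model MR = exp(-k * t**n)
    via the linearization ln(-ln MR) = ln k + n * ln t."""
    xs = [math.log(t) for t in times]
    ys = [math.log(-math.log(mr)) for mr in moisture_ratios]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    n = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    k = math.exp(my - n * mx)
    return k, n

def rmse(times, moisture_ratios, k, n):
    """Root mean square error of the fitted Page model."""
    preds = [math.exp(-k * t ** n) for t in times]
    return math.sqrt(sum((mr - p) ** 2
                         for mr, p in zip(moisture_ratios, preds))
                     / len(times))
```

The linearized fit minimizes error in log space rather than in MR itself, so for reported statistics the non-linear fit used in the paper is preferable; the linearization is mainly useful for generating starting values.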
35590 The Next Frontier for Mobile Based Augmented Reality: An Evaluation of AR Uptake in India
Authors: K. Krishna Milan Rao, Nelvin Joseph, Praveen Dwarakanath
Abstract:
Augmented and virtual realities are quickly becoming a hotbed of activity, with millions of dollars being spent on R&D and companies such as Google and Microsoft rushing to stake their claim. Augmented reality (AR) is, however, marching ahead due to the spread of the ideal AR device: the smartphone. Despite its potential, there remains a deep digital divide between developed and developing countries. The Technology Acceptance Model (TAM) and Hofstede's cultural dimensions also predict that the behavioural intention to adopt AR in India will be large. This paper takes a quantified approach by collecting 340 survey responses to AR scenarios and analyzing them statistically. The survey responses show that the intention to use, perceived usefulness and perceived enjoyment dimensions are high among the urban population in India. This, along with exponential smartphone growth, indicates that India is on the cusp of a boom in the AR sector.
Keywords: mobile augmented reality, technology acceptance model, Hofstede cultural dimensions, India
35589 Study of Human Position in Architecture with Contextual Approach
Authors: E. Zarei, M. Bazaei, A. seifi, A. Keshavarzi
Abstract:
Contextualism has always been a main component of urban science. It not only has a great direct and indirect impact on behaviors, events and interactions, but is also one of the basic factors of urban values and identity. Nowadays, cities may exhibit deficiencies in this regard. In theories of environmental design, humanistic orientations focused on culture and cultural variables enable us to transfer information. To communicate with the context in which they live, humans need common memories, understandable symbols and daily activities in that context. The configuration of a place can influence human behavior. The goal of this research is to review 7 projects in different parts of the world with various uses; factors such as ‘sense of place’, ‘sense of belonging’ and ‘social and cultural relations’ are discussed in these projects. The research method used in this project is descriptive-analytic. Library sources and the Internet are the main sources of information, and the method of reasoning used is inductive. The outcome of this research is a set of tables of data extracted from the projects mentioned.
Keywords: contextualism with humanistic approach, sense of place, sense of belonging, social and cultural relations