Search results for: incorrect asset valuation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 750

30 Geovisualisation for Defense Based on a Deep Learning Monocular Depth Reconstruction Approach

Authors: Daniel R. dos Santos, Mateus S. Maldonado, Estevão J. R. Batista

Abstract:

Military commanders are increasingly dependent on spatial awareness: knowing where enemy forces are, understanding how battle scenarios change over time, and visualizing these trends in ways that offer insights for decision-making. Thanks to advances in geospatial technologies and artificial intelligence algorithms, commanders are now able to modernize military operations on a global scale. Geovisualisation has thus become an essential asset in the defense sector. It has become indispensable for better decision-making in dynamic/temporal scenarios, operation planning and management for the battlefield, situational awareness, effective planning, monitoring, and more. For example, a 3D visualization of battlefield data contributes to intelligence analysis, evaluation of post-mission outcomes, and the creation of predictive models that enhance decision-making and strategic planning capabilities. However, traditional visualization methods are slow, expensive, and unscalable. Despite modern technologies for generating 3D point clouds, such as LiDAR and stereo sensors, deep learning-based monocular depth estimation can offer a faster and more detailed view of the environment, transforming single images into visual information that yields valuable insights. We propose a dedicated monocular depth reconstruction approach based on deep learning techniques for the 3D geovisualisation of satellite images, which introduces scalability in terrain reconstruction and data visualization. First, a dataset with more than 7,000 satellite images and associated digital elevation models (DEMs) is created. It is based on high-resolution optical and radar imagery collected from Planet and Copernicus, fused with high-resolution topographic data obtained using technologies such as LiDAR and the associated geographic coordinates. Second, we developed an imagery-DEM fusion strategy that combines feature maps from two encoder-decoder networks: one network is trained with radar and optical bands, while the other is trained with DEM features to compute dense 3D depth. Finally, we constructed a benchmark with sparse depth annotations to facilitate future research. To demonstrate the proposed method's versatility, we evaluated its performance on unannotated satellite images and implemented an enclosed environment useful for geovisualisation applications. The algorithms were developed in Python 3, employing open-source computing libraries, i.e., Open3D, TensorFlow, and PyTorch3D. The proposed method supports fast and accurate decision-making with GIS for the localization of troops, the position of the enemy, and terrain and climate conditions. This analysis enhances situational awareness, enabling commanders to fine-tune their strategies and distribute resources proficiently.
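To make the fusion strategy concrete, the sketch below is a minimal two-branch encoder-decoder in Keras: one encoder ingests stacked optical/radar bands, the other ingests the DEM, their feature maps are concatenated, and a decoder regresses a dense depth map. The tile size, band count, layer widths, and loss are illustrative assumptions, not the architecture used in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def encoder(inputs, name):
    # Two strided conv blocks; depths and kernel sizes are illustrative only.
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu",
                      name=f"{name}_conv1")(inputs)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu",
                      name=f"{name}_conv2")(x)
    return x

# Hypothetical inputs: 4 optical/radar bands and a 1-band DEM on 256x256 tiles.
img_in = layers.Input(shape=(256, 256, 4), name="optical_radar")
dem_in = layers.Input(shape=(256, 256, 1), name="dem")

# Fuse the feature maps of the two encoder branches.
fused = layers.Concatenate()([encoder(img_in, "img"), encoder(dem_in, "dem")])

# Decode back to input resolution and regress one depth value per pixel.
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(fused)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
depth = layers.Conv2D(1, 1, activation="linear", name="dense_depth")(x)

model = Model(inputs=[img_in, dem_in], outputs=depth)
model.compile(optimizer="adam", loss="mae")  # L1 loss on annotated pixels, assumed
model.summary()
```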

Keywords: depth, deep learning, geovisualisation, satellite images

Procedia PDF Downloads 10
29 Closing the Loop between Building Sustainability and Stakeholder Engagement: Case Study of an Australian University

Authors: Karishma Kashyap, Subha D. Parida

Abstract:

Rapid population growth and urbanization are creating pressure throughout the world, with dramatic effects on key services such as water, food, transportation, energy, and infrastructure. The built environment sector is growing concurrently to meet the needs of urbanization. Because of this large-scale development of buildings, they need to be monitored and managed efficiently. Along with appropriate management, climate adaptation is highly crucial, because buildings are one of the major sources of greenhouse gas emissions in their operation phase. To be adaptive, buildings need to take a triple-bottom-line approach to sustainability, i.e., to be socially, environmentally, and economically sustainable. Hence, in order to deliver these sustainability outcomes, there is a growing understanding of, and drive towards, switching to green buildings or renovating existing ones to green standards wherever possible. Academic institutions in particular have been following this trend globally. This is highly significant, as universities usually have high occupancy rates and manage large building portfolios. Moreover, as universities accommodate the future generation of architects, policy makers, and others, they have the potential to set themselves up as best-practice models of research and innovation for the rest of the industry to follow. Their climate adaptation, sustainable growth, and performance management therefore become highly crucial to providing the best services to users. With the objective of evaluating appropriate management mechanisms within academic institutions, a feasibility study was carried out in a recent 5-Star Green Star rated university building (housing the School of Construction) in Victoria (the south-eastern state of Australia). The key aim was to understand the behavioral and social aspects of the building users and management, and the impact of their relationship on overall building sustainability. A survey was used to understand the building occupants’ responses and reactions in terms of their work environment and management. A report was generated based on the survey results, complemented with utility and performance data, which was then used to evaluate the management structure of the university. Following the report, interviews were scheduled with the facility and asset managers in order to understand the approach they use to manage the different buildings on their university campuses (old, new, refurbished), the respective buildings, and the parameters involved in maintaining Green Star performance. The results aim at closing the communication and feedback loop within the respective institutions and assisting facility managers in delivering appropriate stakeholder engagement. For the wider design community, analysis of the data highlights the applicability and significance of prioritizing key stakeholders, integrating desired engagement policies within an institution’s management structures and frameworks, and their effect on building performance.

Keywords: building optimization, green building, post occupancy evaluation, stakeholder engagement

Procedia PDF Downloads 357
28 Hydro Solidarity and Turkey’s Role as a Waterpower in the Middle East: The Peace Water Pipeline Project

Authors: Filippo Verre

Abstract:

This paper explores Turkey’s role as an influential waterpower in the Middle East, emphasizing the Peace Water Pipeline Project (PWPP) as a paradigm of hydro solidarity rather than conventional water diplomacy. Hydro solidarity transcends the strategic and often competitive nature of water diplomacy, highlighting cooperative, inclusive, and mutually beneficial approaches to water resource management. The PWPP, which aimed to transport freshwater from Turkey’s Manavgat River to several water-scarce nations in the Middle East, exemplifies this ethos. By providing a reliable water supply to address the chronic shortages in the region, the project underscored Turkey’s commitment to fostering regional cooperation, stability, and collective well-being through shared water resources. This paper provides an in-depth analysis of the Peace Water Pipeline Project, examining its technical specifications, environmental impact, and political implications. It discusses how the project’s foundation on principles of hydro solidarity could facilitate stronger regional ties, mitigate water-related conflicts, and promote sustainable development. By prioritizing collective benefits over unilateral gains, Turkey’s approach exemplified a transformative model of resource sharing that could inspire similar initiatives globally. This paper argues that the Peace Water Pipeline Project serves as a crucial case study demonstrating how shared natural resources can be leveraged to build trust, enhance cooperation, and achieve common goals in a geopolitically volatile region. The findings emphasize the importance of adopting hydro solidarity as a guiding principle for future transboundary water projects, showcasing how collaborative water management can play a pivotal role in fostering peace, security, and sustainable development in the Middle East and beyond. This research is based on a mixed methodological approach combining qualitative and quantitative methods. The most relevant qualitative methods involve case studies and content analysis. Concretely, the Friendship Dam Project (FDP) between Turkey and Syria is examined to underline the importance of hydro solidarity approaches as opposed to water diplomacy. Analyzing this case aims to identify factors that contribute to successful hydro solidarity agreements, such as effective communication channels, trust-building measures, and adaptive management practices. Concerning content analysis, reviewing and analyzing policy documents, treaties, media reports, and public statements helps identify the official narratives and discourses surrounding the PWPP. This method helps to comprehend fully how different stakeholders frame the issues and what solutions they propose. The quantitative methodology used in this research, complementing the qualitative approaches, involves economic valuation, quantifying the PWPP’s economic impacts on Turkey and the Middle Eastern region. This includes assessing the costs of construction and maintenance and the financial benefits derived from improved water access and reduced conflict. Hydrological modelling is also used as a quantitative research method: simulating water flow and distribution scenarios helps quantify the pipeline’s potential impacts on water resources. By assessing the sustainability of water extraction and predicting how changes in water availability might affect different regions, these models play a crucial role in this research, shedding light on the impact of transboundary infrastructures on water management.

Keywords: hydro-solidarity, Middle East, transboundary water management, peace water pipeline project, water scarcity

Procedia PDF Downloads 39
27 Determination of the Optimum Strike Price of FX Option Call Spread with USD/IDR Volatility and Garman–Kohlhagen Model Analysis

Authors: Bangkit Adhi Nugraha, Bambang Suripto

Abstract:

In September 2016, Bank Indonesia (BI) released regulation no. 18/18/PBI/2016, permitting bank clients to use the FX option call spread on USD/IDR. Basically, this product is a combination in which the client buys an FX call option (paying a premium) and sells an FX call option (receiving a premium) to protect against currency depreciation while capping the potential upside, at a low premium cost. BI classifies this product as a structured product, i.e., a combination of at least two financial instruments, either derivative or non-derivative. The call spread is the first structured product against IDR permitted by BI since 2009, in response to increased demand from Indonesian firms for FX hedging through derivatives to protect their foreign currency assets or liabilities against market risk. The share of hedging products in the Indonesian FX market increased from 35% in 2015 to 40% in 2016, with the majority in swap products (FX forward, FX swap, cross currency swap). Swap pricing is driven by the interest rate differential of the currency pair. The cost of a swap is about 7% for USD/IDR, with a one-year USD/IDR volatility of 13%. That cost level makes swap products seem expensive to hedging buyers. Because the call spread cost (around 1.5-3%) is cheaper than a swap, most Indonesian firms have been using NDF FX call spreads on USD/IDR offshore, with an outstanding amount of around 10 billion USD. The cheaper cost of the call spread is its main advantage for hedging buyers. The problem arises because the BI regulation requires the call spread buyer to carry out dynamic hedging. That means that if the buyer chooses strike price 1 and strike price 2 and the USD/IDR exchange rate surpasses strike price 2, then the buyer must enter another call spread with strike price 1’ (strike price 1’ = strike price 2) and strike price 2’ (strike price 2’ > strike price 1‘). This can double the premium cost of the call spread, or worse, and defeat the buyer’s purpose of finding the cheapest hedging cost. It is therefore crucial for the buyer to choose the optimum strike prices before entering into the transaction. To help hedging buyers find the optimum strike price and avoid expensive multiple premium costs, we examine ten years (2005-2015) of historical USD/IDR volatility data and compare it with the price movement of the USD/IDR call spread using the Garman–Kohlhagen model (the standard formula for FX option pricing). We use statistical tools to analyze data correlation, understand the nature of the call spread price movement over ten years, and determine the factors affecting price movement. We select a range of strike prices and tenors and calculate the probability of dynamic hedging occurring and how much it costs. We found the USD/IDR currency pair to be too uncertain, making dynamic hedging riskier and more expensive. We validated this result using one year of data and found a small RMS error. The study’s results can be used to understand the nature of the FX call spread and to determine optimum strike prices for a hedging plan.
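For reference, a minimal sketch of the Garman–Kohlhagen price of a European FX call and the resulting net premium of a call spread (long the lower strike, short the higher one). The spot, rates, volatility, and strikes below are placeholder values for illustration, not figures from the study.

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF

def gk_call(S, K, T, r_d, r_f, sigma):
    """Garman–Kohlhagen price of a European FX call (domestic = IDR, foreign = USD)."""
    d1 = (log(S / K) + (r_d - r_f + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * exp(-r_f * T) * N(d1) - K * exp(-r_d * T) * N(d2)

def call_spread_premium(S, K1, K2, T, r_d, r_f, sigma):
    """Net premium paid: buy the K1 call, sell the K2 call (K1 < K2)."""
    return gk_call(S, K1, T, r_d, r_f, sigma) - gk_call(S, K2, T, r_d, r_f, sigma)

# Placeholder market data: spot 13,300 IDR/USD, 13% vol, one-year tenor,
# 7% IDR rate, 1.5% USD rate, strikes at 13,500 and 14,500.
premium = call_spread_premium(S=13_300, K1=13_500, K2=14_500, T=1.0,
                              r_d=0.07, r_f=0.015, sigma=0.13)
print(f"call spread premium: {premium:,.0f} IDR per USD of notional")
```

The gap between the two strikes trades off premium savings against the chance that the spot breaches the upper strike, which under the regulation would trigger the purchase of a second, more expensive spread.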

Keywords: FX call spread USD/IDR, USD/IDR volatility statistical analysis, Garman–Kohlhagen Model on FX Option USD/IDR, Bank Indonesia Regulation no.18/18/PBI/2016

Procedia PDF Downloads 379
26 Spatial Design Transformation of Mount Merapi's Dwellings Using Diachronic Approach

Authors: Catharina Dwi Astuti Depari, Gregorius Agung Setyonugroho

Abstract:

Where human safety is concerned, living in disaster-prone areas is twofold: it is profoundly cataclysmic yet perceptibly contributive. This paradox can be identified in the Kalitengah Lor Sub-village community, who inhabit Mount Merapi’s most hazardous area, exposing them to the most cataclysmic impacts of eruptions. After the devastating eruption in 2010, through the Action Plan for Rehabilitation and Reconstruction, the national government, with immediate aid from humanitarian agencies, initiated a relocation program by establishing nearly 2,613 temporary shelters throughout the mountain’s region. The problem arose as some of the most affected communities, including those in Kalitengah Lor Sub-village, persistently refused to relocate. The unpleasant experience of those living in temporary shelters, resulting from the program’s failure to support long-term living, was assumed to have instigated the rejection. From a psychological standpoint, this phenomenon reflects the emotional bond between the affected communities and their former dwellings. In this regard, the paper aims to reveal the factors influencing the emotional attachment of the Kalitengah Lor community to their former dwellings, including the dwellings’ spatial design transformation before and after the 2010 eruption. The research adopted a five-point Likert scale questionnaire comprising a range of responses from strongly agree to strongly disagree. The responses were then statistically measured, leading to a consensus that provides a basis for further interpretation of the locals’ characteristics. Using a purposive unit sampling technique, 50 respondents from 217 local households were selected. The questionnaire items were developed with attention to the aspects of the place attachment concept: affection, cognition, behavior, and perception. Combined with the quantitative method, the research adopted a diachronic method aimed at analyzing the spatial design transformation of each dwelling in relation to the inhabitants’ daily activities and personal preferences. The research found that access to natural resources such as sand mining, agricultural farms, and wood forests, social relationships, and the physical proximity of the house to personal assets such as the cattle shed are the dominant factors in the locals’ emotional attachment to their former dwellings. Consequently, each dwelling’s spatial design has undergone changes: the current house is typically larger in dimension, and the bathroom has been replaced by a public toilet located beyond the house’s backyard. Relatively unchanged are the cattle shed, still located in front of the house; the continuous visual relationship, particularly between the living room and family room; and the main orientation of the house towards the local street.

Keywords: diachronic method, former dwellings, locals’ characteristics, place attachment, spatial design transformation

Procedia PDF Downloads 167
25 Adopting a New Policy in Maritime Law for Protecting Ship Mortgagees Against Maritime Liens

Authors: Mojtaba Eshraghi Arani

Abstract:

Ship financing is a vital element in the development of the shipping industry because, while the ship constitutes the owner’s main asset, she is also considered a reliable security from the financier’s viewpoint. However, it is most probable that a financier who has accepted a ship as security will face many creditors who are privileged and rank before him in collecting, out of the ship, the money that they are owed. In fact, under the current rule of maritime law, established by the “Convention Internationale pour l’Unification de Certaines Règles Relatives aux Privilèges et Hypothèques Maritimes, Brussels, 10 April 1926”, mortgages, hypotheques, and other charges on vessels rank after several secured claims referred to as “maritime liens”. These maritime liens form an exhaustive list of claims, including “expenses incurred in the common interest of the creditors to preserve the vessel or to procure its sale and the distribution of the proceeds of sale”, “tonnage dues, light or harbour dues, and other public taxes and charges of the same character”, “claims arising out of the contract of engagement of the master, crew and other persons hired on board”, “remuneration for assistance and salvage”, “the contribution of the vessel in general average”, “indemnities for collision or other damage caused to works forming part of harbours, docks, etc.”, “indemnities for personal injury to passengers or crew or for loss of or damage to cargo”, and “claims resulting from contracts entered into or acts done by the master”. The same rule survived, with only minor changes in the categories of maritime liens, in the substitute conventions of 1967 and 1993. This status quo in maritime law has always been considered a major obstacle to the development of the shipping market and has inevitably led to increases in interest rates and other related costs of ship financing. National and international policy makers have yet to change their minds, worried as they are about deviating from old marine traditions. However, it is crystal clear that the continuation of the status quo will greatly harm shipowners and, consequently, international merchants as a whole. It is argued in this article that the raison d'être for many categories of maritime liens has ceased to exist, in view of which the international community should recognize only a minimal category of maritime liens, namely those created in the common interest of all creditors. To this effect, only the two categories of “compensation due for the salvage of ship” and “extraordinary expenses indispensable for the preservation of the ship” should be declared as taking priority over mortgagee rights, in analogy with the Geneva Convention on the International Recognition of Rights in Aircraft (1948). A qualitative method based on the interpretation of the collected data has been used in this manuscript; the sources of data are international conventions and domestic laws.

Keywords: ship finance, mortgage, maritime liens, Brussels convention, Geneva convention 1948

Procedia PDF Downloads 72
24 Geographic Information Systems and a Breath of Opportunities for Supply Chain Management: Results from a Systematic Literature Review

Authors: Anastasia Tsakiridi

Abstract:

Geographic information systems (GIS) have been utilized in numerous spatial problems, such as site research, land suitability, and demographic analysis. GIS has also been applied in scientific fields such as geography, health, and economics. In business studies, GIS has been used to provide insights and spatial perspectives on demographic trends, spending indicators, and network analysis. To date, information regarding the available usages of GIS in supply chain management (SCM) and how these analyses can benefit businesses is limited. A systematic literature review (SLR) of the peer-reviewed academic literature of the last five years was conducted, aiming to explore the existing usages of GIS in SCM. The searches were performed in 3 databases (Web of Science, ProQuest, and Business Source Premier) and reported using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology. The analysis resulted in 79 papers. The results indicate that the existing GIS applications used in SCM fall into the following domains: a) network/transportation analysis (in 53 of the papers), b) location-allocation site search/selection (multiple-criteria decision analysis) (in 45 papers), c) spatial analysis (demographic or physical) (in 34 papers), d) combination of GIS and supply chain/network optimization tools (in 32 papers), and e) visualization/monitoring or building information modeling applications (in 8 papers). An additional categorization of the literature was conducted by examining the usage of GIS in the supply chain (SC) by business sector, as indicated by the volume of the papers. The results showed that GIS is mainly being applied in the SC of the biomass biofuel/wood industry (33 papers). Other industries currently utilizing GIS in their SC were the logistics industry (22 papers), the humanitarian/emergency/health care sector (10 papers), the food/agro-industry sector (5 papers), the petroleum/coal/shale gas sector (3 papers), the faecal sludge sector (2 papers), the recycling and product footprint industry (2 papers), and the construction sector (2 papers). The results were also presented by the geography of the included studies and the GIS software used, to provide critical business insights and suggestions for future research. The results showed that research case studies of GIS in SCM were conducted in 26 countries (mainly in the USA) and that the most prominent GIS software provider was the Environmental Systems Research Institute’s ArcGIS (in 51 of the papers). This study is a systematic literature review of the usage of GIS in SCM. The results showed that GIS capabilities can offer substantial benefits in SCM decision-making by providing key insights for cost minimization, supplier selection, facility location, SC network configuration, and asset management. However, as presented in the results, only eight industries/sectors are currently using GIS in their SCM activities. These findings may offer essential tools to SC managers who seek to optimize SC activities and/or minimize logistics costs, and to consultants and business owners who want to make strategic SC decisions. Furthermore, the findings may be of interest to researchers aiming to investigate unexplored research areas where GIS may improve SCM.

Keywords: supply chain management, logistics, systematic literature review, GIS

Procedia PDF Downloads 142
23 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions

Authors: Vikrant Gupta, Amrit Goswami

Abstract:

The fixed income market forms the basis of the modern financial market; all other assets in financial markets derive their value from the bond market. Owing to their over-the-counter nature, corporate bonds have relatively little publicly available data and are thus researched far less than equities. Bond price prediction is a complex financial time series forecasting problem and is considered crucial in the domain of finance. Bond prices are highly volatile and noisy, which makes it very difficult for traditional statistical time-series models to capture the complexity of the series patterns, leading to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate forecasting of time series. However, simple machine learning methods such as linear regression, support vector machines, and random forests fail to provide efficient results when tested on highly complex sequences such as stock and bond prices. Hence, to capture these intricate sequence patterns, various deep learning-based methodologies have been discussed in the literature. In this study, a recurrent neural network-based deep learning model using Long Short-Term Memory (LSTM) networks for the prediction of corporate bond prices is discussed. LSTMs have been widely used in the literature for sequence learning tasks in domains such as machine translation and speech recognition. In recent years, various studies have discussed the effectiveness of LSTMs in forecasting complex time-series sequences and have shown promising results compared to other methodologies. LSTMs are a special kind of recurrent neural network capable of learning long-term dependencies, thanks to a memory function that traditional neural networks lack. In this study, a simple LSTM, a stacked LSTM, and a masked LSTM based model are discussed with respect to varying input sequences (three days, seven days, and 14 days). To facilitate faster learning and to gradually decompose the complexity of the bond price sequence, Empirical Mode Decomposition (EMD) has been used, which resulted in an accuracy improvement over the standalone LSTM model. With a variety of technical indicators and the EMD-decomposed time series, the masked LSTM outperformed the other two counterparts in terms of prediction accuracy. To benchmark the proposed model, the results have been compared with a traditional time series model (ARIMA), shallow neural networks, and the three LSTM models discussed above. In summary, our results show that LSTM models provide more accurate results and should be explored further within the asset management industry.
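As a minimal sketch of the modelling setup, the code below frames bond prices as sliding-window sequences and fits the stacked LSTM variant mentioned above. The synthetic price series, seven-day window, and layer widths are assumptions for illustration; the study additionally used EMD decomposition and technical indicators as inputs, which are omitted here.

```python
import numpy as np
from tensorflow.keras import layers, models

def make_windows(prices, window=7):
    """Slice a price series into (window -> next value) supervised pairs."""
    X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
    y = prices[window:]
    return X[..., np.newaxis], y  # LSTM expects (samples, timesteps, features)

# Synthetic stand-in for a corporate bond price series.
prices = np.cumsum(np.random.randn(500)) + 100.0
X, y = make_windows(prices, window=7)  # seven-day input sequence

# Stacked LSTM: two recurrent layers feeding a linear output for the next price.
model = models.Sequential([
    layers.LSTM(64, return_sequences=True, input_shape=(7, 1)),
    layers.LSTM(32),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
print(model.predict(X[-1:]).item())  # one-step-ahead price forecast
```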

Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition

Procedia PDF Downloads 136
22 Co-Movement between Financial Assets: An Empirical Study on Effects of the Depreciation of Yen on Asia Markets

Authors: Yih-Wenn Laih

Abstract:

In recent times, the dependence and co-movement among international financial markets have become stronger than in the past, as evidenced by commentaries in the news media and the financial sections of newspapers. Studying the co-movement between returns in financial markets is an important issue for portfolio management and risk management. Understanding co-movement helps investors to identify opportunities for international portfolio management in terms of asset allocation and pricing. Since the election of the new Prime Minister, Shinzo Abe, in November 2012, the yen has weakened against the US dollar from the 80 level to the 120 level. The policies, known as “Abenomics”, are intended to encourage private investment through a more aggressive mix of monetary and fiscal policy. Given the close economic relations and competition among Asian markets, it is interesting to discover the co-movement relations, as affected by the depreciation of the yen, between the stock market of Japan and 5 major Asian stock markets: China, Hong Kong, Korea, Singapore, and Taiwan. Specifically, we measure the co-movement of stock markets between Japan and each of the 5 Asian stock markets in terms of rank correlation coefficients. To compute the coefficients, the return series of each stock market is first fitted by a skewed-t GARCH (generalized autoregressive conditional heteroscedasticity) model. Second, to measure the dependence structure between matched stock markets, we employ the symmetrized Joe-Clayton (SJC) copula to calculate the probability density function of the paired skewed-t distributions. The joint probability density function is then utilized as the scoring scheme to optimize the sequence alignment by a dynamic programming method. Finally, we compute the rank correlation coefficients (Kendall's τ and Spearman's ρ) between matched stock markets based on their aligned sequences. We collect empirical data on 6 stock indexes from the Taiwan Economic Journal. The data are sampled at a daily frequency covering the period from January 1, 2013 to July 31, 2015. The empirical distributions of returns exhibit fatter tails than the normal distribution; therefore, the skewed-t distribution and SJC copula are appropriate for characterizing the data. According to the computed Kendall’s τ, Korea has the strongest co-movement relation with Japan, followed by Taiwan, China, and Singapore; the weakest is Hong Kong. On the other hand, Spearman’s ρ reveals that the strength of co-movement with Japan, in decreasing order, is Korea, China, Taiwan, Singapore, and Hong Kong. We explore the effects of “Abenomics” on Asian stock markets by measuring the co-movement relation between Japan and the five major Asian stock markets in terms of rank correlation coefficients. The matched markets are aligned by a hybrid method consisting of GARCH, copula, and sequence alignment. The empirical experiments indicate that Korea has the strongest co-movement relation with Japan, the co-movement of China and Taiwan is stronger than that of Singapore, and the Hong Kong market has the weakest co-movement relation with Japan.
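A minimal sketch of the first and last steps of this pipeline: fit a skewed-t GARCH(1,1) to each return series (here via the `arch` package) and compute Kendall's τ and Spearman's ρ on the standardized residuals. The synthetic return series are placeholders, and the SJC copula fitting and dynamic programming alignment stages of the paper's hybrid method are omitted.

```python
import numpy as np
from arch import arch_model            # pip install arch
from scipy.stats import kendalltau, spearmanr

# Synthetic daily returns standing in for two markets (e.g., Japan and Korea).
rng = np.random.default_rng(0)
ret_jp = rng.standard_t(df=5, size=600) * 0.01
ret_kr = 0.6 * ret_jp + rng.standard_t(df=5, size=600) * 0.008

def standardized_residuals(returns):
    """Fit a skewed-t GARCH(1,1) and return the standardized residuals."""
    res = arch_model(returns * 100, vol="GARCH", p=1, q=1,
                     dist="skewt").fit(disp="off")
    return res.resid / res.conditional_volatility

z_jp = standardized_residuals(ret_jp)
z_kr = standardized_residuals(ret_kr)

tau, _ = kendalltau(z_jp, z_kr)
rho, _ = spearmanr(z_jp, z_kr)
print(f"Kendall's tau = {tau:.3f}, Spearman's rho = {rho:.3f}")
```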

Keywords: co-movement, depreciation of Yen, rank correlation, stock market

Procedia PDF Downloads 231
21 Application of Laser-Induced Breakdown Spectroscopy for the Evaluation of Concrete on the Construction Site and in the Laboratory

Authors: Gerd Wilsch, Tobias Guenther, Tobias Voelker

Abstract:

In view of the ageing of vital infrastructure facilities, a reliable condition assessment of concrete structures is becoming of increasing interest to asset owners for planning timely and appropriate maintenance and repair interventions. For concrete structures, reinforcement corrosion induced by penetrating chlorides is the dominant deterioration mechanism affecting serviceability and, eventually, structural performance. The determination of quantitative chloride ingress is required not only to provide valuable information on the present condition of a structure; the data obtained can also be used to predict its future development and the associated risks. At present, wet chemical analysis of ground concrete samples in a laboratory is the most common test procedure for determining the chloride content. As the chloride content is expressed relative to the mass of the binder, the analysis should involve determining both the amount of binder and the amount of chloride contained in a concrete sample. This procedure is laborious, time-consuming, and costly, and the chloride profile obtained is based on depth intervals of 10 mm. LIBS is an economically viable alternative, providing chloride contents at depth intervals of 1 mm or less. It provides two-dimensional maps of quantitative element distributions and can locate spots of higher concentration, such as in a crack. The results are related directly to the mass of the binder, and the method can be applied on-site to deliver instantaneous results for the evaluation of the structure. Examples of the application of the method in the laboratory for investigating the diffusion and migration of chlorides, sulfates, and alkalis are presented, as is an example of the visualization of Li transport in concrete. These examples show the potential of the method for fast, reliable, and automated two-dimensional investigation of transport processes. Owing to the better spatial resolution, more accurate input parameters for model calculations are determined. By the simultaneous detection of elements such as carbon, chlorine, sodium, and potassium, the mutual influence of the different processes can be determined in a single measurement. Furthermore, the application of a mobile LIBS system in a parking garage is demonstrated. It uses a diode-pumped low-energy laser (3 mJ, 1.5 ns, 100 Hz) and a compact NIR spectrometer. A portable scanner allows two-dimensional quantitative element mapping. Results show quantitative chloride analysis on wall and floor surfaces. To determine the 2-D distribution of harmful elements (Cl, C), concrete cores were drilled, split, and analyzed directly on-site. The results obtained were compared and verified against laboratory measurements. The results presented show that the LIBS method is a valuable addition to the standard procedure of wet chemical analysis of ground concrete samples. Currently, work is underway to develop a technical code of practice for the application of the method to the determination of chloride concentration in concrete.
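Quantitative LIBS typically rests on a calibration that maps emission line intensity to concentration in reference samples analyzed by wet chemistry. The sketch below shows the simplest such scheme, a linear univariate calibration inverted to turn a measured depth profile of Cl line intensities into a chloride profile; all numbers are invented placeholders, not data from this work.

```python
import numpy as np

# Hypothetical calibration set: chloride content (wt.% of binder) of reference
# samples from wet chemical analysis vs. measured Cl emission line intensity.
cl_ref = np.array([0.0, 0.2, 0.5, 1.0, 1.5, 2.0])
intensity = np.array([120.0, 410.0, 830.0, 1620.0, 2390.0, 3180.0])  # counts

# Linear calibration: intensity = a * concentration + b.
a, b = np.polyfit(cl_ref, intensity, deg=1)

def chloride_content(measured_intensity):
    """Invert the calibration line to estimate chloride content from intensity."""
    return (measured_intensity - b) / a

# Convert a measured depth profile (one spectrum per 1 mm step) into wt.% binder.
profile = chloride_content(np.array([2900.0, 2100.0, 1300.0, 700.0, 350.0, 180.0]))
print(np.round(profile, 2))
```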

Keywords: chemical analysis, concrete, LIBS, spectroscopy

Procedia PDF Downloads 105
20 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing

Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan

Abstract:

This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions, where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions in which the parameters of the distribution capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in capturing the stylized facts known for stock returns, namely volatility clustering, leverage effect, skewness, kurtosis, and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are taken from a principal component analysis of various world indices; an application to option pricing is presented. The factors of the GARCHX model are extracted from a matrix of world indices by applying principal component analysis (PCA). The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state. The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets (in our paper, a pool of international stock indices) and sorts them by order of relative importance. The PCA computes a historical cross-asset covariance matrix and identifies principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model, following the same PCA methodology, and the standard Black-Scholes model. We show that our model outperforms the benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model, by capturing the stylized facts known for index returns, namely volatility clustering, leverage effect, skewness, kurtosis, and regime dependence.
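A minimal sketch of the factor extraction step: PCA applied to a matrix of world index returns, with the leading components serving as the exogenous GARCHX factors. The synthetic return matrix and the choice of twelve indices are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for a matrix of daily returns of world stock indices
# (rows = trading days, columns = indices).
rng = np.random.default_rng(1)
returns = rng.normal(scale=0.01, size=(1000, 12))

# Extract the three leading principal components as the GARCHX factors.
pca = PCA(n_components=3)
factors = pca.fit_transform(returns)   # shape (1000, 3); scores are uncorrelated

print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
# The three factor columns would then enter the GARCHX variance equation as
# exogenous regressors when calibrating the HTSN-GARCHX model.
```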

Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium

Procedia PDF Downloads 297
19 A Comparative Study on South-East Asian Leading Container Ports: Jawaharlal Nehru Port Trust, Chennai, Singapore, Dubai, and Colombo Ports

Authors: Jonardan Koner, Avinash Purandare

Abstract:

In today’s globalized world, international business is a key area for a country's growth. Strategic enablers for a country’s international business include connecting ports, the road network, and the rail network. India’s international business is booming in both exports and imports. Ports play a central part in the growth of international trade, and ensuring competitive ports is of critical importance. India has a long coastline, which is a big asset for the country, as it has provided the opportunity to develop a large number of major and minor ports that contribute to the development of maritime trade. The national economic development of India requires a well-functioning seaport system. To gauge the comparative strength of Indian ports relative to similar South-east Asian ports, the study considers the objectives of (i) identifying the key parameters of an international mega container port, (ii) comparing the five selected container ports (JNPT, Chennai, Singapore, Dubai, and Colombo) according to the users of the ports, and (iii) measuring and comparing the growth of the five selected container ports’ throughput over time. The study is based on both primary and secondary databases. A linear time trend analysis is done to show the trend in the quantum of exports, imports, and total goods/services handled by individual ports over the years. A comparative trend analysis is done for the five selected ports of cargo traffic handled in terms of tonnage (weight) and number of containers (TEUs), and between containerized and non-containerized cargo traffic in the five selected ports. The primary data analysis comprises a comparative analysis of factor ratings through bar diagrams, statistical inference of factor ratings for the five selected ports, consolidated comparative line and bar charts of factor ratings, and the distribution of ratings (in frequency terms). A linear regression model is used to forecast the container capacities required at JNPT and Chennai Ports by the year 2030. Multiple regression analysis is carried out to measure the impact of the 34 selected explanatory variables on the 'overall performance of the port' for each of the five selected ports. The research outcome is of high significance to the stakeholders of Indian container handling ports. The Indian container ports of JNPT and Chennai are benchmarked against international ports such as Singapore, Dubai, and Colombo, which are the competing ports in the neighbouring region. The study analysed feedback ratings for the 35 selected factors regarding physical infrastructure and services rendered to port users. This feedback provides valuable data for carrying out improvements in the facilities provided to port users, helping them to carry out their work more efficiently.
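A minimal sketch of the linear time trend used for the throughput forecast: fit throughput against year by ordinary least squares and extrapolate to 2030. The throughput figures below are invented placeholders, not JNPT or Chennai statistics.

```python
import numpy as np

# Hypothetical annual container throughput of one port, in million TEUs.
years = np.arange(2010, 2021)
teus = np.array([4.3, 4.5, 4.7, 4.9, 5.1, 5.4, 5.7, 6.0, 6.3, 6.5, 6.8])

# Linear time trend: throughput = a * year + b.
a, b = np.polyfit(years, teus, deg=1)

# Extrapolate the trend to 2030 to estimate the container capacity required.
forecast_2030 = a * 2030 + b
print(f"trend: {a:.3f} M TEU/year; projected 2030 throughput: {forecast_2030:.1f} M TEU")
```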

Keywords: throughput, twenty-foot equivalent units, TEUs, cargo traffic, shipping lines, freight forwarders

Procedia PDF Downloads 131
18 Barriers to Business Model Innovation in the Agri-Food Industry

Authors: Pia Ulvenblad, Henrik Barth, Jennie Cederholm Björklund, Maya Hoveskog, Per-Ola Ulvenblad

Abstract:

The importance of business model innovation (BMI) is widely recognized. This also holds for firms in the agri-food industry, which is closely connected to global challenges: worldwide food production will have to increase by 70% by 2050, and the United Nations’ sustainable development goals prioritize research and innovation on food security and sustainable agriculture. Firms in the agri-food industry have opportunities to increase their competitive advantage through BMI. However, the process of BMI is complex, and the implementation of new business models is associated with a high degree of risk and failure. Thus, managers from all industries, as well as scholars, need to better understand how to address this complexity. Therefore, the research presented in this paper (i) explores different categories of barriers in the research literature on business models in the agri-food industry, and (ii) illustrates these categories of barriers with empirical cases. This study addresses the rather limited understanding of barriers to BMI in the agri-food industry through a systematic literature review (SLR) of 570 peer-reviewed journal articles that contained a combination of ‘BM’ or ‘BMI’ with agriculture-related and food-related terms (e.g., ‘agri-food sector’), published in the period 1990-2014. The study classifies the barriers into several categories and illustrates the identified barriers with ten empirical cases. Findings from the literature review show that barriers are mainly identified as outcomes. It can be assumed that a perceived barrier to growth is often initially exaggerated or underestimated before being challenged by appropriate measures or courses of action. What the public mind considers a barrier could in reality be very different from an actual barrier that needs to be challenged. One way of addressing barriers to growth is to define barriers according to their origin (internal/external) and nature (tangible/intangible). The framework encompasses barriers related to the firm (internal, addressing in-house conditions) or to the industrial or national levels (external, addressing environmental conditions). Tangible barriers can include asset shortages in the area of equipment or facilities, while human resource deficiencies or negative attitudes towards growth are examples of intangible barriers. Our findings are consistent with previous research on barriers to BMI, which has identified human factor barriers (individuals’ attitudes, histories, etc.), contextual barriers related to company and industry settings, and more abstract barriers (government regulations, value chain position, and weather). However, human factor barriers, and opportunities, related to family-owned businesses with idealistic values and attitudes that own the real estate where the business is situated are more frequent in the agri-food industry than in other industries. This paper contributes by generating a classification of the barriers to BMI and illustrating them with empirical cases. We argue that internal barriers such as human factor barriers, values, and attitudes are crucial to overcome in order to develop BMI; however, they can be as hard to overcome as, for example, institutional barriers such as government regulations. The implications for research and practice are to focus on cognitive barriers and to develop the BMI capability of the owners and managers of agri-industry firms.

Keywords: agri-food, barriers, business model, innovation

Procedia PDF Downloads 233
17 The Impact of Trade on Stock Market Integration of Emerging Markets

Authors: Anna M. Pretorius

Abstract:

The emerging markets category for portfolio investment was introduced in 1986 in an attempt to promote capital market development in less developed countries. Investors traditionally diversified their portfolios by investing in different developed markets. However, high growth opportunities forced investors to consider emerging markets as well. Examples include the rapid growth of the “Asian Tigers” during the 1980s, growth in Latin America during the 1990s, and the increased interest in emerging markets during the global financial crisis. As a result, portfolio flows to emerging markets have increased substantially. In 2002, 7% of all equity allocations from advanced economies went to emerging markets; this increased to 20% in 2012. The stronger links between advanced and emerging markets have led to increased synchronization of asset price movements. This increased level of stock market integration for emerging markets is confirmed by various empirical studies. Against the background of increased interest in emerging market assets and the increasing level of integration of emerging markets, this paper focuses on the determinants of stock market integration of emerging market countries. Various studies have linked the level of financial market integration to specific economic variables, including economic growth, local inflation, trade openness, local investment, budget surplus/deficit, market capitalization, domestic bank credit, the domestic institutional and legal environment, and world interest rates. The aim of this study is to investigate empirically to what extent trade-related determinants have an impact on stock market integration. The panel data sample includes data on 16 emerging market countries: Brazil, Chile, China, Colombia, Czech Republic, Hungary, India, Malaysia, Pakistan, Peru, Philippines, Poland, Russian Federation, South Africa, Thailand, and Turkey, for the period 1998-2011. The integration variable for each emerging stock market is calculated as the explanatory power of a multi-factor model, with the factors extracted from a large panel of global stock market returns. Trade-related explanatory variables include exports as a percentage of GDP, imports as a percentage of GDP, and total trade as a percentage of GDP. Other macroeconomic indicators, such as market capitalisation, the size of the budget deficit, and the effectiveness of the regulation of the securities exchange, are included in the regressions as control variables. An initial analysis on a sample of developed stock markets could not identify any significant determinants of stock market integration. Thus, the macroeconomic variables identified in the literature are much more significant in explaining the stock market integration of emerging markets than that of developed markets. The three trade variables are all statistically significant at the 5% level. The market capitalisation variable is also significant, while the regulation variable is only marginally significant. The global financial crisis has highlighted the urgency of better understanding the link between the financial and real sectors of the economy. This paper comes to the important finding that, apart from the level of market capitalisation (a financial indicator), trade (representative of the real economy) is a significant determinant of the stock market integration of countries not yet classified as developed economies.
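A minimal sketch of the kind of panel regression described above: pooled OLS of an integration measure on trade and control variables with country fixed effects, here via statsmodels. The data frame is entirely synthetic; the variable names and the fixed-effects specification are illustrative assumptions rather than the paper's exact estimator.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic panel: 16 countries x 14 years (the study covers 1998-2011).
rng = np.random.default_rng(2)
rows = [(c, y) for c in [f"c{i:02d}" for i in range(16)]
               for y in range(1998, 2012)]
df = pd.DataFrame(rows, columns=["country", "year"])
df["trade_gdp"] = rng.uniform(20, 120, len(df))   # total trade, % of GDP
df["mcap_gdp"] = rng.uniform(10, 150, len(df))    # market capitalisation, % of GDP
df["regulation"] = rng.uniform(0, 10, len(df))    # securities regulation index
# Integration proxy (explanatory power of a multi-factor model in the study).
df["integration"] = (0.002 * df["trade_gdp"] + 0.001 * df["mcap_gdp"]
                     + rng.normal(0, 0.05, len(df)) + 0.2)

# Pooled OLS with country fixed effects via dummy variables.
fit = smf.ols("integration ~ trade_gdp + mcap_gdp + regulation + C(country)",
              data=df).fit()
print(fit.params[["trade_gdp", "mcap_gdp", "regulation"]])
```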

Keywords: emerging markets, financial market integration, panel data, trade

Procedia PDF Downloads 306
16 Video Club as a Pedagogical Tool to Shift Teachers’ Image of the Child

Authors: Allison Tucker, Carolyn Clarke, Erin Keith

Abstract:

Introduction: In education, the determination to uncover privileged practices requires critical reflection to be placed at the center of both pre-service and in-service teacher education. Confronting deficit thinking about children’s abilities, and shifting to an image of the child as capable and competent, is necessary for teachers to engage in responsive pedagogy that meets children where they are in their learning and builds on their strengths. This paper explores the ways in which early elementary teachers' perceptions of the assets of children might shift through the pedagogical use of video clubs. Video club is a pedagogical practice whereby teachers record and view short videos with the intended purpose of deepening their practices. The use of video clubs as a learning tool has been an extensively documented practice. In this study, a video club is used to watch short recordings of playing children in order to identify the assets of the students. Methodology: The study on which this paper is based asks the question: In what ways do teachers’ image of the child and teaching practices evolve through the use of a video club focused on the strengths children demonstrate during play? Using critical reflection, it aims to identify and describe participants’ experiences of examining their personally held image of the child through the pedagogical tool of the video club, and how that image influences their practices, specifically in implementing play pedagogy. Teachers enrolled in a graduate-level play pedagogy course record and watch videos of their own students as a means of noticing and reflecting on the learning that happens during play. Using a co-constructed viewing protocol, teachers identify student strengths and consider their pedagogical responses. The video club provides a framework for teachers to critically reflect in action, return to the video to rewatch the children or themselves, and discuss their noticings with colleagues. Critical reflection occurs when there is focused attention on identifying the ways in which actions perpetuate or challenge issues of inherent power in education. When the teacher’s image of the child comes from a deficit position and is influenced by hegemonic dimensions of practice, critical reflection is essential in naming and addressing power imbalances, biases, and practices that are harmful to children and become barriers to their thriving. The data comprise teacher reflections, analyzed using phenomenology. Phenomenology seeks to understand and appreciate how individuals make sense of their experiences. Teacher reflections are read individually, and the researchers determine pools of meaning. Categories are identified by each researcher, after which commonalities are named through a recursive process of returning to the data until no more themes emerge or saturation is reached. Findings: The final analysis and interpretation of the data are forthcoming. However, emergent analysis of the teacher reflections reveals the ways in which the use of the video club grew teachers’ awareness of their image of the child. It shows the video club to be a promising pedagogical tool when used with in-service teachers to prompt opportunities for play and to challenge deficit thinking about children and their abilities to thrive in learning.

Keywords: asset-based teaching, critical reflection, image of the child, video club

Procedia PDF Downloads 105
15 Measuring Green Growth Indicators: Implication for Policy

Authors: Hanee Ryu

Abstract:

The administration of Korea's former president Lee Myung-bak presented “green growth” as a catchphrase from 2008. He declared “low-carbon, green growth” the nation's vision for the next decade, in line with the United Nations Framework Convention on Climate Change. The government designed an omnidirectional policy for low-carbon, green growth, concentrating the efforts of all departments. Structural change was expected because this slogan defined the identity of the government and was driven strongly across all departments. Now that the administration has ended, the purpose of this paper is to quantify the policy effect and to compare Korea's values with those of other OECD countries. The major target values under direct policy objectives were announced, but they could not capture the entire landscape on which the policy makes changes. This paper gauges the policy impacts by comparing ex-ante indicator values with ex-post values, and benchmarks each index of Korea’s low-carbon, green growth against the values of other OECD countries. To measure the policy effect, indicators developed by international organizations are considered. The Environmental Sustainability Index (ESI) and the Environmental Performance Index (EPI) have been developed by Yale University’s Center for Environmental Law and Policy and Columbia University’s Center for International Earth Science Information Network in collaboration with the World Economic Forum and the Joint Research Centre of the European Commission. They have been widely used to assess the level of natural resource endowments, pollution levels, environmental management efforts, and society’s capacity to improve its environmental performance over time. Recently, the OECD published Green Growth Indicators for monitoring progress towards green growth based on internationally comparable data. They build a conceptual framework and select indicators according to well-specified criteria: economic activities, the natural asset base, the environmental dimension of quality of life, and economic opportunities and policy responses. The framework considers the socio-economic context and reflects the characteristics of growth. Selected indicators are used here to measure the level of change the green growth policies have induced. The results show declining trends in CO2 productivity and energy productivity, meaning that the policy-intended shift in industrial structure towards the carbon emission target had only a weak short-term effect. The increase in green technology patents might result from investment in the preceding period. The increase in official development aid, which can be mobilized immediately by political decision with no time lag, appears only in 2008-2009, meaning that international collaboration and investment in developing countries via ODA was not sustained beyond the initial stage of the administration. The green growth framework led the public to expect structural change, but it shows only sporadic effects. An organization is needed to manage it from a long-range perspective. Energy, climate change, and green growth are not issues that can be handled within a single administration. A policy mechanism that consistently converts the cost problem into value creation should be developed.

Keywords: comparison of ex-ante and ex-post indicators, green growth indicator, implications for green growth policy, measuring policy effect

Procedia PDF Downloads 448
14 Neurodiversity in Post Graduate Medical Education: A Rapid Solution to Faculty Development

Authors: Sana Fatima, Paul Sadler, Jon Cooper, David Mendel, Ayesha Jameel

Abstract:

Background: Neurodiversity refers to intrinsic differences between human minds and encompasses dyspraxia, dyslexia, attention deficit hyperactivity disorder, dyscalculia, autism spectrum disorder, and Tourette syndrome. There is increasing recognition of neurodiversity in relation to disability/diversity in medical education and the associated impact on training, career progression, and personal and professional wellbeing. In addition, documented and anecdotal evidence suggests that medical educators and training providers in all four UK nations are increasingly concerned with understanding neurodiversity and with identifying and providing support for neurodivergent trainees. Summary of Work: A national Neurodiversity Task and Finish group was established to survey Health Education England local office Professional Support teams about insights into infrastructure, training for educators, triggers for assessment, resources, and intervention protocols. This group drew on educational leadership, professional and personal neurodiverse expertise, occupational medicine, employer human resources, and trainees. An online, exploratory survey was conducted to gather insights from supervisors and trainers across England using the Professional Support Units' platform. Summary of Results: The survey highlighted marked heterogeneity in the identification, assessment, and approaches to support and management of neurodivergent trainees, and revealed a 'deficit' approach to neurodiversity. It also demonstrated a paucity of educational and protocol resources for educators and supervisors supporting neurodivergent trainees. Discussion and Conclusions: In phase one, we focused on faculty development. An educational repository for all those supervising trainees, organized thematically, was formalised. Guided by our survey findings specific to neurodiversity, it took a triple 'A' approach: awareness, assessment, and action. This is further supported by video material incorporating stories from training, as well as mobile workshops for trainers for more immersive learning. A subtle theme from both the survey and the Task and Finish group suggested a move away from deficit-focused methods towards a positive, holistic, interdisciplinary approach within a biopsychosocial framework. Contributions: 1. Faculty knowledge and a basic understanding of neurodiversity are key to supporting trainees with known or underlying neurodivergent conditions; this is complicated by challenges around non-disclosure, varied presentations, stigma, and intersectionality. 2. There is national (and international) inconsistency in how trainees are managed once a neurodivergent condition is suspected or diagnosed. 3. A carefully constituted and focused Task and Finish group can rapidly identify national inconsistencies in neurodiversity provision and implement rapid educational interventions. 4. Nuanced findings from surveys and discussion can reframe the approach to neurodiversity from a medical model to a more comprehensive, asset-based, biopsychosocial model of support, fostering a cultural shift that accepts 'diversity' in all its manifestations, visible and hidden.

Keywords: neurodiversity, professional support, human considerations, workplace wellbeing

Procedia PDF Downloads 91
13 Design and Implementation of a Hardened Cryptographic Coprocessor with 128-bit RISC-V Core

Authors: Yashas Bedre Raghavendra, Pim Vullers

Abstract:

This study presents the design and implementation of an abstract cryptographic coprocessor, leveraging AMBA (Advanced Microcontroller Bus Architecture) protocols, namely APB (Advanced Peripheral Bus) and AHB (Advanced High-performance Bus), to enable seamless integration with the main CPU (central processing unit) and enhance the coprocessor's algorithm flexibility. The primary objective is to create a versatile coprocessor that can execute various cryptographic algorithms, including ECC (elliptic-curve cryptography), RSA (Rivest–Shamir–Adleman), and AES (Advanced Encryption Standard), while providing a robust and secure solution for modern secure embedded systems. To achieve this goal, the coprocessor is equipped with a tightly coupled memory (TCM) for rapid data access during cryptographic operations. The TCM is placed within the coprocessor, ensuring quick retrieval of critical data and optimizing overall performance. Additionally, the program memory is positioned outside the coprocessor, allowing for easy updates and reconfiguration, which enhances adaptability to future algorithm implementations. Direct links are employed instead of DMA (direct memory access) for data transfer, ensuring faster communication and reducing complexity. The AMBA-based communication architecture facilitates seamless interaction between the coprocessor and the main CPU, streamlining data flow and ensuring efficient utilization of system resources. The abstract nature of the coprocessor allows for easy integration of new cryptographic algorithms in the future. As the security landscape continues to evolve, the coprocessor can adapt and incorporate emerging algorithms, making it a future-proof solution for cryptographic processing. Furthermore, this study explores the addition of custom instructions to the RISC-V ISE (Instruction Set Extension) to enhance cryptographic operations. By incorporating custom instructions specifically tailored for cryptographic algorithms, the coprocessor achieves higher efficiency and fewer cycles per instruction (CPI) compared to traditional instruction sets. The adoption of the RISC-V 128-bit architecture significantly reduces the total number of instructions required for complex cryptographic tasks, leading to faster execution times and improved overall performance. Comparisons are made with 32-bit and 64-bit architectures, highlighting the advantages of the 128-bit architecture in terms of reduced instruction count and CPI. In conclusion, the abstract cryptographic coprocessor presented in this study offers significant advantages in terms of algorithm flexibility, security, and integration with the main CPU. By leveraging AMBA protocols and employing direct links for data transfer, the coprocessor achieves high-performance cryptographic operations without compromising system efficiency. With its TCM and external program memory, the coprocessor is capable of securely executing a wide range of cryptographic algorithms. This versatility and adaptability, coupled with the benefits of custom instructions and the 128-bit architecture, make it an invaluable asset for secure embedded systems, meeting the demands of modern cryptographic applications.
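
As a rough back-of-envelope illustration of the instruction-count argument (our sketch, not the paper's benchmark), consider how many register-width XOR operations are needed to combine two 128-bit AES blocks at different datapath widths:

```python
# Back-of-envelope sketch (not the paper's benchmark): XORing two 128-bit
# AES blocks takes ceil(128 / w) register-width XOR instructions on a
# w-bit datapath, one source of the instruction-count reduction reported
# for the 128-bit core.
from math import ceil

BLOCK_BITS = 128  # one AES state or round-key block

for width in (32, 64, 128):
    ops = ceil(BLOCK_BITS / width)
    print(f"{width:>3}-bit datapath: {ops} XOR instruction(s) per block")
```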

Keywords: abstract cryptographic coprocessor, AMBA protocols, ECC, RSA, AES, tightly coupled memory, secure embedded systems, RISC-V ISE, custom instructions, instruction count, cycles per instruction

Procedia PDF Downloads 70
12 Integrating Data Mining within a Strategic Knowledge Management Framework: A Platform for Sustainable Competitive Advantage within the Australian Minerals and Metals Mining Sector

Authors: Sanaz Moayer, Fang Huang, Scott Gardner

Abstract:

In the highly leveraged business world of today, an organisation's success depends on how it can manage and organise its traditional and intangible assets. In the knowledge-based economy, knowledge as a valuable asset gives enduring capability to firms competing in rapidly shifting global markets. It can be argued that the ability to create unique knowledge assets by configuring ICT and human capabilities will be a defining factor for international competitive advantage in the mid-21st century. The concept of KM is recognised in the strategy literature, and increasingly by senior decision-makers (particularly in large firms, which can achieve scalable benefits), as an important vehicle for stimulating innovation and organisational performance in the knowledge economy. This thinking has been evident in professional services and other knowledge-intensive industries for over a decade. It highlights the importance of social capital and the value of the intellectual capital embedded in social and professional networks, complementing the traditional focus on the creation of intellectual property assets. Despite the growing interest in KM within professional services, there has been limited discussion in relation to multinational resource-based industries such as mining and petroleum, where the focus has been principally on global portfolio optimization with economies of scale, process efficiencies and cost reduction. The Australian minerals and metals mining industry, although traditionally viewed as capital intensive, employs a significant number of knowledge workers, notably engineers, geologists, highly skilled technicians, and legal, finance, accounting, ICT and contracts specialists working in projects or functions, representing potential knowledge silos within the organisation. This silo effect arguably inhibits knowledge sharing and retention by disaggregating corporate memory, with increased operational and project continuity risk. It may also limit the potential for process, product, and service innovation. In this paper, the strategic application of knowledge management incorporating contemporary ICT platforms and data mining practices is explored as an important enabler for knowledge discovery, reduction of risk, and retention of corporate knowledge in resource-based industries. With reference to the relevant strategy, management, and information systems literature, this paper highlights possible connections (currently undergoing empirical testing) between a Strategic Knowledge Management (SKM) framework incorporating supportive Data Mining (DM) practices and competitive advantage for multinational firms operating within the Australian resource sector. We also propose, based on a review of the relevant literature, that more effective management of soft and hard systems knowledge is crucial for major Australian firms in all sectors seeking to improve organisational performance through the human and technological capability captured in organisational networks.

Keywords: competitive advantage, data mining, mining organisation, strategic knowledge management

Procedia PDF Downloads 415
11 Understanding New Zealand’s 19th Century Timber Churches: Techniques in Extracting and Applying Underlying Procedural Rules

Authors: Samuel McLennan, Tane Moleta, Andre Brown, Marc Aurel Schnabel

Abstract:

The development of ecclesiastical buildings within New Zealand has produced some unique design characteristics that take influence from both international styles and local building methods. This research looks at how procedural modelling can be used to define such common characteristics and understand how they are shared and developed within different examples of a similar architectural style. This will be achieved through the creation of procedural digital reconstructions of the various timber Gothic churches built during the 19th century in the city of Wellington, New Zealand. 'Procedural modelling' is a digital modelling technique that has been growing in popularity, particularly within the game and film industries, as well as other fields such as industrial design and architecture. Such a design method entails the creation of a parametric 'ruleset' that can be easily adjusted to produce many variations of geometry, rather than the single geometry typically found in traditional CAD software. Key precedents within this area of digital heritage include work by Haegler, Müller, and Gool; Nicholas Webb and Andre Brown; and, most notably, Mark Burry. What these precedents all share is that the forms of the reconstructed architecture have been generated using computational rules and an understanding of the architects' geometric reasoning. This is also true within this research, as Gothic architecture makes use of only a select range of forms (such as the pointed arch) that can be accurately replicated using the same standard geometric techniques originally used by the architect. The methodology of this research involves first establishing a sample group of similar buildings, documenting the existing samples, researching any lost samples to find evidence such as architectural plans, photos, and written descriptions, and then culminating all the findings into a single 3D procedural asset within the software 'Houdini'. The end result will be an adjustable digital model that contains all the architectural components of the sample group, such as the various naves, buttresses, and windows. These components can then be selected and arranged to create visualisations of the sample group. Because timber Gothic churches in New Zealand share many details between designs, the created collection of architectural components can also be used to approximate similar designs not included in the sample group, such as designs found beyond the Wellington region. This creates an initial library of architectural components that can be further expanded to encapsulate as wide a sample size as desired. Such a methodology greatly improves upon the efficiency and adjustability of digital modelling compared to current practices found in digital heritage reconstruction. It also gives greater accuracy to speculative design, as a lack of evidence for lost structures can be approximated using components from still-existing or better-documented examples. This research will also bring attention to the cultural significance these types of buildings have within the local area, addressing the public's general unawareness of architectural history that is identified in the Wellington-based research 'Moving Images in Digital Heritage' by Serdar Aydin et al.
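
To make the idea of a parametric ruleset concrete, the following minimal sketch (ours, not the authors' Houdini asset) encodes the classic two-centred pointed arch as a rule: two circular arcs whose centres are offset along the springing line, with the offset parameter controlling how sharp the Gothic point becomes:

```python
# Minimal sketch of a procedural rule (not the authors' Houdini asset):
# a two-centred Gothic pointed arch built from two mirrored circular arcs.
import math

def pointed_arch(span, sharpness=0.5, segments=24):
    """Return (x, y) vertices along a two-centred pointed arch.

    span      -- distance between the two springing points
    sharpness -- 0.0 gives a semicircular arch; larger values give a
                 steeper point (arc-centre offset as a fraction of span/2)
    """
    half = span / 2.0
    c = sharpness * half             # each arc centre's offset from the axis
    r = half + c                     # radius through springing point and apex
    theta_apex = math.acos(-c / r)   # arc angle at which x reaches 0
    thetas = [math.pi - i * (math.pi - theta_apex) / segments
              for i in range(segments + 1)]
    left = [(c + r * math.cos(t), r * math.sin(t)) for t in thetas]
    right = [(-x, y) for x, y in reversed(left[:-1])]  # mirror, skip apex
    return left + right

outline = pointed_arch(span=4.0, sharpness=0.75)  # steeper than sharpness=0.25
```

In a package such as Houdini, the same span and sharpness values would simply be exposed as sliders on the digital asset, which is what makes the reconstruction adjustable rather than fixed.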

Keywords: digital forensics, digital heritage, gothic architecture, Houdini, procedural modelling

Procedia PDF Downloads 131
10 SPARK: An Open-Source Knowledge Discovery Platform That Leverages Non-Relational Databases and Massively Parallel Computational Power for Heterogeneous Genomic Datasets

Authors: Thilina Ranaweera, Enes Makalic, John L. Hopper, Adrian Bickerstaffe

Abstract:

Data are the primary asset of biomedical researchers, and the engine for both discovery and research translation. As the volume and complexity of research datasets increase, especially with new technologies such as large single nucleotide polymorphism (SNP) chips, so too does the requirement for software to manage, process and analyze the data. Researchers often need to execute complicated queries and conduct complex analyses of large-scale datasets. Existing tools for analyzing such data, and other types of high-dimensional data, unfortunately suffer from one or more major problems. They typically require a high level of computing expertise, are too simplistic (i.e., do not fit realistic models that allow for complex interactions), are limited by computing power, do not exploit the computing power of large-scale parallel architectures (e.g., supercomputers, GPU clusters, etc.), or are limited in the types of analysis available, compounded by the fact that integrating new analysis methods is not straightforward. Solutions to these problems, such as those developed and implemented on parallel architectures, are currently available to only a relatively small portion of medical researchers with the necessary access and know-how. The past decade has seen a rapid expansion of data management systems for the medical domain. Much attention has been given to systems that manage phenotype datasets generated by medical studies. The introduction of heterogeneous genomic data for the research subjects residing in these systems has highlighted the need for substantial improvements in software architecture. To address this problem, we have developed SPARK, an enabling and translational system for medical research that leverages existing high-performance computing resources and analysis techniques currently available or under development. It builds these into The Ark, an open-source web-based system designed to manage medical data. SPARK provides a next-generation biomedical data management solution based upon a novel micro-service architecture and Big Data technologies. The system also serves to demonstrate the applicability of micro-service architectures to the development of high-performance computing applications. When applied to high-dimensional medical datasets such as genomic data, relational data management approaches with normalized data structures suffer from unfeasibly high execution times for basic operations such as insert (e.g., importing a GWAS dataset) and for the queries typical of the genomics research domain. SPARK resolves these problems by incorporating the non-relational NoSQL databases that have been driven by the emergence of Big Data. SPARK provides researchers across the world with user-friendly access to state-of-the-art data management and analysis tools while eliminating the need for high-level informatics and programming skills. The system will benefit health and medical research by eliminating the burden of large-scale data management, querying, cleaning, and analysis. SPARK represents a major advancement in genome research technologies, vastly reducing the burden of working with genomic datasets and enabling cutting-edge analysis approaches that have previously been out of reach for many medical researchers.
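
As a toy illustration of the insert-cost problem described above (a sketch under our own assumptions, not SPARK's actual schema), compare a normalized row-per-genotype layout with a document-per-subject layout:

```python
# Toy sketch (not SPARK's actual schema): importing a GWAS dataset of
# S subjects x M SNPs costs S*M inserts in a normalized relational layout,
# but only S writes when each subject's calls are bundled into one document.
subjects, snps = 1_000, 500_000

relational_inserts = subjects * snps   # one row per (subject, SNP) call
document_inserts = subjects            # one document per subject

print(f"relational inserts: {relational_inserts:,}")  # 500,000,000
print(f"document inserts:   {document_inserts:,}")    # 1,000

# A single subject document bundles all calls into one write:
subject_doc = {
    "subject_id": "S0001",
    "genotypes": {"rs12345": "AA", "rs67890": "AG"},  # ...up to M entries
}
```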

Keywords: biomedical research, genomics, information systems, software

Procedia PDF Downloads 270
9 The Impact of the Macro-Level: Organizational Communication in Undergraduate Medical Education

Authors: Julie M. Novak, Simone K. Brennan, Lacey Brim

Abstract:

Undergraduate medical education (UME) curricula notably address micro-level communications (e.g., patient-provider, intercultural, inter-professional), yet frequently under-examine the role and impact of organizational communication, a more macro level. Organizational communication, however, functions as a foundation, operates through the systemic structures of an organization, and thereby serves as hidden curriculum that influences learning experiences and outcomes. Yet little available research fully examines how students experience organizational communication while in medical school, and extant literature and best practices provide insufficient guidance for UME programs in particular. The purpose of this study was to map and examine the current organizational communication systems and processes in a UME program. Employing a phenomenology-grounded and participatory approach, this study sought to understand the organizational communication system from medical students' perspective. The research team consisted of a core team and 13 medical student co-investigators. The research employed multiple methods, including focus groups, individual interviews, and two surveys (one reflective of the focus group questions, the other requesting students to submit 'examples' of communications). To provide context for student responses, nonstudent participants (faculty, administrators, and staff) were sampled, as they too express concerns about communication. Over 400 students across all cohorts and 17 nonstudents participated. Data were iteratively analyzed and checked for triangulation. Findings reveal the complex nature of organizational communication and student-oriented communications. They reveal program-impactful strengths, weaknesses, gaps, and tensions, and speak to the role of organizational communication practices in influencing both climate and culture. With regard to communications, students receive multiple, simultaneous communications from multiple sources/channels, both formal (e.g., official email) and informal (e.g., social media). Students identified organizational strengths, including the desire to improve student voice and message frequency. They also identified weaknesses related to over-reliance on emails, numerous platforms with inconsistent utilization, incorrect information, insufficient transparency, assessment/input fatigue, tacit expectations, scheduling/deadlines, responsiveness, and mental health confidentiality concerns. Moreover, they noted gaps related to lack of coordination/organization, ambiguous point-persons, student 'voice-only' input, open communication loops, lack of core centralization and consistency, and mental health bridges. Findings also revealed organizational identity and cultural characteristics as impactful on the medical school experience. Cultural characteristics included program size, diversity, urban setting, student organizations, community engagement, crisis framing, learning for exams, inefficient bureaucracy, and professionalism. Moreover, students identified system structures that do not always leverage cultural strengths or reduce cultural problematics. Based on the results, opportunities for productive change are identified. These include leadership visibly supporting and enacting overall organizational narratives, making greater efforts to consistently 'close the loop', regularly sharing how student input effects change, employing strategies of crisis communication more often, strengthening communication infrastructure, ensuring structures facilitate effective operations and change efforts, and highlighting change efforts in informational communication. Organizational communication and communications are not soft skills, nor of secondary concern within organizations; rather, they are foundational in nature and serve to educate and inform all stakeholders. As primary stakeholders, students and their success directly affect the accomplishment of organizational goals. This study demonstrates how inquiry into how students navigate their educational experience extends research-based knowledge and provides actionable knowledge for the improvement of organizational operations in UME.

Keywords: medical education programs, organizational communication, participatory research, qualitative mixed methods

Procedia PDF Downloads 115
8 Developing a Performance Measurement System for Arts-Based Initiatives: Action Research on Italian Corporate Museums

Authors: Eleonora Carloni, Michela Arnaboldi

Abstract:

In academia, the investigation of the relationship between cultural heritage and corporations is ubiquitous across several fields of study. In practice, corporations are increasingly integrating arts and cultural heritage into their strategies for disparate benefits, such as fostering customers' purchase intentions with authentic and aesthetic experiences, improving their reputation with local communities, and motivating employees through creative thinking. There are diverse forms under which corporations set up these artistic interventions, from sponsorships to arts-based training centres for employees, but scholars agree that the maximum expression of this cultural trend is the corporate museum, growing in both number and relevance. Corporate museums are museum-like settings hosting artworks of corporations' history and interests. In academia they have been described as strategic assets and associated with diverse uses for corporations' benefit, from places for the preservation of cultural heritage to tools for public relations and cultural flagship stores. Previous studies have thus extensively but fragmentarily studied the diverse benefits corporate museums bring to corporations, without a comprehensive approach and without addressing how to evaluate and report corporate museums' performance. Stepping forward, the present study aims to investigate: 1) what key performance measures corporate museums need to report to the associated corporations; 2) how the key performance measures are reported to the concerned corporations. This direction of study is not only suggested as a future direction in academia but also has a solid basis in practice, answering corporate museum directors' need to account for their museums' activities to the concerned corporation. Coherently, at an empirical level the study relies on the action research method, whose distinctive feature is to develop practical knowledge through a participatory process. This paper indeed relies on the experience of a collaborative project between the researchers and a set of corporate museums in Italy, aimed at co-developing a performance measurement system. The project involved two steps: a first step, in which the researchers derived the potential performance measures from the literature along with exploratory interviews; and a second step, in which the researchers supported the pool of corporate museum directors in co-developing a set of key performance indicators for reporting. Preliminary empirical findings show that while scholars insist on corporate museums' capability to develop networking relations, directors insist on the role of museums as internal suppliers of knowledge for innovation goals. Moreover, directors stress the museums' cultural mission and outcomes as potential benefits for the corporation, recommending that both cultural and business measures be included in the final tool. In addition, they give careful attention to wording in humanistic terms while struggling to express all measures in economic terms. The paper aims to contribute to the corporate museum and, more broadly, arts-based initiative literature in two directions. First, it elaborates key performance measures, with related indicators, for reporting on cultural initiatives for corporations. Second, it provides evidence of the challenges and practices involved in reporting on these initiatives, arising from the tensions created by the co-existence of diverse perspectives, namely the arts and business worlds.

Keywords: arts-based initiative, corporate museum, hybrid organization, performance measurement

Procedia PDF Downloads 176
7 Urban Flood Resilience Comprehensive Assessment of "720" Rainstorm in Zhengzhou Based on Multiple Factors

Authors: Meiyan Gao, Zongmin Wang, Haibo Yang, Qiuhua Liang

Abstract:

Under the background of global climate change and rapid urbanization, the frequency of climate disasters such as extreme precipitation in cities around the world is gradually increasing. In this paper, the Hi-PIMS model is used to simulate the "720" flood in Zhengzhou, the urban flood process is divided into stages, and the corresponding phases of flood resilience are determined. Flood resilience curves under the influence of multiple factors were derived, and urban flood resilience was evaluated by combining the results of these curves. The flood resilience of each urban unit grid was evaluated on the basis of economy, population, road network, hospital distribution and land use type. First, rainfall data from meteorological stations near Zhengzhou and remote sensing rainfall data from July 17 to 22, 2021 were collected, and the Kriging interpolation method was used to expand the rainfall data over Zhengzhou. From these rainfall data, the flood processes generated by four rainfall events in Zhengzhou were reproduced. Based on the resulting inundation extents and depths in different areas, the flood process was divided into four stages, absorption, resistance, overload and recovery, with reference to the once-in-50-years rainfall standard. At the same time, based on slope, GDP, population, hospital-affected area, land use type, road network density and other factors, resilience curves were applied to evaluate the urban flood resilience of different regional units, and the differences among the flood processes of different precipitation events within the "720" rainstorm in Zhengzhou were analyzed. Faced with a rainstorm exceeding the once-in-1,000-years level, most areas quickly enter the overload stage. The influence of each factor differs across areas: areas with ramps or higher terrain have better resilience and restore normal social order faster, that is, their recovery stage is shorter, while low-lying areas or special terrain, such as tunnels, enter the overload stage faster under heavy rainfall. Consequently, high levels of flood protection, water level warning systems and faster emergency response are needed in areas with low resilience and high risk. Building density in built-up areas, population in densely populated areas and road network density all have a negative impact on urban flood resistance, while the positive impact of slope on flood resilience is very clear. While hospitals provide positive effects through medical treatment, they also concentrate population and assets, which becomes a liability when floods occur; a separate comparison of the unit grids containing hospitals shows that their resilience is low in flood events. Therefore, in addition to improving cities' flood resistance capacity, reasonable planning can also increase their flood response capacity. Changes in these influencing factors can further improve urban flood resilience, for example raising design standards, providing temporary water storage areas for floods, training emergency personnel for faster response, and adjusting emergency support equipment.
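
One common way to quantify resilience from such a staged flood process is to integrate a system-performance curve over the event. The sketch below illustrates the idea with an invented performance series whose segments loosely correspond to the absorption, resistance, overload and recovery stages; it is schematic, not the authors' Hi-PIMS workflow:

```python
# Schematic sketch (invented series, not Hi-PIMS output): resilience as the
# normalised area under a performance curve P(t) across a flood event.
import numpy as np

t = np.linspace(0.0, 48.0, 49)        # hours since rainfall onset
P = np.ones_like(t)                   # performance, 1.0 = normal operation
deg = (t > 6) & (t <= 18)             # degradation (absorption/resistance)
P[deg] = 1.0 - (t[deg] - 6.0) / 24.0
P[(t > 18) & (t <= 36)] = 0.5         # overload plateau
rec = t > 36                          # recovery ramp back to normal
P[rec] = 0.5 + (t[rec] - 36.0) / 24.0
P = np.clip(P, 0.0, 1.0)

# Trapezoidal area under P(t), normalised by event duration -> value in [0, 1]
area = float(np.sum((P[1:] + P[:-1]) / 2.0 * np.diff(t)))
resilience = area / (t[-1] - t[0])
print(f"resilience index: {resilience:.2f}")  # closer to 1 = more resilient
```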

Keywords: urban flood resilience, resilience assessment, hydrodynamic model, resilience curve

Procedia PDF Downloads 40
6 The Strategic Importance of Technology in the International Production: Beyond the Global Value Chains Approach

Authors: Marcelo Pereira Introini

Abstract:

The global value chains (GVC) approach contributes to a better understanding of the organization of international production amid globalization's second unbundling, from the 1970s on. Mainly due to the tools that help in understanding the importance of critical competences, technological capabilities, and the functions performed by each player, GVC research has flourished in recent years, rooted in discussions of the possibilities of integration and repositioning along regional and global value chains. In this context, part of the literature endorsed the more optimistic view that engaging in fragmented production networks could represent learning opportunities for developing countries' firms, since the relationship with transnational corporations could allow them to build skills and competences. Increasing recognition that GVCs are based on asymmetric power relations provided another view of the benefits, costs, and development possibilities, though. Since leading companies tend to restrict the replication of their technologies and capabilities by their suppliers, alternative strategies beyond functional specialization, seen as a way to integrate value chains, began to be broadly highlighted. This paper organizes a coherent narrative about the shortcomings of the GVC analytical framework while recognizing its multidimensional contributions and recent developments. We adopt two different and complementary perspectives to explore the idea of integration in international production. On one hand, we emphasize obstacles beyond production components, analyzing the role played by intangible assets and intellectual property regimes. On the other hand, we consider the importance of domestic production and innovation systems for technological development. In order to provide a deeper understanding of the restrictions on technological learning by developing countries' firms, we first build on the notion of intellectual monopoly to analyze how flagship companies can prevent subordinated firms from improving their positions in fragmented production networks. Drawing on intellectual property protection regimes, we discuss the increasing asymmetries between these players and the decreasing access of some of them to strategic intangible assets. Second, we debate the role of productive-technological ecosystems and of interactive and systemic technological development processes, as concepts of the Innovation Systems approach. Supporting the idea not only that endogenous advantages are important for the international competitiveness of developing countries' firms, but also that the building of these advantages can itself be a source of technological learning, we focus on local efforts as a crucial element that cannot be replaced by technology imported from abroad. Finally, the paper contributes to the discussion of technological development as a two-dimensional dynamic. If GVC analysis tends to underline a company-based perspective, stressing the learning opportunities associated with GVC integration, the historical involvement of national states brings up the debate about technology as a central aspect of interstate disputes. In this sense, technology is seen as part of military modernization before also being used in civil contexts, which presupposes its role in national security and productive autonomy strategies. From this outlook, it is important to consider technology as an asset that, incorporated in sophisticated machinery, can be the target of state policies, beyond the protection provided by intellectual property regimes, such as export controls and inward-investment restrictions.

Keywords: global value chains, innovation systems, intellectual monopoly, technological development

Procedia PDF Downloads 81
5 Disabled Graduate Students’ Experiences and Vision of Change for Higher Education: A Participatory Action Research Study

Authors: Emily Simone Doffing, Danielle Kohfeldt

Abstract:

Disabled students are underrepresented in graduate-level degree enrollment and completion, and there is limited research on disabled students' progression during the pandemic. Disabled graduate students (DGS) face unique interpersonal and institutional barriers, yet limited research explores these barriers, buffering facilitators, and aids to academic persistence. This study adopts an asset-based, embodied-disability approach using the critical pedagogy theoretical framework instead of the deficit research approach. The Participatory Action Research (PAR) paradigm, the critical pedagogy theoretical framework, and emancipatory disability research share the same purpose: creating a socially just world through reciprocal learning. This study is one of few, if not the first, to center solely on DGS' lived understanding using a PAR epistemology. With a PAR paradigm, participants and investigators work democratically as a research team at every stage of the research process. PAR has individual and systemic outcomes. PAR lessens the researcher-participant power gap and elevates a marginalized community's knowledge as expertise for local change. PAR and critical pedagogy work toward enriching everyone involved with empowerment, civic engagement, knowledge proliferation, socio-cultural reflection, skills development, and active meaning-making. The PAR process unveils the tensions between disability and graduate school in policy and practice during the pandemic. Likewise, institutional and ideological tensions influence the PAR process. This project is recruiting 10 DGS until September through purposive and snowball sampling. DGS will collectively practice praxis during four monthly focus groups in the fall 2023 semester. Participant researchers can attend a focus group or an interview, both with field notes. September will be our orientation and first monthly meeting; it will include access-needs check-ins, ice breakers, consent form review, a group agreement, an introduction to PAR, a research ethics discussion, research goals, and potential research topics. The October and November meetings will be devoted to dialogues about lived experiences during our collaborative data collection. Our sessions can be semi-structured with "framing questions," which will be revised together. Field notes capture observations that cannot be recorded through audio. December will focus on local social action planning and dissemination. Finally, in January, there will be a post-study focus group for students' reflections on their experiences of PAR. Iterative analysis methods include transcribed audio, reflexivity, memos, thematic coding, analytic triangulation, and member checking. This research follows qualitative rigor and quality criteria: credibility, transferability, confirmability, and psychopolitical validity. Results include potential tension points, social action, individual outcomes, and recommendations for conducting PAR. Tension points have three components: dubious practices, contestable knowledge, and conflict. The dissemination of PAR recommendations will aid and encourage researchers in conducting future PAR projects with the disabled community. Identified stakeholders will be informed of DGS' insider knowledge to drive social sustainability.

Keywords: participatory action research, graduate school, disability, higher education

Procedia PDF Downloads 62
4 Internet of Assets: A Blockchain-Inspired Academic Program

Authors: Benjamin Arazi

Abstract:

Blockchain is the technology behind cryptocurrencies like Bitcoin. It revolutionizes the meaning of trust in the sense of offering total reliability without relying on any central entity that controls or supervises the system. The Wall Street Journal states: “Blockchain Marks the Next Step in the Internet’s Evolution”. Blockchain was listed as #1 in the LinkedIn Learning Blog's list of “most in-demand hard skills needed in 2020”. As stated there: “Blockchain’s novel way to store, validate, authorize, and move data across the internet has evolved to securely store and send any digital asset”. GSMA, a leading organization of mobile communications operators, declared that “Blockchain has the potential to be for value what the Internet has been for information”. Motivated by these seminal observations, this paper presents the foundations of a Blockchain-based “Internet of Assets” academic program that joins under one roof leading application areas characterized by the transfer of assets over communication lines. Two such areas, which are pillars of our economy, are Fintech (financial technology) and mobile communications services. The next application in line is healthcare. These challenges are met by drawing on the extensive professional literature available. Blockchain-based asset communication extends the principle of Bitcoin, starting with the basic question: if digital money that travels across the universe can ‘prove its own validity’, can this principle be applied to other digital content? A groundbreaking positive answer led to the concept of the “smart contract” and consequently to DLT (distributed ledger technology), where the word ‘distributed’ refers to the non-existence of reliable central entities or trusted third parties. The terms Blockchain and DLT are frequently used interchangeably in various application areas. The World Bank Group has compiled comprehensive reports analyzing the contribution of DLT/Blockchain to Fintech. The European Central Bank and the Bank of Japan are engaged in Project Stella, “Balancing confidentiality and auditability in a distributed ledger environment”. 130 DLT/Blockchain-focused Fintech startups are now operating in Switzerland. Blockchain's impact on mobile communications services is treated in detail by leading organizations. The TM Forum is a global industry association in the telecom industry, with over 850 member companies, mainly mobile operators, that generate US$2 trillion in revenue and serve five billion customers across 180 countries. From their perspective: “Blockchain is considered one of the digital economy’s most disruptive technologies”. Samples of Blockchain contributions to Fintech (taken from a World Bank document): decentralization and disintermediation; greater transparency and easier auditability; automation and programmability; immutability and verifiability; gains in speed and efficiency; cost reductions; enhanced cyber security resilience. Samples of Blockchain contributions to the telco industry: establishing identity verification; records of transactions for easy cost settlement; automatic triggering of roaming contracts, which enables near-instantaneous charging and a reduction in roaming fraud; decentralized roaming agreements; settling accounts for costs incurred in accordance with agreed tariffs. This clearly demonstrates an academic education structure in which the fundamental technologies are studied in classes together with these two application areas, while advanced courses treating specific implementations follow separately, all under the roof of the “Internet of Assets”.
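
As a classroom-style illustration of the 'prove its own validity' principle described above (a toy sketch, not an artifact of the proposed curriculum), each block commits to its predecessor's hash, so tampering anywhere invalidates every later link:

```python
# Toy illustration of the chained-hash idea behind Bitcoin-style validity
# (classroom sketch, not part of the proposed curriculum): each block
# commits to the hash of its predecessor, so altering any block breaks
# every block after it.
import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(assets):
    chain, prev = [], "0" * 64                 # genesis predecessor
    for i, asset in enumerate(assets):
        block = {"index": i, "asset": asset, "prev_hash": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def is_valid(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["10 coins to Alice", "3 coins to Bob"])
print(is_valid(chain))                      # True
chain[0]["asset"] = "999 coins to Mallory"  # tamper with an early block
print(is_valid(chain))                      # False: the chain detects it
```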

Keywords: blockchain, education, financial technology, mobile telecommunications services

Procedia PDF Downloads 180
3 Mining and Ecological Events and Their Impact on the Genesis and Geo-Distribution of Ebola Outbreaks in Africa

Authors: E Tambo, O. O. Olalubi, E. C. Ugwu, J. Y. Ngogang

Abstract:

Despite the World Health Organization (WHO) declaration of an international health emergency of concern, the status quo of responses and efforts to stem the worst-recorded Ebola outbreak remains precariously inadequate in most of the affected West African countries. The mining of natural resources has been shown to play a key role in both motivating and fuelling the ethnic, civil and armed conflicts that have plagued a number of African countries over the last decade. Revenues from the exploitation of natural resources are used not only to sustain national economies but also to sustain armies, enrich individuals and build political support. Little is documented on the impact of mining and ecological events on the emergence and geographical distribution of Ebola in Africa over time and space. We aimed to provide a better understanding of the interconnectedness of natural resource mining, resource management, and mining-related conflict and post-conflict conditions in relation to Ebola outbreaks, and of how the wealth generated from abundant natural resources could be better managed to promote research and development: strengthening environmental, socioeconomic and health systems sustainability, outbreak surveillance and response systems for prevention and control, early warning and alert, durable peace and sustainable development, rather than fuelling conflicts, resurgence and emerging disease epidemics, from community and national/regional perspectives. Our results provide the first systematic assessment of the impact of major mineral-conflict events, diffusing over space and time, and of mining activities on the genesis and geo-distribution of nine Ebola outbreaks in affected countries across Africa. We demonstrate how, where and when mining activities in Africa increase ecological degradation and local-level conflict, and then spread violence across territory and time by enhancing the financial capacities of fighting groups and ethnic factions, contributing to disease onset. We also consider the process of developing minimum standards for natural resource governance; improving governmental and civil society capacity for natural resource management, including the strengthening of monitoring and enforcement mechanisms; and understanding the role of post-mining and post-conflict community or national reconstruction and rehabilitation programmes in strengthening or developing community health systems and regulatory mechanisms. In addition, the quest for control over these resources, and illegal mining incursions into forest landscapes, increased environmental and ecological instability, displacement and disequilibrium, thereby affecting the intensity and duration of mining conflicts and wars and the episodes of Ebola outbreaks over time and space. We highlight the key findings and lessons learnt in promoting country- or community-led processes that transform natural resource wealth from a peace liability into a peace asset. Advocacy, and the facilitation of intergovernmental deliberations on the critical issues and challenges affecting African communities, are imperative for transforming the exploitation of natural resources from a peace liability into an asset for outbreak prevention and control. Realising the vital role of mining in increasing government revenues and in distributing wealth and health equitably to all stakeholders, in particular local communities, requires coordination, cooperative leadership and partnership in fostering sustainable development initiatives, extending from the mining context to outbreak and other infectious disease surveillance and response systems for prevention and control, together with judicious resource management.

Keywords: mining, mining conflicts, mines, ecological, Ebola, outbreak, mining companies, miners, impact

Procedia PDF Downloads 302
2 Critical Factors for Successful Adoption of Land Value Capture Mechanisms – An Exploratory Study Applied to Indian Metro Rail Context

Authors: Anjula Negi, Sanjay Gupta

Abstract:

The paradigms studied reveal inadequacies of financial resources, whether to finance metro rail construction, to meet operational revenues or to derive profits in the long term. Funding sustainability remains elusive for much-needed public transport modes, such as urban rail or metro rail, to be operated successfully. India has embarked upon a sustainable transport journey and has proposed metro rail systems countrywide. As an emerging economic leader, its fiscal constraints are paramount, and the land value capture (LVC) mechanism provides necessary support and innovation toward development. India's metro rail policy promotes multiple methods of financing, including private-sector investment and public-private partnership. The critical question that remains to be addressed is what factors can make such mechanisms work. Globally, many researchers regard urban rail as the future of mobility. In this study, the researchers deep-dive, by way of literature review and empirical assessment, into the factors that can lead to the adoption of LVC mechanisms. The adoption of LVC methods is understood to be at a nascent stage in India. Research posits numerous challenges faced by metro rail agencies in raising funding and in incremental value capture. Issues pertaining to land-based financing include, inter alia: long-term financing, inter-institutional coordination, economic/market suitability, dedicated metro funds, land ownership issues, a piecemeal approach to real estate development, and property development legal frameworks. The question under probe is what parameters can lead to success in the adoption of land value capture as a financing mechanism. This research provides insights into the key parameters crucial to the adoption of LVC in the context of Indian metro rail. The researchers studied the current forms of LVC mechanisms at various metro rails across the country. The study is significant because little research applicable to the Indian context is available on the adoption of LVC. Transit agencies, state governments, urban local bodies, policy makers and think tanks, academia, developers, funders, researchers and multilateral agencies may benefit from this research in taking LVC mechanisms forward in practice. The study deems it imperative to explore and understand the key parameters that impact the adoption of LVC. An extensive literature review and validation by experts working in the metro rail arena were undertaken to arrive at the parameters for the study. Stakeholder consultations were undertaken in the exploratory factor analysis (EFA) process for principal component extraction. Forty-three seasoned and specialised experts, representing various types of stakeholders, rated each parameter in a semi-structured questionnaire, with maximum-likelihood extraction applied. Empirical data were collected on the eighteen chosen parameters, and significant correlations were extracted for descriptive and inferential statistics. The study findings reveal the principal components to be: institutional governance framework, spatial planning features, legal frameworks, funding sustainability features and fiscal policy measures. In particular, the funding sustainability features highlight the sub-variables of beneficiaries paying and the use of multiple revenue options as drivers of success in LVC adoption. The researchers recommend incorporating these variables at an early stage in design and project structuring for success in the adoption of LVC. This, in turn, can improve the revenue sustainability of a public transport asset and help in making informed transport policy decisions.
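
For readers unfamiliar with the EFA step, the sketch below shows the shape of such an analysis in scikit-learn; the 43 x 18 response matrix mirrors the abstract's respondent and parameter counts, but the ratings are randomly generated placeholders, not the study's survey data:

```python
# Sketch of the EFA step with placeholder data (not the study's responses):
# 43 expert ratings on 18 parameters, reduced to the 5 latent components
# the abstract reports.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(43, 18)).astype(float)  # Likert 1-5 stand-in

X = StandardScaler().fit_transform(ratings)
fa = FactorAnalysis(n_components=5, rotation="varimax", random_state=0)
scores = fa.fit_transform(X)       # each respondent's score on the 5 factors

loadings = fa.components_.T        # 18 parameters x 5 factors
print(loadings.round(2))           # inspect which parameters load on which factor
```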

Keywords: exploratory factor analysis, land value capture mechanism, financing metro rails, revenue sustainability, transport policy

Procedia PDF Downloads 81
1 Hybrid GNN Based Machine Learning Forecasting Model For Industrial IoT Applications

Authors: Atish Bagchi, Siva Chandrasekaran

Abstract:

Background: According to World Bank national accounts data, the estimated global manufacturing value-added output in 2020 was 13.74 trillion USD. These manufacturing processes are monitored, modelled, and controlled by advanced, real-time, computer-based systems, e.g., Industrial IoT, PLC, SCADA, etc. These systems measure and manipulate a set of physical variables, e.g., temperature, pressure, etc. Despite the use of IoT, SCADA, etc., in manufacturing, studies suggest that unplanned downtime leads to economic losses of approximately 864 billion USD each year. Therefore, real-time, accurate detection, classification and prediction of machine behaviour are needed to minimise financial losses. Although vast literature exists on time-series data processing using machine learning, the challenges faced by the industries that lead to unplanned downtimes are: the current algorithms do not efficiently handle the high-volume streaming data from industrial IoT sensors and were tested on static and simulated datasets; while the existing algorithms can detect significant 'point' outliers, most do not handle contextual outliers (e.g., values within the normal range but occurring at an unexpected time of day) or subtle changes in machine behaviour; and machines are revamped periodically as part of planned maintenance programmes, which changes the assumptions on which the original AI models were created and trained. Aim: This research study aims to deliver a Graph Neural Network (GNN) based hybrid forecasting model that interfaces with the real-time machine control system and can detect and predict machine behaviour and behavioural changes (anomalies) in real time. This research will help manufacturing industries and utilities, e.g., water, electricity, etc., reduce unplanned downtimes and the consequential financial losses. Method: The data stored within a process control system, e.g., Industrial IoT or a Data Historian, are generally sampled during acquisition from the sensor (source) and when persisted in the Data Historian, to optimise storage and query performance. The sampling may inadvertently discard values that contain subtle aspects of behavioural changes in machines. This research proposes a hybrid forecasting and classification model that combines the expressive and extrapolation capability of GNNs, enhanced with estimates of entropy and spectral changes in the sampled data and additional temporal contexts, to reconstruct the likely temporal trajectory of machine behavioural changes. The proposed real-time model belongs to the deep learning category of machine learning and interfaces with the sensors directly or through a 'Process Data Historian', SCADA, etc., to perform forecasting and classification tasks. Results: The model was interfaced with a Data Historian holding time-series data from 4 flow sensors within a water treatment plant for 45 days. The recorded sampling interval for a sensor varied from 10 seconds to 30 minutes. Approximately 65% of the available data was used for training the model, 20% for validation, and the rest for testing. The model identified the anomalies within the water treatment plant and predicted the plant's performance. These results were compared with the data reported by the plant's SCADA-Historian system and the official data reported by the plant authorities. The model's accuracy was much higher (by 20%) than that reported by the SCADA-Historian system and matched the validated results declared by the plant auditors. Conclusions: The research demonstrates that a hybrid GNN-based approach enhanced with entropy calculation and spectral information can effectively detect and predict a machine's behavioural changes. The model can interface with a plant's process control system in real time to perform forecasting and classification tasks, aiding asset management engineers in operating their machines more efficiently and reducing unplanned downtimes. A series of trials is planned for this model in the future in other manufacturing industries.
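
As a small sketch of one ingredient the abstract names, the spectral-change signal (our illustration, not the authors' model), the code below computes spectral entropy over sliding windows of an invented sensor series; the entropy steps up after a subtle behaviour change even though the values stay within their normal range:

```python
# Small sketch of one ingredient the abstract names (not the authors'
# model): spectral entropy over sliding windows of a sensor series; a jump
# in entropy flags a change in machine behaviour even when the values stay
# within their normal range.
import numpy as np

def spectral_entropy(window):
    """Shannon entropy of the window's normalised power spectrum."""
    psd = np.abs(np.fft.rfft(window - window.mean())) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
t = np.arange(4096)
signal = np.sin(0.05 * t) + 0.1 * rng.standard_normal(t.size)
signal[2048:] += 0.3 * rng.standard_normal(2048)  # behaviour change: extra noise

win = 256
entropies = [spectral_entropy(signal[i:i + win])
             for i in range(0, signal.size - win + 1, win)]
print(np.round(entropies, 2))  # entropy steps up after the change point
```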

Keywords: GNN, entropy, anomaly detection, industrial time-series, AI, IoT, Industry 4.0, machine learning

Procedia PDF Downloads 150