Search results for: corporate operational complexity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3695

395 Digital Twins: Towards an Overarching Framework for the Built Environment

Authors: Astrid Bagireanu, Julio Bros-Williamson, Mila Duncheva, John Currie

Abstract:

Digital Twins (DTs) have entered the built environment from more established industries like aviation and manufacturing, although there has never been a common goal for utilising DTs at scale. Defined as the cyber-physical integration of data between an asset and its virtual counterpart, DTs have largely been discussed in the literature from an operational standpoint, centred on monitoring the performance of a built asset. However, this has never been translated into how DTs should be implemented in a project and what responsibilities each project stakeholder holds in the realisation of a DT. What is needed is an approach to translate these requirements into actionable DT dimensions. This paper presents a foundation for an overarching framework specific to the built environment. For the purposes of this research, the widely used UK Royal Institute of British Architects (RIBA) Plan of Work 2020 is used as a basis for itemising project stages. The RIBA Plan of Work consists of eight stages designed to inform the definition, briefing, design, coordination, construction, handover, and use of a built asset. Similar project stages are utilised in other countries; therefore, the recommendations from the interviews presented in this paper are applicable internationally. At the same time, no single mainstream software resource fully leverages DT capabilities. This ambiguity meets an unparalleled ambition from governments and industries worldwide to achieve a national grid of interconnected DTs. For the construction industry to access these benefits, it necessitates a defined starting point. This research aims to provide a comprehensive understanding of the potential applications and ramifications of DT in the context of the built environment. This paper is an integral part of a larger research project aimed at developing a conceptual framework for the Architecture, Engineering, and Construction (AEC) sector following a conventional project timeline. Therefore, this paper plays a pivotal role in providing practical insights and a tangible foundation for developing a stage-by-stage approach to assimilate the potential of DT within the built environment. First, the research focuses on a review of relevant literature, albeit acknowledging the inherent constraint of limited sources available. Secondly, a qualitative study compiling the views of 14 DT experts is presented, concluding with an inductive analysis of the interview findings, ultimately highlighting the barriers and strengths of DT in the context of framework development. As parallel developments aim to progress net-zero-centred design and improve project efficiencies across the built environment, the limited resources available to support DTs should be leveraged to propel the industry into its digitalisation era, in which AEC stakeholders have a fundamental role to play from the earliest stages of a project.

Keywords: digital twins, decision-making, design, net-zero, built environment

Procedia PDF Downloads 103
394 Application of Improved Semantic Communication Technology in Remote Sensing Data Transmission

Authors: Tingwei Shu, Dong Zhou, Chengjun Guo

Abstract:

Semantic communication is an emerging form of communication that realizes intelligent communication by extracting the semantic information of data at the source, transmitting it, and recovering the data at the receiving end. It can effectively solve the problem of data transmission in situations of large data volume, low SNR and restricted bandwidth. With the development of Deep Learning, semantic communication has further matured and is gradually applied in the fields of the Internet of Things, Unmanned Aerial Vehicle cluster communication, remote sensing scenarios, etc. We propose an improved semantic communication system for the situation where the data volume is huge and spectrum resources are limited during the transmission of remote sensing images. At the transmitting end, we need to extract the semantic information of remote sensing images, but there are some problems. The traditional semantic communication system based on a Convolutional Neural Network cannot take into account both the global and local semantic information of the image, which results in less-than-ideal image recovery at the receiving end. Therefore, we adopt an improved Vision-Transformer-based structure as the semantic encoder instead of the mainstream CNN-based one to extract the image semantic features. In this paper, we first perform pre-processing operations on remote sensing images to improve their resolution in order to obtain images with more semantic information. We use the wavelet transform to decompose the image into high-frequency and low-frequency components, perform bilinear interpolation on the high-frequency components and bicubic interpolation on the low-frequency components, and finally perform the inverse wavelet transform to obtain the preprocessed image. We adopt the improved Vision-Transformer structure as the semantic encoder to extract and transmit the semantic information of remote sensing images. The Vision-Transformer structure handles the huge data volume better, extracts better image semantic features, and uses a multi-layer self-attention mechanism to better capture the correlation between semantic features and reduce redundant features. Secondly, to improve the coding efficiency, we reduce the quadratic complexity of the self-attention mechanism to linear so as to improve the image data processing speed of the model. We conducted experimental simulations on the RSOD dataset and compared the designed system with a CNN-based semantic communication system and image coding methods such as BPG and JPEG to verify that the method can effectively alleviate the problem of excessive data volume and improve the performance of image data communication.
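The wavelet-based pre-processing step described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation; it assumes the PyWavelets (pywt) and OpenCV (cv2) libraries, a single-level Haar transform, and a 2x scale factor, all of which are assumptions made for the example.

```python
import numpy as np
import pywt
import cv2

def wavelet_upscale(gray: np.ndarray, scale: int = 2) -> np.ndarray:
    """Decompose, interpolate the sub-bands, then reconstruct at higher resolution."""
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(np.float32), 'haar')
    size = (cA.shape[1] * scale, cA.shape[0] * scale)   # (width, height) as cv2 expects
    resize = lambda band, interp: cv2.resize(band.astype(np.float32), size, interpolation=interp)
    cA_up = resize(cA, cv2.INTER_CUBIC)                 # low-frequency approximation: bicubic
    cH_up, cV_up, cD_up = (resize(b, cv2.INTER_LINEAR)  # high-frequency detail bands: bilinear
                           for b in (cH, cV, cD))
    return pywt.idwt2((cA_up, (cH_up, cV_up, cD_up)), 'haar')
```

The approximation band is upsampled with bicubic interpolation and the detail bands with bilinear interpolation, mirroring the split described in the abstract.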

Keywords: semantic communication, transformer, wavelet transform, data processing

Procedia PDF Downloads 68
393 A Brazilian Study Applied to the Regulatory Environmental Issues of Nanomaterials

Authors: Luciana S. Almeida

Abstract:

Nanotechnology has revolutionized the world of science and technology, bringing great expectations due to its potential for application in the most varied industrial sectors. However, the same characteristics that make nanoparticles interesting from the point of view of technological application may be undesirable when they are released into the environment. The small size of nanoparticles facilitates their diffusion and transport in the atmosphere, water, and soil, and eases their entry into, and accumulation in, living cells. The main objective of this study is to evaluate the environmental regulatory process for nanomaterials in the Brazilian scenario. Three specific objectives were outlined. The first is to carry out a global scientometric study, in a research platform, with the purpose of identifying the main lines of study of nanomaterials in the environmental area. The second is to verify, by means of a bibliographic review, how environmental agencies in other countries have been working on this issue. The third is to carry out an assessment of the Brazilian Nanotechnology Draft Law 6741/2013 with the state environmental agencies, with the aim of identifying the agencies' knowledge of the subject and the resources available in the country for implementing the policy. A questionnaire will be used as a tool for this evaluation to identify the operational elements and build indicators through the Environment of Evaluation Application, a computational tool developed for building questionnaires. Finally, the need to propose changes to the Draft Law of the National Nanotechnology Policy will be assessed. Initial studies, in relation to the first specific objective, have already identified that Brazil stands out in the production of scientific publications in the area of nanotechnology, although only a minority focus on environmental impacts. Regarding the general panorama of other countries, some findings have also emerged. The United States has included the nanoform of substances in an existing EPA (Environmental Protection Agency) program, the TSCA (Toxic Substances Control Act). The European Union issued a draft document amending Regulation 1907/2006 of the European Parliament and Council to cover the nanoform of substances. Both programs are based on the study and identification of environmental risks associated with nanomaterials, taking into consideration the product life cycle. In relation to Brazil, regarding the third specific objective, it is notable that the country does not have any regulations applicable to nanostructures, although there is a Draft Law in progress. In this document, it is possible to identify some requirements related to the environment, such as environmental inspection and licensing; industrial waste management; notification of accidents; and application of sanctions. However, it is not known whether these requirements are sufficient for the prevention of environmental impacts and whether national environmental agencies will know how to apply them correctly. This study intends to serve as a basis for future actions regarding environmental management applied to the use of nanotechnology in Brazil.

Keywords: environment, management, nanotechnology, politics

Procedia PDF Downloads 108
392 Safety Tolerance Zone for Driver-Vehicle-Environment Interactions under Challenging Conditions

Authors: Matjaž Šraml, Marko Renčelj, Tomaž Tollazzi, Chiara Gruden

Abstract:

Road safety is a worldwide issue with numerous and heterogeneous factors influencing it. On one side, the driver state, comprising distraction/inattention, fatigue, drowsiness, extreme emotions, and socio-cultural factors, highly affects road safety. On the other side, the vehicle state has an important role in mitigating (or not) the road risk. Finally, the road environment is still one of the main determinants of road safety, defining driving task complexity. At the same time, thanks to technological development, a lot of detailed data is easily available, creating opportunities for the detection of driver state, vehicle characteristics and road conditions and, consequently, for the design of ad hoc interventions aimed at improving driver performance, increasing awareness and mitigating road risks. This is the challenge faced by the i-DREAMS project. i-DREAMS, which stands for a smart Driver and Road Environment Assessment and Monitoring System, is a 3-year project funded by the European Union's Horizon 2020 research and innovation program. It aims to set up a platform to define, develop, test and validate a 'Safety Tolerance Zone' to prevent drivers from getting too close to the boundaries of unsafe operation by mitigating risks in real-time and after the trip. After the definition and development of the Safety Tolerance Zone concept and its concretization in an advanced driver-assistance system (ADAS) platform, the system was first tested for 2 months in a driving simulator environment in 5 different countries. After that, naturalistic driving studies started for a 10-month period (comprising a 1-month pilot study, a 3-month baseline study and a 6-month study implementing interventions). Currently, the project team has approved a common evaluation approach and is developing the assessment of the usage and outcomes of the i-DREAMS system, which is yielding positive insights. The i-DREAMS consortium consists of 13 partners: 7 engineering universities and research groups, 4 industry partners and 2 partners (the European Transport Safety Council, ETSC, and POLIS, cities and regions for transport innovation) closely linked to transport safety stakeholders, covering 8 different countries altogether.

Keywords: advanced driver assistant systems, driving simulator, safety tolerance zone, traffic safety

Procedia PDF Downloads 53
391 Applying GIS Geographic Weighted Regression Analysis to Assess Local Factors Impeding Smallholder Farmers from Participating in Agribusiness Markets: A Case Study of Vihiga County, Western Kenya

Authors: Mwehe Mathenge, Ben G. J. S. Sonneveld, Jacqueline E. W. Broerse

Abstract:

Smallholder farmers are important drivers of agriculture productivity, food security, and poverty reduction in Sub-Saharan Africa. However, they are faced with myriad challenges in their efforts at participating in agribusiness markets. How the geographically explicit factors existing at the local level interact to shape smallholder farmers' decision to participate (or not) in agribusiness markets is not well understood. Deconstructing the spatial complexity of the local environment could provide a deeper insight into how geographically explicit determinants promote or impede resource-poor smallholder farmers' participation in agribusiness. This paper's objective was to identify, map, and analyze local spatial autocorrelation in factors that impede poor smallholders from participating in agribusiness markets. Data were collected using geocoded researcher-administered survey questionnaires from 392 households in Western Kenya. Three spatial statistics methods in a geographic information system (GIS) were used to analyze the data: Global Moran's I, Cluster and Outlier Analysis (Anselin Local Moran's I), and geographically weighted regression. The results of Global Moran's I reveal the presence of spatial patterns in the dataset that were not caused by spatial randomness. Subsequently, the Anselin Local Moran's I results identified statistically significant local spatial clustering (hot spots and cold spots) in the factors hindering smallholder participation. Finally, the geographically weighted regression results unearthed the specific geographically explicit factors impeding market participation in the study area. The results confirm that geographically explicit factors are indispensable in influencing smallholder farming decisions, and policymakers should take cognizance of them. Additionally, this research demonstrated how geospatially explicit analysis conducted at the local level, using geographically disaggregated data, could help in identifying households and localities where the most impoverished and resource-poor smallholder households reside. In designing spatially targeted interventions, policymakers could benefit from geospatial analysis methods in understanding complex geographic factors and processes that interact to influence smallholder farmers' decision-making processes and choices.
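As a rough illustration of the three spatial statistics mentioned above, the sketch below uses the open-source PySAL ecosystem (libpysal, esda, mgwr); the k-nearest-neighbour weights, the variable names and the function layout are assumptions for the example, not details taken from the study.

```python
import numpy as np
from libpysal.weights import KNN
from esda.moran import Moran, Moran_Local
from mgwr.sel_bw import Sel_BW
from mgwr.gwr import GWR

def spatial_screening(coords, y, X, k=8):
    """coords: (n, 2) household locations; y: participation measure; X: candidate local factors."""
    w = KNN.from_array(coords, k=k)
    w.transform = 'r'                                    # row-standardise the spatial weights
    global_moran = Moran(y, w)                           # global spatial autocorrelation
    local_moran = Moran_Local(y, w)                      # hot-spot / cold-spot detection
    bw = Sel_BW(coords, y.reshape(-1, 1), X).search()    # optimal GWR bandwidth
    gwr_results = GWR(coords, y.reshape(-1, 1), X, bw).fit()
    return global_moran.I, global_moran.p_sim, local_moran.q, gwr_results.params
```

The returned local coefficient surface (`gwr_results.params`) is what allows factor importance to vary across households, which is the core of the geographically weighted approach described in the abstract.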

Keywords: agribusiness markets, GIS, smallholder farmers, spatial statistics, disaggregated spatial data

Procedia PDF Downloads 130
390 Tailoring Quantum Oscillations of Excitonic Schrodinger’s Cats as Qubits

Authors: Amit Bhunia, Mohit Kumar Singh, Maryam Al Huwayz, Mohamed Henini, Shouvik Datta

Abstract:

We report [https://arxiv.org/abs/2107.13518] the experimental detection and control of a Schrodinger's-Cat-like, macroscopically large, quantum coherent state of a two-component Bose-Einstein condensate of spatially indirect electron-hole pairs, or excitons, using a resonant tunneling diode of III-V semiconductors. This provides access to millions of excitons as qubits to allow efficient, fault-tolerant quantum computation. In this work, we measure phase-coherent periodic oscillations in photo-generated capacitance as a function of an applied voltage bias and light intensity over a macroscopically large area. The periodic presence and absence of splitting of excitonic peaks in the optical spectra measured by photocapacitance point towards tunneling-induced variations in capacitive coupling between the quantum well and quantum dots. Observation of negative 'quantum capacitance' due to a screening of charge carriers by the quantum well indicates Coulomb correlations of interacting excitons in the plane of the sample. We also establish that coherent resonant tunneling in this well-dot heterostructure restricts the available momentum space of the charge carriers within this quantum well. Consequently, the electric polarization vector of the associated indirect excitons collectively orients along the direction of the applied bias, and these excitons undergo Bose-Einstein condensation below ~100 K. The generation of interference beats in the photocapacitance oscillation even with incoherent white light further confirms the presence of stable, long-range spatial correlation among these indirect excitons. We finally demonstrate collective Rabi oscillations of these macroscopically large, 'multipartite', two-level, coupled and uncoupled quantum states of the excitonic condensate as qubits. Therefore, our study not only brings the physics and technology of Bose-Einstein condensation within the reach of semiconductor chips but also opens up experimental investigations of the fundamentals of quantum physics using similar techniques. Operational temperatures of such two-component excitonic BECs can be raised further with a more densely packed, ordered array of QDs and/or by using materials having larger excitonic binding energies. However, fabrication of single crystals of 0D-2D heterostructures using 2D materials (e.g., transition metal dichalcogenides, oxides, perovskites, etc.) having higher excitonic binding energies is still an open challenge for semiconductor optoelectronics. As of now, these 0D-2D heterostructures can already be scaled up for mass production of miniaturized, portable quantum optoelectronic devices using the existing III-V and/or nitride-based semiconductor fabrication technologies.

Keywords: exciton, Bose-Einstein condensation, quantum computation, heterostructures, semiconductor physics, quantum fluids, Schrodinger's Cat

Procedia PDF Downloads 174
389 Robust Processing of Antenna Array Signals under Local Scattering Environments

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

An adaptive array beamformer is designed to automatically preserve the desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environment changes calls for robust adaptive beamforming techniques. The design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, the knowledge of the desired steering vector can be imprecise, which often occurs due to estimation errors in the direction of arrival (DOA) of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior results in a reduction of the array output signal-to-interference-plus-noise ratio (SINR). Therefore, it is worth developing robust techniques to deal with the problem caused by local scattering environments. As to the implementation of adaptive beamforming, the required computational complexity is enormous when the array beamformer is equipped with a massive number of antenna array sensors. To alleviate this difficulty, a GSC with partial adaptivity, offering fewer adaptive degrees of freedom and a faster adaptive response, has been proposed in the literature. Unfortunately, it has been shown that conventional GSC-based adaptive beamformers are usually very sensitive to the mismatch problems caused by local scattering situations. In this paper, we present an effective GSC-based beamformer against the mismatch problems mentioned above. The proposed GSC-based array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to carry out the required estimation for obtaining an appropriate steering vector. A matrix associated with the direction vector of the signal sources is first created. Then projection matrices related to this matrix are generated and utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal blocking matrix required for performing adaptive beamforming can be easily found. By utilizing the proposed GSC-based beamformer, we find that the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance the beamforming performance, a signal subspace projection matrix is also introduced into the proposed GSC-based beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms the existing robust techniques.
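For readers unfamiliar with the GSC decomposition referred to above, the NumPy sketch below shows a textbook GSC (quiescent branch plus blocking matrix), not the robust iterative projection scheme proposed in the paper; the half-wavelength element spacing and the sample-covariance adaptation are assumptions made for illustration.

```python
import numpy as np
from scipy.linalg import null_space

def steering_vector(theta_deg, n_sensors, d_over_lambda=0.5):
    """Uniform linear array steering vector for a plane wave from theta_deg."""
    n = np.arange(n_sensors)
    return np.exp(2j * np.pi * d_over_lambda * n * np.sin(np.deg2rad(theta_deg)))

def gsc_weights(presumed_theta, snapshots):
    """snapshots: (n_sensors, n_snapshots) complex array of received data."""
    n_sensors = snapshots.shape[0]
    a = steering_vector(presumed_theta, n_sensors)[:, None]   # presumed steering vector
    w_q = a / (a.conj().T @ a)                                # quiescent weight vector
    B = null_space(a.conj().T)                                # signal blocking matrix (N x N-1)
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance matrix
    # adaptive weights that minimise the output power of the blocked branch
    w_a = np.linalg.solve(B.conj().T @ R @ B, B.conj().T @ R @ w_q)
    return w_q - B @ w_a                                      # overall GSC weight vector
```

The robust variant described in the abstract would, in effect, replace the presumed steering vector `a` with the iteratively re-estimated one before forming the quiescent weights and the blocking matrix.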

Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch

Procedia PDF Downloads 102
388 Varieties of Capitalism and Small Business CSR: A Comparative Overview

Authors: Stéphanie Looser, Walter Wehrmeyer

Abstract:

Given the limited research on Small and Medium-sized Enterprises' (SMEs) contribution to Corporate Social Responsibility (CSR), and even scarcer research on Swiss SMEs, this paper helps to fill these gaps by enabling the identification of supranational SME parameters and by contributing to this evolving field. Thus, the paper investigates the current state of SME practices in Switzerland and across 15 other countries. Combining the degree to which SMEs demonstrate an explicit (or business case) approach or see CSR as an implicit moral activity with the assessment of their attributes for "variety of capitalism" defines the framework of this comparative analysis. According to previous studies, liberal market economies, e.g. in the United States (US) or United Kingdom (UK), are aligned with extrinsic CSR, while coordinated market systems (in Central European or Asian countries) evolve implicit CSR agendas. To outline Swiss small business CSR patterns in particular, 40 SME owner-managers were interviewed. The transcribed interviews were coded utilising MAXQDA for qualitative content analysis. A secondary data analysis of results from different countries (i.e., Australia, Austria, Chile, Cameroon, Catalonia (notably a part of Spain that seeks autonomy), China, Finland, Germany, Hong Kong (a special administrative region of China), Italy, Netherlands, Singapore, Spain, Taiwan, UK, US) lays the groundwork for this comparative study on small business CSR. Applying the same coding categories (in MAXQDA) to the interview analysis as well as to the secondary data research, while following grounded theory rules to refine and keep track of ideas, retrospectively generated testable hypotheses and comparative power on implicit (and the lower likelihood of explicit) CSR in SMEs. The paper identifies Swiss small business CSR as deep, profound, a matter of "soul", and an implicit part of day-to-day business. Similar to most Central European, Mediterranean, Nordic, and Asian countries, explicit CSR is still very rare in Swiss SMEs. Astonishingly, UK and US SMEs also follow this pattern in spite of their strong and distinct liberal market economies. Though other findings show that nationality matters, this research concludes that SME culture and its informal CSR agenda are strongly formative, superseding even the forces of market economies, national cultural patterns, and language. In a world of "big business", explicit "business case" CSR, and the mantra that "CSR must pay", this study points to a distinctly implicit small business CSR model built on trust, physical closeness, and virtues that is largely detached from the bottom line. This pattern holds across different cultural contexts, and it is concluded that SME culture is stronger than nationality, leading to a supra-national, monolithic SME CSR approach. Hence, classifications of countries by their market system or capitalism, as found in the comparative capitalism literature, do not match the CSR practices in SMEs, as they do not mirror the peculiarities of their business. This raises questions on the universality and generalisability of management concepts.

Keywords: CSR, comparative study, cultures of capitalism, small, medium-sized enterprises

Procedia PDF Downloads 418
387 Integrating System-Level Infrastructure Resilience and Sustainability Based on Fractal: Perspectives and Review

Authors: Qiyao Han, Xianhai Meng

Abstract:

Urban infrastructures refer to the fundamental facilities and systems that serve cities. Due to global climate change and human activities in recent years, many urban areas around the world are facing enormous challenges from natural and man-made disasters, such as floods, earthquakes and terrorist attacks. For this reason, urban resilience to disasters has attracted increasing attention from researchers and practitioners. Given the complexity of infrastructure systems and the uncertainty of disasters, this paper suggests that studies of resilience could focus on urban functional sustainability (in social, economic and environmental dimensions) supported by infrastructure systems under disturbance. It is supposed that urban infrastructure systems with high resilience should be able to reconfigure themselves without significant declines in critical functions (services), such as primary productivity, hydrological cycles, social relations and economic prosperity. Although some methods have been developed to integrate the resilience and sustainability of individual infrastructure components, more work is needed to enable system-level integration. This research presents a conceptual analysis framework for integrating resilience and sustainability based on fractal theory. It is believed that the ability of an ecological system to maintain structure and function in the face of disturbance and to reorganize following disturbance-driven change is largely dependent on its self-similar and hierarchical fractal structure, in which cross-scale resilience is produced by the replication of ecosystem processes dominating at different levels. Urban infrastructure systems are analogous to ecological systems because they are complex and adaptive, comprise interconnected components, and exhibit characteristic scaling properties. Therefore, analyzing the resilience of ecological systems provides a better understanding of the dynamics and interactions of infrastructure systems. This paper discusses fractal characteristics of ecosystem resilience, reviews literature related to system-level infrastructure resilience, identifies resilience criteria associated with sustainability dimensions, and develops a conceptual analysis framework. Exploration of the relevance of the identified criteria to fractal characteristics reveals that there is great potential to analyze infrastructure systems based on fractals. In the conceptual analysis framework, it is proposed that in order to be resilient, an urban infrastructure system needs to be capable of "maintaining" and "reorganizing" multi-scale critical functions under disasters. Finally, the paper identifies areas where further research efforts are needed.

Keywords: fractal, urban infrastructure, sustainability, system-level resilience

Procedia PDF Downloads 259
386 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms

Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga

Abstract:

Today, websites contain very interesting applications, but there are only a few methodologies to analyze user navigation through a website and to determine whether the website is being put to correct use. Web logs are usually consulted only when a major attack or malfunction occurs, yet they contain a lot of interesting information about users' interactions with the system. Analyzing web logs has become a challenge due to the huge log volume, and finding interesting patterns is not easy because of the size, distribution and importance of minor details in each log. Web logs therefore hold very important data about users and the site that is often not put to good use. Retrieving interesting information from logs gives an idea of what users need, allows users to be grouped according to their various needs, and helps improve the site to make it more effective and efficient. The model we built is able to detect attacks or malfunctioning of the system and to perform anomaly detection. Logs become more complex as the volume of traffic and the size and complexity of the website grow. Unsupervised techniques are used in this fully automated solution; expert knowledge is only used in validation. In our approach, we first clean and purify the logs to bring them to a common platform with a standard format and structure. After the cleaning module, the web session builder is executed. It outputs two files: a Web Sessions file and an Indexed URLs file. The Indexed URLs file contains the list of URLs accessed and their indices, and the Web Sessions file lists the indices of each web session. Then the DBSCAN and EM algorithms are applied iteratively and recursively to obtain the best clustering of the web sessions. Using homogeneity, completeness, V-measure, intra- and inter-cluster distance and the silhouette coefficient as parameters, these algorithms self-evaluate in order to feed better parameter values into subsequent runs. If a cluster is found to be too large, micro-clustering is used. Using the Cluster Signature Module, the clusters are annotated with a unique signature called a fingerprint. In this module, each cluster is fed to the Associative Rule Learning Module; if it outputs confidence and support values of 1 for an access sequence, that sequence is a potential signature for the cluster. The occurrences of the access sequence are then checked in the other clusters, and if it is found to be unique to the cluster considered, the cluster is annotated with the signature. These signatures are used in anomaly detection, preventing cyber attacks, real-time dashboards that visualize users accessing web pages, predicting user actions, and various other applications in finance, university websites, news and media websites, etc.
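A minimal scikit-learn sketch of the DBSCAN/EM clustering of web sessions is given below; the bag-of-URL-indices feature encoding, the parameter values, and the use of the silhouette coefficient alone as the self-evaluation score are simplifying assumptions, not the paper's exact pipeline.

```python
from sklearn.cluster import DBSCAN
from sklearn.mixture import GaussianMixture
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import silhouette_score

def cluster_sessions(sessions, eps=0.8, min_samples=5, n_components=6):
    """sessions: list of space-separated URL-index strings, e.g. '3 17 3 42' per web session."""
    X = CountVectorizer(token_pattern=r"\d+").fit_transform(sessions).toarray()
    db_labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    em_labels = GaussianMixture(n_components=n_components, random_state=0).fit_predict(X)
    # internal quality score used to tune eps / n_components on the next iteration
    scores = {name: silhouette_score(X, labels)
              for name, labels in [("dbscan", db_labels), ("em", em_labels)]
              if len(set(labels)) > 1}
    return db_labels, em_labels, scores
```

In the described system, scores such as homogeneity, completeness and V-measure would additionally be fed back to re-run the algorithms with better parameters, and oversized clusters would be split by micro-clustering before signature extraction.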

Keywords: anomaly detection, clustering, pattern recognition, web sessions

Procedia PDF Downloads 277
385 Targeting and Developing the Remaining Pay in an Ageing Field: The Ovhor Field Experience

Authors: Christian Ihwiwhu, Nnamdi Obioha, Udeme John, Edward Bobade, Oghenerunor Bekibele, Adedeji Awujoola, Ibi-Ada Itotoi

Abstract:

Understanding the complexity in the distribution of hydrocarbon in a simple structure with flow baffles and connectivity issues is critical in targeting and developing the remaining pay in a mature asset. Subtle facies changes (heterogeneity) can have a drastic impact on reservoir fluids movement, and this can be crucial to identifying sweet spots in mature fields. This study aims to evaluate selected reservoirs in Ovhor Field, Niger Delta, Nigeria, with the objective of optimising production from the field by targeting undeveloped oil reserves, bypassed pay, and gaining an improved understanding of the selected reservoirs to increase the company’s reservoir limits. The task at the Ovhor field is complicated by poor stratigraphic seismic resolution over the field. 3-D geological (sedimentology and stratigraphy) interpretation, use of results from quantitative interpretation, and proper understanding of production data have been used in recognizing flow baffles and undeveloped compartments in the field. The full field 3-D model has been constructed in such a way as to capture heterogeneities and the various compartments in the field to aid the proper simulation of fluid flow in the field for future production prediction, proper history matching and design of good trajectories to adequately target undeveloped oil in the field. Reservoir property models (porosity, permeability, and net-to-gross) have been constructed by biasing log interpreted properties to a defined environment of deposition model whose interpretation captures the heterogeneities expected in the studied reservoirs. At least, two scenarios have been modelled for most of the studied reservoirs to capture the range of uncertainties we are dealing with. The total original oil in-place volume for the four reservoirs studied is 157 MMstb. The cumulative oil and gas production from the selected reservoirs are 67.64 MMstb and 9.76 Bscf respectively, with current production rate of about 7035 bopd and 4.38 MMscf/d (as at 31/08/2019). Dynamic simulation and production forecast on the 4 reservoirs gave an undeveloped reserve of about 3.82 MMstb from two (2) identified oil restoration activities. These activities include side-tracking and re-perforation of existing wells. This integrated approach led to the identification of bypassed oil in some areas of the selected reservoirs and an improved understanding of the studied reservoirs. New wells have/are being drilled now to test the results of our studies, and the results are very confirmatory and satisfying.

Keywords: facies, flow baffle, bypassed pay, heterogeneities, history matching, reservoir limit

Procedia PDF Downloads 117
384 Understanding the Challenges of Lawbook Translation via the Framework of Functional Theory of Language

Authors: Tengku Sepora Tengku Mahadi

Abstract:

Where the speed of book writing lags behind the high need for such material for tertiary studies, translation offers a way to enhance the equilibrium in this demand-supply equation. Nevertheless, translation is confronted by obstacles that threaten its effectiveness. The primary challenge to the production of efficient translations may well be related to the text-type and in terms of its complexity. A text that is intricately written with unique rhetorical devices, subject-matter foundation and cultural references will undoubtedly challenge the translator. Longer time and greater effort would be the consequence. To understand these text-related challenges, the present paper set out to analyze a lawbook entitled Learning the Law by David Melinkoff. The book is chosen because it has often been used as a textbook or for reference in many law courses in the United Kingdom and has seen over thirteen editions; therefore, it can be said to be a worthy book for studies in law. Another reason is the existence of a ready translation in Malay. Reference to this translation enables confirmation to some extent of the potential problems that might occur in its translation. Understanding the organization and the language of the book will help translators to prepare themselves better for the task. They can anticipate the research and time that may be needed to produce an effective translation. Another premise here is that this text-type implies certain ways of writing and organization. Accordingly, it seems practicable to adopt the functional theory of language as suggested by Michael Halliday as its theoretical framework. Concepts of the context of culture, the context of situation and measures of the field, tenor and mode form the instruments for analysis. Additional examples from similar materials can also be used to validate the findings. Some interesting findings include the presence of several other text-types or sub-text-types in the book and the dependence on literary discourse and devices to capture the meanings better or add color to the dry field of law. In addition, many elements of culture can be seen, for example, the use of familiar alternatives, allusions, and even terminology and references that date back to various periods of time and languages. Also found are parts which discuss origins of words and terms that may be relevant to readers within the United Kingdom but make little sense to readers of the book in other languages. In conclusion, the textual analysis in terms of its functions and the linguistic and textual devices used to achieve them can then be applied as a guide to determine the effectiveness of the translation that is produced.

Keywords: functional theory of language, lawbook text-type, rhetorical devices, culture

Procedia PDF Downloads 133
383 Feasibility of an Extreme Wind Risk Assessment Software for Industrial Applications

Authors: Francesco Pandolfi, Georgios Baltzopoulos, Iunio Iervolino

Abstract:

The impact of extreme winds on industrial assets and the built environment is gaining increasing attention from stakeholders, including the corporate insurance industry. This has led to a progressively more in-depth study of building vulnerability and fragility to wind. Wind vulnerability models are used in probabilistic risk assessment to relate a loss metric to an intensity measure of the natural event, usually a gust or a mean wind speed. In fact, vulnerability models can be integrated with the wind hazard, which consists of associating a probability to each intensity level in a time interval (e.g., by means of return periods), to provide an assessment of future losses due to extreme wind. This has also given impulse to world- and regional-scale wind hazard studies. Another approach often adopted for the probabilistic description of building vulnerability to wind is the use of fragility functions, which provide the conditional probability that selected building components will exceed certain damage states, given wind intensity. In fact, in the wind engineering literature, it is more common to find structural system- or component-level fragility functions than wind vulnerability models for an entire building. Loss assessment based on component fragilities requires some logical combination rules that define the building's damage state given the damage state of each component, and the availability of a consequence model that provides the losses associated with each damage state. When risk calculations are based on numerical simulation of a structure's behavior during extreme wind scenarios, the interaction of component fragilities is intertwined with the computational procedure. However, simulation-based approaches are usually computationally demanding and case-specific. In this context, the present work introduces the ExtReMe wind risk assESsment prototype Software, ERMESS, which is being developed at the University of Naples Federico II. ERMESS is a wind risk assessment tool for insurance applications to industrial facilities, collecting a wide assortment of available wind vulnerability models and fragility functions to facilitate their incorporation into risk calculations based on in-built or user-defined wind hazard data. This software implements an alternative method for building-specific risk assessment based on existing component-level fragility functions and on a number of simplifying assumptions for their interactions. The applicability of this alternative procedure is explored by means of an illustrative proof-of-concept example, which considers four main building components, namely the roof covering, roof structure, envelope wall and envelope openings. The application shows that, despite the simplifying assumptions, the procedure can yield risk evaluations that are comparable to those obtained via more rigorous building-level simulation-based methods, at least in the considered example. The advantage of this approach is shown to lie in the fact that a database of building component fragility curves can be put to use for the development of new wind vulnerability models to cover building typologies not yet adequately covered by existing works and whose rigorous development is usually beyond the budget of portfolio-related industrial applications.
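To make the component-to-building combination idea concrete, the sketch below combines four hypothetical lognormal component fragilities under a simple series-type independence assumption; the medians, dispersions and the combination rule are illustrative stand-ins for the kind of simplifying assumptions mentioned in the abstract, not ERMESS's actual models.

```python
import numpy as np
from scipy.stats import lognorm

# hypothetical fragility parameters per component: (median gust speed in m/s, log-standard deviation)
COMPONENTS = {
    "roof_covering":    (38.0, 0.25),
    "roof_structure":   (52.0, 0.20),
    "envelope_wall":    (60.0, 0.22),
    "envelope_opening": (45.0, 0.30),
}

def p_exceed(v, median, beta):
    """Lognormal fragility: P(component damage state exceeded | gust speed v)."""
    return lognorm(s=beta, scale=median).cdf(v)

def building_damage_probability(v):
    """Series-type assumption: the building reaches the damage state if at least one
    component does, treating component failures as independent."""
    p_none = np.prod([1.0 - p_exceed(v, m, b) for m, b in COMPONENTS.values()])
    return 1.0 - p_none

for gust in (30, 45, 60):
    print(gust, round(building_damage_probability(gust), 3))
```

Coupling such a combined exceedance curve with a consequence model and a wind hazard curve (annual exceedance probabilities per gust speed) yields the expected loss figures that a tool of this kind reports.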

Keywords: component wind fragility, probabilistic risk assessment, vulnerability model, wind-induced losses

Procedia PDF Downloads 174
382 Barrier Analysis of Sustainable Development of Small Towns: A Perspective of Southwest China

Authors: Yitian Ren, Liyin Shen, Tao Zhou, Xiao Li

Abstract:

The past urbanization process in China has brought about a series of problems, and the Chinese government has therefore positioned small towns in essential roles for implementing the strategy of 'The National New-type Urbanization Plan (2014-2020)'. As the connectors and transfer stations between cities and the countryside, small towns are an important force for narrowing the gap between urban and rural areas and for achieving the mission of new-type urbanization in China. The sustainable development of small towns plays a crucial role because cities are not capable enough to absorb the surplus rural population. Nevertheless, there are various types of barriers hindering the sustainable development of small towns, which have limited their development and present a bottleneck in the Chinese urbanization process. Therefore, this paper seeks a deep understanding of these barriers so that effective actions can be taken to address them. The paper takes the perspective of Southwest China (referring to Sichuan province, Yunnan province, Guizhou province, Chongqing Municipality and the Tibet Autonomous Region), because the urbanization rate in Southwest China is far behind the national average, the number of small towns there accounts for a great proportion of those in mainland China, and the characteristics of small towns in Southwest China are distinct. This paper investigates the barriers to the sustainable development of small towns located in Southwest China by using the content analysis method, combined with fieldwork and interviews in sample small towns, and then identifies and groups 18 barriers into four dimensions, namely institutional barriers, economic barriers, social barriers and ecological barriers. Based on the research above, a questionnaire survey and data analysis were implemented, and the key barriers hindering the sustainable development of small towns in Southwest China were identified using fuzzy set theory. These barriers are the lack of independent financial power, the lack of a construction land index, limited financing channels, a single industrial structure, and topographic variety and complexity, which mainly belong to the institutional and economic dimensions. In the concluding part, policy suggestions are put forward to improve the political and institutional environment of small-town development, and market mechanisms are recommended to be introduced into the development process of small towns, which can effectively overcome the economic barriers, promote the sustainable development of small towns, accelerate in-situ urbanization by absorbing peasants from nearby villages, and achieve the mission of people-oriented new-type urbanization in China.
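One simple way such a fuzzy-set screening of barriers might look is sketched below; the mean scores, the linear membership function, the 0.7 cut-off and the placeholder non-key barrier are purely illustrative assumptions and do not reproduce the authors' survey data or exact procedure.

```python
# hypothetical mean Likert scores per barrier (1 = not significant, 5 = very significant)
barriers = {
    "lack of independent financial power": 4.4,
    "lack of construction land index": 4.2,
    "financial channels limitation": 4.1,
    "single industrial structure": 4.0,
    "topography variety and complexity": 3.9,
    "hypothetical non-key barrier": 3.0,   # placeholder to show a barrier falling below the cut
}

def membership(score, low=1.0, high=5.0):
    """Linear membership degree in the fuzzy set 'critical barrier'."""
    return (score - low) / (high - low)

LAMBDA_CUT = 0.7   # assumed lambda-cut for selecting key barriers
key_barriers = {name: round(membership(s), 2)
                for name, s in barriers.items() if membership(s) >= LAMBDA_CUT}
print(key_barriers)
```

Barriers whose membership degree passes the lambda-cut would be retained as the key barriers; the actual study would derive membership degrees from the full response distributions rather than mean scores.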

Keywords: barrier analysis, sustainable development, small town, Southwest China

Procedia PDF Downloads 333
381 Social Network Roles in Organizations: Influencers, Bridges, and Soloists

Authors: Sofia Dokuka, Liz Lockhart, Alex Furman

Abstract:

Organizational hierarchy, traditionally composed of individual contributors, middle management, and executives, is enhanced by the understanding of informal social roles. These roles, identified with organizational network analysis (ONA), might have an important effect on organizational functioning. In this paper, we identify three social roles – influencers, bridges, and soloists, and provide empirical analysis based on real-world organizational networks. Influencers are employees with broad networks and whose contacts also have rich networks. Influence is calculated using PageRank, initially proposed for measuring website importance, but now applied in various network settings, including social networks. Influencers, having high PageRank, become key players in shaping opinions and behaviors within an organization. Bridges serve as links between loosely connected groups within the organization. Bridges are identified using betweenness and Burt’s constraint. Betweenness quantifies a node's control over information flows by evaluating its role in the control over the shortest paths within the network. Burt's constraint measures the extent of interconnection among an individual's contacts. A high constraint value suggests fewer structural holes and lesser control over information flows, whereas a low value suggests the contrary. Soloists are individuals with fewer than 5 stable social contacts, potentially facing challenges due to reduced social interaction and hypothetical lack of feedback and communication. We considered social roles in the analysis of real-world organizations (N=1,060). Based on data from digital traces (Slack, corporate email and calendar) we reconstructed an organizational communication network and identified influencers, bridges and soloists. We also collected employee engagement data through an online survey. Among the top-5% of influencers, 10% are members of the Executive Team. 56% of the Executive Team members are part of the top influencers group. The same proportion of top influencers (10%) is individual contributors, accounting for just 0.6% of all individual contributors in the company. The majority of influencers (80%) are at the middle management level. Out of all middle managers, 19% hold the role of influencers. However, individual contributors represent a small proportion of influencers, and having information about these individuals who hold influential roles can be crucial for management in identifying high-potential talents. Among the bridges, 4% are members of the Executive Team, 16% are individual contributors, and 80% are middle management. Predominantly middle management acts as a bridge. Bridge positions of some members of the executive team might indicate potential micromanagement on the leader's part. Recognizing the individuals serving as bridges in an organization uncovers potential communication problems. The majority of soloists are individual contributors (96%), and 4% of soloists are from middle management. These managers might face communication difficulties. We found an association between being an influencer and attitude toward a company's direction. There is a statistically significant 20% higher perception that the company is headed in the right direction among influencers compared to non-influencers (p < 0.05, Mann-Whitney test). 
Taken together, we demonstrate that considering social roles in the company might indicate both positive and negative aspects of organizational functioning that should be considered in data-driven decision-making.
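The three roles described above can be operationalised with standard graph metrics; the NetworkX sketch below is one plausible reading of the abstract, with the top-5% share and the fewer-than-5-contacts rule taken from the text, while the 0.5 constraint threshold and the use of raw degree as a proxy for "stable contacts" are assumptions for illustration.

```python
import networkx as nx

def identify_roles(G: nx.Graph, top_share=0.05, soloist_threshold=5):
    """Label influencers, bridges and soloists in an organisational communication graph."""
    pagerank = nx.pagerank(G)                      # influence: own reach weighted by contacts' reach
    betweenness = nx.betweenness_centrality(G)     # control over shortest paths
    constraint = nx.constraint(G)                  # Burt's constraint; low value = spans structural holes
    k = max(1, int(top_share * G.number_of_nodes()))
    influencers = sorted(pagerank, key=pagerank.get, reverse=True)[:k]
    low_constraint = {n for n, c in constraint.items() if c == c and c < 0.5}  # drop NaN entries
    bridges = [n for n in sorted(betweenness, key=betweenness.get, reverse=True)
               if n in low_constraint][:k]
    soloists = [n for n, deg in G.degree() if deg < soloist_threshold]
    return influencers, bridges, soloists
```

In practice, the communication graph would be reconstructed from digital traces (e.g. messaging, email and calendar co-occurrence), and the role lists would then be cross-tabulated with hierarchy level and engagement survey responses as the abstract describes.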

Keywords: organizational network analysis, social roles, influencer, bridge, soloist

Procedia PDF Downloads 89
380 Captive Insurance in Hong Kong and Singapore: A Promising Risk Management Solution for Asian Companies

Authors: Jin Sheng

Abstract:

This paper addresses a promising area of insurance sector to develop in Asia. Captive insurance, which provides risk-mitigation services for its parent company, has great potentials to develop in energy, infrastructure, agriculture, logistics, catastrophe, and alternative risk transfer (ART), and will greatly affect the framework of insurance industry. However, the Asian captive insurance market only takes a small proportion in the global market. The recent supply chain interruption case of Hanjin Shipping indicates the significance of risk management for an Asian company’s sustainability and resilience. China has substantial needs and great potentials to develop captive insurance, on account of the currency volatility, enterprises’ credit risks, and legal and operational risks of the Belt and Road initiative. Up to date, Mainland Chinese enterprises only have four offshore captives incorporated by CNOOC, Sinopec, Lenovo and CGN Power), three onshore captive insurance companies incorporated by CNPC, China Railway, and COSCO, as well as one industrial captive insurance organization - China Ship-owners Mutual Assurance Association. Its captive market grows slowly with one or two captive insurers licensed yearly after September 2011. As an international financial center, Hong Kong has comparative advantages in taxation, professionals, market access and well-established financial infrastructure to develop a functional captive insurance market. For example, Hong Kong’s income tax for an insurance company is 16.5%; while China's income tax for an insurance company is 25% plus business tax of 5%. Furthermore, restrictions on market entry and operations of China’s onshore captives make establishing offshore captives in international or regional captive insurance centers such as Singapore, Hong Kong, and other overseas jurisdictions to become attractive options. Thus, there are abundant business opportunities in this area. Using methodology of comparative studies and case analysis, this paper discusses the incorporation, regulatory issues, taxation and prospect of captive insurance market in Hong Kong, China and Singapore. Hong Kong and Singapore are both international financial centers with prominent advantages in tax concessions, technology, implementation, professional services, and well-functioning legal system. Singapore, as the domicile of 71 active captives, has been the largest captive insurance hub in Asia, as well as an established reinsurance hub. Hong Kong is an emerging captive insurance hub with 5 to 10 newly licensed captives each year, according to the Hong Kong Financial Services Development Council. It is predicted that Hong Kong will become a domicile for 50 captive insurers by 2025. This paper also compares the formation of a captive in Singapore with other jurisdictions such as Bermuda and Vermont.

Keywords: Alternative Risk Transfer (ART), captive insurance company, offshore captives, risk management, reinsurance, self-insurance fund

Procedia PDF Downloads 217
379 Evaluation of a Driver Training Intervention for People on the Autism Spectrum: A Multi-Site Randomized Control Trial

Authors: P. Vindin, R. Cordier, N. J. Wilson, H. Lee

Abstract:

Engagement in community-based activities such as education, employment, and social relationships can improve the quality of life for individuals with Autism Spectrum Disorder (ASD). Community mobility is vital to attaining independence for individuals with ASD. Learning to drive and gaining a driver's license is a critical link to community mobility; however, for individuals with ASD, acquiring safe driving skills can be a challenging process. Issues related to anxiety, executive function, and social communication may affect driving behaviours. Driving training and education aimed at addressing barriers faced by learner drivers with ASD can help them improve their driving performance. A multi-site randomized controlled trial (RCT) was conducted to evaluate the effectiveness of an autism-specific driving training intervention for improving the on-road driving performance of learner drivers with ASD. The intervention was delivered via a training manual and an interactive website consisting of five modules covering varying driving environments, starting with a focus on off-road preparations and progressing from basic to complex driving skill mastery. Seventy-two learner drivers with ASD aged 16 to 35 were randomized, using a blinded group allocation procedure, into either the intervention or the control group. The intervention group received 10 driving lessons with instructors trained in the use of an autism-specific driving training protocol, whereas the control group received 10 driving lessons as usual. Learner drivers completed a pre- and post-observation drive using a standardized driving route to measure driving performance using the Driving Performance Checklist (DPC). They also completed anxiety, executive function, and social responsiveness measures. The findings showed that there were significant improvements in driving performance for both the intervention (d = 1.02) and the control group (d = 1.15). However, the differences were not significant between groups (p = 0.614) or study sites (p = 0.842). None of the potential moderator variables (anxiety, cognition, social responsiveness, and driving instructor experience) influenced driving performance. This study is an important step toward improving community mobility for individuals with ASD, showing that an autism-specific driving training intervention can improve the driving performance of learner drivers with ASD. It also highlighted the complexity of conducting a multi-site design even when sites were matched according to geography and traffic conditions. Driving instructors also need more and clearer information on how to communicate with learner drivers with restricted verbal expression.
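The within-group effect sizes (d) and the between-group comparison reported above can, in principle, be reproduced with a few lines of SciPy; the score vectors below are hypothetical placeholders rather than the trial's data, and the pooled-standard-deviation form of Cohen's d is an assumption about how the effect sizes were computed.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def cohens_d(pre, post):
    """Within-group effect size for pre/post driving-performance scores."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    pooled_sd = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2)
    return (post.mean() - pre.mean()) / pooled_sd

# hypothetical Driving Performance Checklist score changes (not the trial's raw data)
intervention_change = [8, 12, 10, 15, 9, 11, 13]
control_change = [7, 11, 12, 9, 14, 10, 8]
u_stat, p_between = mannwhitneyu(intervention_change, control_change, alternative="two-sided")
print(f"between-group p = {p_between:.3f}")
```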

Keywords: autism spectrum disorder, community mobility, driving training, transportation

Procedia PDF Downloads 119
378 Microalgae Technology for Nutraceuticals

Authors: Weixing Tan

Abstract:

Production of nutraceuticals from microalgae—a virtually untapped natural phyto-based source of which there are 200,000 to 1,000,000 species—offers a sustainable and healthy alternative to conventionally sourced nutraceuticals for the market. Microalgae can be grown organically using only natural sunlight, water and nutrients at an extremely fast rate, e.g. 10-100 times more efficiently than crops or trees. However, the commercial success of microalgae products at scale remains limited largely due to the lack of economically viable technologies. There are two major microalgae production systems or technologies currently available: 1) the open system as represented by open pond technology and 2) the closed system such as photobioreactors (PBR). Each carries its own unique features and challenges. Although an open system requires a lower initial capital investment relative to a PBR, it conveys many unavoidable drawbacks; for example, much lower productivity, difficulty in contamination control/cleaning, inconsistent product quality, inconvenience in automation, restriction in location selection, and unsuitability for cold areas – all directly linked to the system openness and flat underground design. On the other hand, a PBR system has characteristics almost entirely opposite to the open system, such as higher initial capital investment, better productivity, better contamination and environmental control, wider suitability in different climates, ease in automation, higher and consistent product quality, higher energy demand (particularly if using artificial lights), and variable operational expenses if not automated. Although closed systems like PBRs are not highly competitive yet in current nutraceutical supply market, technological advances can be made, in particular for the PBR technology, to narrow the gap significantly. One example is a readily scalable P2P Microalgae PBR Technology at Grande Prairie Regional College, Canada, developed over 11 years considering return on investment (ROI) for key production processes. The P2P PBR system is approaching economic viability at a pre-commercial stage due to five ROI-integrated major components. They include: (1) optimum use of free sunlight through attenuation (patented); (2) simple, economical, and chemical-free harvesting (patent ready to file); (3) optimum pH- and nutrient-balanced culture medium (published), (4) reliable water and nutrient recycling system (trade secret); and (5) low-cost automated system design (trade secret). These innovations have allowed P2P Microalgae Technology to increase daily yield to 106 g/m2/day of Chlorella vulgaris, which contains 50% proteins and 2-3% omega-3. Based on the current market prices and scale-up factors, this P2P PBR system presents as a promising microalgae technology for market competitive nutraceutical supply.

Keywords: microalgae technology, nutraceuticals, open pond, photobioreactor PBR, return on investment ROI, technological advances

Procedia PDF Downloads 145
377 Patient Agitation and Violence in Medical-Surgical Settings at BronxCare Hospital, Before and During COVID-19 Pandemic; A Retrospective Chart Review

Authors: Soroush Pakniyat-Jahromi, Jessica Bucciarelli, Souparno Mitra, Neda Motamedi, Ralph Amazan, Samuel Rothman, Jose Tiburcio, Douglas Reich, Vicente Liz

Abstract:

Violence is defined as an act of physical force that is intended to cause harm and may lead to physical and/or psychological damage. Violence toward healthcare workers (HCWs) is more common in psychiatric settings, emergency departments, and nursing homes; however, healthcare workers in medical settings are not spared from such events. Workplace violence places a huge burden on the healthcare industry and has a major impact on the physical and mental wellbeing of staff. The purpose of this study is to compare the prevalence of patient agitation and violence in medical-surgical settings at BronxCare Hospital (BCH), Bronx, New York, in the year before and during the COVID-19 pandemic. Data collection occurred between June 2021 and August 2021, while the sampling period ran from 2019 to 2021. The data were separated into two time categories: pre-COVID-19 (03/2019-03/2020) and COVID-19 (03/2020-03/2021). We created frequency tables for 19 variables and used a chi-square test to determine each variable's statistical significance. We tested all variables against "restraint type", determining if a patient was violent or became violent enough to require restraint. The restraint types were "chemical", "physical", or both. This analysis was also used to determine if there was a statistical difference between the pre-COVID-19 and COVID-19 timeframes. Our data show that there was an increase in incidents of violence in the COVID-19 era (03/2020-03/2021), with a total of 194 (62.8%) reported events, compared to the pre-COVID-19 era (03/2019-03/2020) with 115 (37.2%) events (p: 0.01). Our final analysis, completed using a chi-square test, determined the difference in patient violence between the pre-COVID-19 and COVID-19 eras. We then tested the violence marker against restraint type; the result was statistically significant (p: 0.01). This is the first paper to systematically review the prevalence of violence in medical-surgical units in a hospital in New York, pre-COVID-19 and during the COVID-19 era. Our data are in line with the global trend of increased prevalence of patient agitation and violence in medical settings during the COVID-19 pandemic. Violence and its management are a challenge in healthcare settings, and the COVID-19 pandemic has brought to bear a complexity of circumstances which may have increased its incidence. It is important to identify and teach healthcare workers the best preventive approaches for dealing with patient agitation, to decrease the number of restraints in medical settings, and to create a less restrictive environment in which to deliver care.
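A chi-square test of the era-by-restraint-type contingency table of the kind described above can be run with SciPy as sketched below; the cell counts are hypothetical placeholders chosen only so that the rows total 115 pre-pandemic and 194 pandemic-era events, not the study's raw data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# rows: era (pre-COVID-19, COVID-19); columns: restraint type (chemical, physical, both)
# counts are illustrative placeholders, not the study's data
table = np.array([[52, 48, 15],
                  [80, 85, 29]])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```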

Keywords: COVID-19 pandemic, patient agitation, restraints, violence

Procedia PDF Downloads 133
376 A Method to Identify the Critical Delay Factors for Building Maintenance Projects of Institutional Buildings: Case Study of Eastern India

Authors: Shankha Pratim Bhattacharya

Abstract:

In general, building repair and renovation projects are minor in nature and require less attention, as the cost involved is relatively small. Although building repair and maintenance projects look simple, they involve considerable complexity during execution. Much of the existing research indicates that a few uncertain situations are usually linked with maintenance projects; these may not be read properly in the planning stage and ultimately lead to time overruns. Building repair and maintenance become essential and periodic after commissioning of the building. In institutional buildings, regular maintenance projects also include addition, alteration, and modification activities. Increases in student admission, new departments and sections, new laboratories and workshops, and upgradation of existing laboratories are very common in institutional buildings in developing nations like India. Such projects become very critical because they involve space problems, architectural design issues, structural modification, etc. One of the prime factors in institutional building maintenance and modification projects is the time constraint: mostly, they are required to be executed within a specific non-working period. The present research considers only institutional buildings in the eastern part of India to analyse repair and maintenance project delays. A general survey was conducted among technical institutes to find the causes and corresponding nature of construction delay factors. Five technical institutes are considered in the present study, covering repair, renovation, modification, and extension types of projects. Construction delay factors are categorically subdivided into four groups, namely material, manpower (works), contract, and site. The survey data are collected on the nature of delay responsible for a specific project and the absolute amount of delay, through the proposed and actual duration of work. In the first stage of the paper, a relative importance index (RII) is proposed for the delay factors, and the occurrence of the delay factors is also judged by their frequency-severity nature. The delay factors are then rated and linked with the type of work. In the second stage, a regression analysis is executed to establish an empirical relationship between the actual time of a project and the percentage of delay; it also indicates the impact of the factors responsible for delay. Ultimately, the present paper makes an effort to identify the critical delay factors for repair and renovation projects in Eastern Indian institutional buildings.
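A minimal sketch of the two analysis stages described above, assuming the standard relative importance index formula RII = ΣW / (A × N) and an ordinary least-squares fit; the ratings and delay figures are invented placeholders, not the survey data:

```python
# Sketch of the two analysis stages: RII ranking and a simple regression.
# All numbers are invented placeholders, not survey results.
import numpy as np

def relative_importance_index(responses, max_score=5):
    """RII = sum(W) / (A * N): W are the Likert scores given by respondents,
    A is the highest possible score, N the number of respondents (0 < RII <= 1)."""
    responses = np.asarray(responses, dtype=float)
    return responses.sum() / (max_score * len(responses))

# Hypothetical Likert ratings (1-5) for two delay factors
print(relative_importance_index([5, 4, 4, 5, 3]))  # e.g. a material-group factor
print(relative_importance_index([2, 3, 2, 3, 2]))  # e.g. a site-group factor

# Stage 2: empirical relation between actual project duration and % delay
actual_duration = np.array([30, 45, 60, 90, 120])   # days (placeholder)
percent_delay = np.array([10, 18, 22, 35, 40])      # % delay (placeholder)
slope, intercept = np.polyfit(actual_duration, percent_delay, 1)
print(f"percent_delay ~ {slope:.2f} * duration + {intercept:.2f}")
```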

Keywords: delay factor, institutional building, maintenance, relative importance index, regression analysis, repair

Procedia PDF Downloads 241
375 The Position of Islamic Jurisprudence in UAE Private Law: Analytical Study

Authors: Iyad Jadalhaq, Mohammed El Hadi El Maknouzi

Abstract:

The place of Islamic law in the legal system of the UAE is best understood by introducing a differentiation between its role as a formal source of law and its influence as a material source of law. What this differentiation helps clarify is that the corpus of Islamic law constitutes a much deeper influence on adjudication, law-making and the legal profession in the UAE than it might appear at first sight, by considering its formal position in the division of labor between courts, or legislative lists of sources of law. This paper aims to examine the role of Shariah in the UAE private law system by determining the comprehensiveness of Shariah in the legal system as a whole, and not in a way limited to its role as a source of law under Article 1 of the Civil Transactions Law. Turning to the role of Shariah as a formal source of law, it is useful to start from Article 1 of the UAE Civil Code. This provision lays out the formal hierarchy of sources of UAE private law, these being legislation, Islamic law, and custom. Hence, when deciding a civil dispute, a judge should first refer to positive legislation in force in the UAE. Lacking a rule to cover the case before him/her, the judge ought then to refer directly to Islamic law. If the matter lacks regulation in Islamic law, only then may the judge appeal to custom. Accordingly, in connection with civil transactions, Shariah is presented here, formally, as the second source of law. Still, Shariah addresses many other issues beyond civil transactions, including matters of morals, worship, and belief. However, in Article 1 of the UAE Civil Code, the reference to Islamic law ought to be understood as limited to the rules it lays out for civil transactions. There are four main sets of courts in the judicial systems of the UAE, whose competence is based on whether a dispute touches upon civil and commercial transactions, criminal offenses, personal status, or labor relations. This sectorial and multi-tiered organization of courts as a whole constitutes an institutional development compatible with the long-standing affirmation in the Shariah of the legitimacy of the judiciary. Indeed, Islamic law authorizes the governing authorities to organize the judiciary, including by allocating specific types of cases to particular kinds of judges depending on the value of the case, or by assigning judges to a specific place in which they are to exercise their jurisdictional function. In view of this, the contemporary organization of courts in the UAE can be regarded as an organic adaptation, aligned with Shariah rules on the assignment of jurisdictional authority, to the growing complexity of modern society. Therefore, we can conclude that Shariah plays a comprehensive role in the entire legal system of the United Arab Emirates, including legislation, the judicial system, and institutional and administrative work.

Keywords: Islamic jurisprudence, Shariah, UAE civil code, UAE private law

Procedia PDF Downloads 108
374 Generation of Knowledge with Self-Learning Methods for Ophthalmic Data

Authors: Klaus Peter Scherer, Daniel Knöll, Constantin Rieder

Abstract:

Problem and Purpose: Intelligent systems are available and helpful to support the human decision process, especially when complex surgical eye interventions are necessary and must be performed. Normally, such a decision support system consists of a knowledge-based module, which is responsible for the real assistance power, given by explanation and logical reasoning processes. The interview-based acquisition and generation of the complex knowledge itself is crucial, because there are different correlations between the complex parameters. Therefore, in this project, (semi-)automated self-learning methods are researched and developed to enhance the quality of such a decision support system. Methods: For ophthalmic data sets of real patients in a hospital, advanced data mining procedures seem to be very helpful. In particular, subgroup analysis methods are developed, extended, and used to analyze and find out the correlations and conditional dependencies between the structured patient data. After finding causal dependencies, a ranking must be performed for the generation of rule-based representations. For this, anonymous patient data are transformed into a special machine language format. The imported data are used as input for conditional probability algorithms to calculate the parameter distributions with respect to a given goal parameter. Results: In the field of knowledge discovery, advanced methods and applications were performed to produce operation- and patient-related correlations. New knowledge was generated by finding causal relations between the operational equipment, the medical instances, and the patient-specific history through a dependency ranking process. After transformation into association rules, logic-based representations were available for the clinical experts to evaluate the new knowledge. The structured data sets take account of about 80 parameters as characteristic features per patient. For different extended patient groups (100, 300, 500), both single-target and multi-target values were set for the subgroup analysis. The newly generated hypotheses could thus be interpreted regarding their dependence on, or independence of, the number of patients. Conclusions: The aim and the advantage of such a semi-automatic self-learning process are the extension of the knowledge base by finding new parameter correlations. The discovered knowledge is transformed into association rules and serves as a rule-based representation of the knowledge in the knowledge base. Moreover, more than one goal parameter of interest can be considered by the semi-automated learning process. With ranking procedures, the strongest premises and also conjunctively associated conditions can be found to conclude the goal parameter of interest. Thus, the knowledge hidden in structured tables or lists can be extracted as a rule-based representation. This provides real assistance power for communication with the clinical experts.
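To illustrate the kind of conditional-probability rule extraction described above, the following is a minimal sketch on hypothetical records; the attribute names and the rule_confidence helper are illustrative only, not the project's implementation:

```python
# Minimal sketch of deriving a rule-like conditional probability from
# structured records, in the spirit of the subgroup analysis described
# above. Records and attribute names are hypothetical.
records = [
    {"instrument": "A", "history": "diabetes", "outcome": "good"},
    {"instrument": "A", "history": "none",     "outcome": "good"},
    {"instrument": "B", "history": "diabetes", "outcome": "poor"},
    {"instrument": "A", "history": "diabetes", "outcome": "good"},
    {"instrument": "B", "history": "none",     "outcome": "good"},
]

def rule_confidence(records, premise, goal):
    """Estimate P(goal | premise): the confidence of the rule premise -> goal."""
    matching = [r for r in records if all(r[k] == v for k, v in premise.items())]
    if not matching:
        return 0.0
    hits = sum(1 for r in matching if all(r[k] == v for k, v in goal.items()))
    return hits / len(matching)

conf = rule_confidence(records,
                       premise={"instrument": "A"},
                       goal={"outcome": "good"})
print(f"IF instrument = A THEN outcome = good  (confidence {conf:.2f})")
```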

Keywords: expert system, knowledge-based support, ophthalmic decision support, self-learning methods

Procedia PDF Downloads 247
373 Development and Application of an Intelligent Masonry Modulation in BIM Tools: Literature Review

Authors: Sara A. Ben Lashihar

Abstract:

The heritage building information modeling (HBIM) of historical masonry buildings has expanded lately to meet the urgent needs for conservation and structural analysis. Masonry structures are unique features of ancient building architecture worldwide that have special cultural, spiritual, and historical significance. However, there is a research gap regarding the reliability of the HBIM modeling process of these structures. The HBIM modeling process of masonry structures faces significant challenges due to the inherent complexity and uniqueness of their structural systems. Most of these processes are based on tracing point clouds and rarely follow documents, archival records, or direct observation. The results of these techniques are highly abstracted models whose accuracy does not exceed LOD 200. The masonry assemblages, especially curved elements such as arches, vaults, and domes, are generally modeled with standard BIM components or in-place models, and the brick textures are graphically input. Hence, future investigation is necessary to establish a methodology to automatically generate parametric masonry components. These components are developed algorithmically according to mathematical and geometric accuracy and the validity of the survey data. The main aim of this paper is to provide a comprehensive review of the state of the art of the existing research and papers on the HBIM modeling of masonry structural elements and of the latest approaches to achieving parametric models that have both visual fidelity and high geometric accuracy. The paper reviewed more than 800 articles, proceedings papers, and book chapters focused on the "HBIM and Masonry" keywords from 2017 to 2021. The studies were downloaded from well-known, trusted bibliographic databases such as Web of Science, Scopus, Dimensions, and Lens. As a starting point, a scientometric analysis was carried out using the VOSviewer software. This software extracts the main keywords in these studies to retrieve the relevant works. It also calculates the strength of the relationships between these keywords. Subsequently, an in-depth qualitative review followed for the studies with the highest frequency of occurrence and the strongest links with the topic, according to the VOSviewer results. The qualitative review focused on the latest approaches and the future suggestions proposed in these studies. The findings of this paper can serve as a valuable reference for researchers and BIM specialists in making more accurate and reliable HBIM models for historic masonry buildings.
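As an illustration of what algorithmically generated parametric masonry components could look like, here is a minimal sketch that places voussoir centres along a semicircular arch from two parameters; the geometry and parameter names are hypothetical, and no BIM API is involved:

```python
# Sketch of parametric generation of voussoir centrelines for a
# semicircular arch, to illustrate algorithmically derived masonry
# components. Parameters are hypothetical; output is plain geometry.
import math

def semicircular_arch_voussoirs(span, n_voussoirs):
    """Return (x, y, rotation_deg) for each voussoir centre of a semicircular
    arch of the given span, measured from the left springer on the springing line."""
    radius = span / 2.0
    blocks = []
    for i in range(n_voussoirs):
        # angle at the centre of each voussoir, sweeping from right to left springer
        theta = math.pi * (i + 0.5) / n_voussoirs
        x = radius + radius * math.cos(theta)
        y = radius * math.sin(theta)
        blocks.append((x, y, math.degrees(theta) - 90.0))
    return blocks

for x, y, rot in semicircular_arch_voussoirs(span=3.0, n_voussoirs=9):
    print(f"x = {x:.2f} m, y = {y:.2f} m, rotation = {rot:.1f} deg")
```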

Keywords: HBIM, masonry, structure, modeling, automatic, approach, parametric

Procedia PDF Downloads 153
372 Hydration of Three-Piece K Peptide Fragments Studied by Means of Fourier Transform Infrared Spectroscopy

Authors: Marcin Stasiulewicz, Sebastian Filipkowski, Aneta Panuszko

Abstract:

Background: The hallmark of neurodegenerative diseases, including Alzheimer's and Parkinson's diseases, is the aggregation of abnormal forms of peptides and proteins. Water is essential to the functioning of biomolecules, and it is one of the key factors influencing protein folding and misfolding. However, hydration studies of proteins are complicated by the complexity of protein systems. The use of model compounds can facilitate the interpretation of results involving larger systems. Objectives: The goal of the research was to characterize the properties of the hydration water surrounding the two three-residue K peptide fragments INS (Isoleucine - Asparagine - Serine) and NSR (Asparagine - Serine - Arginine). Methods: Fourier-transform infrared spectra of aqueous solutions of the tripeptides were recorded on a Nicolet 8700 spectrometer (Thermo Electron Co.). Measurements were carried out at 25°C for varying molalities of the solute. To remove oscillation couplings from the water spectra and, consequently, obtain narrow O-D bands of semi-heavy water (HDO), the isotopic dilution method of HDO in H₂O was used. The difference spectra method allowed us to isolate the tripeptide-affected HDO spectrum. Results: The structural and energetic properties of water affected by the tripeptides were compared to the properties of pure water. The shift of the band gravity center (related to the mean energy of water hydrogen bonds) towards lower values with respect to pure water suggests that the energy of hydrogen bonds between water molecules surrounding the tripeptides is higher than in pure water. A comparison of the mean oxygen-oxygen distances in water affected by the tripeptides and in pure water indicates that water-water hydrogen bonds are shorter in the presence of these tripeptides. The analysis of differences in the oxygen-oxygen distance distributions between the tripeptide-affected water and pure water indicates that around the tripeptides, the contribution of water molecules with the mean energy of hydrogen bonds decreases, while the contribution of strong hydrogen bonds simultaneously increases. Conclusions: It was found that hydrogen bonds between water molecules in the hydration sphere of the tripeptides are shorter and stronger than in pure water. This means that in the presence of the tested tripeptides, the structure of water is strengthened compared to pure water. Moreover, it has been shown that in the vicinity of the Asparagine - Serine - Arginine fragment, water forms stronger and shorter hydrogen bonds. Acknowledgments: This work was funded by the National Science Centre, Poland (grant 2017/26/D/NZ1/00497).
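A minimal sketch of the band gravity-centre (first spectral moment) calculation referred to above, applied to a synthetic Gaussian O-D band rather than measured spectra:

```python
# Sketch of the band gravity-centre calculation used to compare
# solute-affected and bulk HDO spectra. The band below is a synthetic
# Gaussian, not measured data.
import numpy as np

def band_gravity_center(wavenumbers, absorbance):
    """Discrete first spectral moment: sum(nu_i * A_i) / sum(A_i) over the band."""
    wavenumbers = np.asarray(wavenumbers, dtype=float)
    absorbance = np.asarray(absorbance, dtype=float)
    return (wavenumbers * absorbance).sum() / absorbance.sum()

nu = np.linspace(2200, 2800, 601)              # cm^-1, O-D stretching region
band = np.exp(-((nu - 2505.0) / 80.0) ** 2)    # synthetic affected-water band
print(f"gravity centre = {band_gravity_center(nu, band):.1f} cm^-1")
# A lower gravity centre than the pure-water reference would indicate, on
# average, stronger water-water hydrogen bonds in the hydration sphere.
```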

Keywords: amyloids, K-peptide, hydration, FTIR spectroscopy

Procedia PDF Downloads 169
371 An Experimental Study on the Coupled Heat Source and Heat Sink Effects on Solid Rockets

Authors: Vinayak Malhotra, Samanyu Raina, Ajinkya Vajurkar

Abstract:

Enhancing rocket efficiency by controlling external factors in solid rocket motors has been an active area of research for most terrestrial and extra-terrestrial system operations. Appreciable work has been done, but the complexity of the problem has prevented thorough understanding due to heterogeneous heat and mass transfer. On record, severe incidents have surfaced, amounting to irreplaceable loss of human life, instruments, and facilities, with huge amounts of money being invested every year. The coupled effect of an external heat source and an external heat sink is an aspect yet to be articulated in combustion. Better understanding of this coupled phenomenon will induce higher safety standards, efficient missions, and reduced hazard risks, with better designing, validation, and testing. The experiment will help in understanding the coupled effect of an external heat sink and heat source on the burning process, contributing to better combustion and fire safety, which are very important for efficient and safer rocket flights and space missions. Safety is the most prevalent issue in rockets, which, compounded by poor combustion efficiency, emphasizes research efforts to evolve superior rockets. This has real engineering, scientific, and practical significance for systems and applications. One potential application is Solid Rocket Motors (S.R.M.). The study may help in: (i) understanding the effect on the efficiency of core engines due to the primary boosters if considered as a source, (ii) choosing suitable heat sink materials for space missions so as to vary the efficiency of the solid rocket depending on the mission, and (iii) giving an idea of how the preheating of a successive stage due to the previous stage acting as a source may affect the mission. The present work monitors the resultant temperature and thus the heat transfer, which is expected to be non-linear because of heterogeneous heat and mass transfer. The study will deepen the understanding of controlled inter-energy conversions and the coupled effect of external source(s)/sink(s) surrounding the burning fuel, eventually leading to better combustion and thus better propulsion. The work is motivated by the need for enhanced fire safety and better rocket efficiency. The specific objective of the work is to understand the coupled effect of an external heat source and sink on propellant burning and to investigate the role of key controlling parameters. Results so far indicate that there exists a singularity in the coupled effect. The dominance of the external heat sink and heat source decides the relative rocket flight in Solid Rocket Motors (S.R.M.).

Keywords: coupled effect, heat transfer, sink, solid rocket motors, source

Procedia PDF Downloads 209
370 Voluntary Disclosure of Sustainability Information in Malaysian Federal-Level Statutory Bodies

Authors: Siti Zabedah Saidin, Aidi Ahmi, Azharudin Ali, Wan Norhayati Wan Ahmad

Abstract:

In today's increasingly complex and interconnected world, the concept of sustainability has transcended mere corporate social responsibility, evolving into a fundamental driver of organizational behaviour and disclosure. This content analysis study delves into the Malaysian federal-level statutory bodies’ annual reports for the year 2021, aiming to elucidate the extent of sustainability disclosures within the non-financial sections of these reports. The escalating global emphasis on sustainability has prompted organizations to embrace transparency as a means to demonstrate their commitment to environmental, social, and governance (ESG) considerations. Voluntary sustainability disclosure has emerged as a crucial channel through which organizations communicate their efforts, initiatives, and impacts in these areas, thereby fostering trust and accountability with stakeholders. The study aims to identify and examine the types of sustainability information disclosed voluntarily by the federal-level statutory bodies, concentrating on the non-financial sections of the annual reports. To achieve this, the study adopts a simplified disclosure index, a pragmatic tool that quantifies the extent of sustainability reporting in a standardized manner. Using convenience sampling, the study selects a sample of annual reports from the federal-level statutory bodies in Malaysia, as provided on their respective websites. The content analysis is centred on the non-financial sections of these reports, allowing for an in-depth exploration of sustainability disclosures. The findings of the study present the extent to which Malaysian federal-level statutory bodies embrace sustainability reporting. Through thorough content analysis, the study uncovers diverse dimensions of sustainability information, encompassing environmental impact assessments, social engagement endeavours, and governance frameworks. This reveals a deliberate effort by these bodies to encapsulate their holistic organizational contributions and challenges, transcending traditional financial metrics. This research contributes to the existing literature by providing insights into the evolving landscape of sustainability disclosure practices among Malaysian federal-level statutory bodies. The findings underline the proactive nature of these bodies in voluntarily sharing sustainability-related information, reflecting their recognition of the interconnectedness between organizational success and societal well-being. Furthermore, the study underscores the potential influence of regulatory guidelines and societal expectations in shaping the extent and nature of voluntary sustainability disclosures. Organizations are not merely responding to regulatory mandates but are actively aligning with global sustainability goals and stakeholder expectations. As organizations continue to navigate the intricate web of stakeholder expectations and sustainability imperatives, this study enriches the discourse surrounding transparency and sustainability reporting. The analysis emphasizes the important role of non-financial disclosures in portraying a holistic organizational narrative. In an era where stakeholders demand accountability, and the interconnectedness of global challenges necessitates collaborative action, the voluntary disclosure of sustainability information stands as a testament to the commitment of Malaysian federal-level statutory bodies in shaping a more sustainable future.
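To make the simplified disclosure index concrete, here is a minimal sketch of an unweighted index (items disclosed divided by checklist length); the checklist items and scores are illustrative, not the study's instrument:

```python
# Sketch of an unweighted disclosure index of the kind described above:
# index = items disclosed / items in the checklist. The checklist and
# the sample report are illustrative placeholders.
checklist = [
    "environmental impact assessment",
    "energy or emissions data",
    "community engagement programmes",
    "employee welfare initiatives",
    "governance framework description",
    "stakeholder engagement statement",
]

def disclosure_index(disclosed_items, checklist):
    """Proportion of checklist items found in the annual report (0 to 1)."""
    return sum(1 for item in checklist if item in disclosed_items) / len(checklist)

report_a = {"environmental impact assessment",
            "community engagement programmes",
            "governance framework description"}
print(f"disclosure index = {disclosure_index(report_a, checklist):.2f}")  # 0.50
```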

Keywords: voluntary disclosure, sustainability information, annual report, federal-level statutory body

Procedia PDF Downloads 45
369 Human Immuno-Deficiency Virus Co-Infection with Hepatitis B Virus and Baseline CD4+ T Cell Count among Patients Attending a Tertiary Care Hospital, Nepal

Authors: Soma Kanta Baral

Abstract:

Background: Since 1981, when the first AIDS case was reported, more than 34 million people worldwide have been infected with HIV. Almost 95 percent of people infected with HIV live in developing countries. As HBV and HIV share similar routes of transmission through sexual intercourse or parenteral drug injection, co-infection is common. Because of the limited access to healthcare and HIV treatment in developing countries, HIV-infected individuals present late for care. Enumeration of the CD4+ T cell count at the time of diagnosis has been useful to initiate therapy in HIV-infected individuals. The baseline CD4+ T cell count shows high immunological variability among patients. Methods: This prospective study was done in the serology section of the Department of Microbiology over a period of one year, from August 2012 to July 2013. A total of 13,037 individuals subjected to HIV testing were included in the study, comprising 4,982 males and 8,055 females. Blood samples were collected aseptically by venipuncture following standard operating procedures into clean, dry test tubes. All blood samples were screened for HIV, as described by the WHO algorithm, using immunochromatography rapid kits; further confirmation was done by the Biokit ELISA method as per the manufacturer's guidelines. After informed consent, HIV-positive individuals were screened for HBsAg by immunochromatography rapid kits (Hepacard), again with confirmation by the Biokit ELISA method as per the manufacturer's guidelines. EDTA blood samples were collected from the HIV-seropositive individuals for baseline CD4+ T cell counts, which were determined using a FACSCalibur flow cytometer (BD). Results: Among the 13,037 individuals screened for HIV, 104 (0.8%) were found to be infected, comprising 69 (66.34%) males and 35 (33.65%) females. The study showed that high infection rates were noted among housewives (28.7%), the active age group (30.76%), rural areas (56.7%), and the heterosexual route of transmission (80.9%). Out of the total HIV-infected individuals, HBV co-infection was found in 6 (5.7%). All co-infected individuals were married males above the age of 25 years with a heterosexual route of transmission. The baseline CD4+ T cell count of HIV-infected patients was higher (mean CD4+ T cell count: 283 cells/cu.mm) than that of HBV co-infected patients (mean CD4+ T cell count: 91 cells/cu.mm). The majority (77.2%) of HIV-infected and all co-infected individuals presented late at our center (CD4+ T cell count < 350/cu.mm) for diagnosis and care. The majority of the co-infected, 4 (80%), presented late with advanced AIDS stage (CD4+ count < 200/cu.mm). Conclusions: The study showed a high percentage of HIV-seropositive and co-infected individuals. The baseline CD4+ T cell count of the majority of HIV-infected individuals was found to be low. Hence, more sustained and vigorous awareness campaigns and counseling still need to be done in order to promote early diagnosis and management.

Keywords: HIV/AIDS, HBsAg, co-infection, CD4+

Procedia PDF Downloads 203
368 Numerical Simulation of Waves Interaction with a Free Floating Body by MPS Method

Authors: Guoyu Wang, Meilian Zhang, Chunhui LI, Bing Ren

Abstract:

In recent decades, a variety of floating structures have played a crucial role in ocean and marine engineering, such as ships, offshore platforms, floating breakwaters, fish farms, floating airports, etc. It is common for floating structures to suffer from loadings under waves, and the responses of structures mounted in marine environments are significantly related to the wave impacts. The interaction between surface waves and floating structures is one of the important issues in ship or marine structure design for increasing performance and efficiency. With the progress of computational fluid dynamics, a number of numerical models based on the Navier-Stokes (NS) equations in the time domain have been developed to explore the above problem, such as the finite difference method or the finite volume method. Those traditional numerical simulation techniques for moving bodies are grid-based and may encounter difficulties when treating large free surface deformations and moving boundaries. In these models, the moving structures in a Lagrangian formulation need to be appropriately described on grids, and special treatment of the moving boundary is inevitable. Moreover, in mesh-based models, the movement of the grid near the structure, or the communication between the moving Lagrangian structure and the Eulerian meshes, will increase the algorithmic complexity. Fortunately, these challenges can be avoided by meshless particle methods. In the present study, a moving particle semi-implicit (MPS) model is explored for the numerical simulation of fluid-structure interaction with surface flows, especially for the coupling of fluid and a moving rigid body. An equivalent momentum transfer method is proposed and derived for the coupling of the fluid and the moving rigid body. The structure is discretized into a group of solid particles, which are treated as fluid particles involved in solving the NS equations together with the surrounding fluid particles. Momentum conservation is ensured by the transfer from those fluid particles to the corresponding solid particles. Then, the positions of the solid particles are updated to keep the initial shape of the structure. Using the proposed method, the motions of a free-floating body in regular waves are numerically studied. The wave surface elevation and the dynamic response of the floating body are presented. There is good agreement when the numerical results, such as the sway, heave, and roll of the floating body, are compared with the experimental and other numerical data. It is demonstrated that the presented MPS model is effective for the numerical simulation of fluid-structure interaction.
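A highly simplified two-dimensional sketch of the rigid-body side of the coupling described above: particle momenta are aggregated into a rigid translation and rotation about the centre of mass, and the particles are then moved rigidly. This is a conceptual illustration under those simplifications, not the MPS solver itself:

```python
# Conceptual 2-D sketch: solid particles receive momentum like fluid
# particles, the total is transferred to the rigid body, and particle
# positions are advanced rigidly to preserve the body shape.
import numpy as np

def rigid_body_update(positions, velocities, masses, dt):
    """Aggregate particle momenta into a rigid translation plus rotation
    about the centre of mass, then advance all particles rigidly."""
    m_total = masses.sum()
    com = (positions * masses[:, None]).sum(axis=0) / m_total
    v_body = (velocities * masses[:, None]).sum(axis=0) / m_total  # linear momentum / mass
    r = positions - com
    # angular momentum and moment of inertia about the centre of mass (2-D scalars)
    L = (masses * (r[:, 0] * velocities[:, 1] - r[:, 1] * velocities[:, 0])).sum()
    I = (masses * (r ** 2).sum(axis=1)).sum()
    omega = L / I
    # rigid velocity field = translation + rotation, then advance positions
    v_rigid = v_body + omega * np.column_stack((-r[:, 1], r[:, 0]))
    return positions + v_rigid * dt, v_rigid

pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vel = np.array([[0.1, 0.0], [0.1, 0.2], [0.0, 0.0], [0.0, 0.2]])
mass = np.ones(4)
new_pos, new_vel = rigid_body_update(pos, vel, mass, dt=0.01)
print(new_pos)
```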

Keywords: floating body, fluid structure interaction, MPS, particle method, waves

Procedia PDF Downloads 60
367 Ultra-Tightly Coupled GNSS/INS Based on High Degree Cubature Kalman Filtering

Authors: Hamza Benzerrouk, Alexander Nebylov

Abstract:

In classical GNSS/INS integration designs, the loosely coupled approach uses the GNSS-derived position and velocity as the measurement vector. This design is suboptimal from the standpoint of handling GNSS outliers/outages. The tightly coupled GPS/INS navigation filter mixes the GNSS pseudo-range and inertial measurements and obtains the vehicle navigation state as the final navigation solution. The ultra-tightly coupled GNSS/INS design combines the I (in-phase) and Q (quadrature) accumulator outputs of the GNSS receiver signal tracking loops and the INS navigation filter function into a single Kalman filter variant (EKF, UKF, SPKF, CKF, and HCKF). The EKF and UKF are the most used nonlinear filters in the literature and are well adapted to inertial navigation state estimation when integrated with GNSS signal outputs. In this paper, it is proposed to move a step forward with more accurate filters and modern approaches called Cubature and High-Degree Cubature Kalman Filtering methods, on the basis of previous results on state estimation based on INS/GNSS integration. The Cubature Kalman Filter (CKF) and the High-Degree Cubature Kalman Filter (HCKF) are the references for the recently developed generalized cubature-rule-based Kalman Filter (GCKF). High-degree cubature rules are the kernel of the new solution, providing more accurate estimation with less computational complexity compared with the Gauss-Hermite Quadrature Kalman Filter (GHQKF). The Gauss-Hermite Kalman Filter (GHKF) is not selected in this work because of its limited real-time implementability in high-dimensional state spaces. In the ultra-tightly (deeply) coupled GNSS/INS system, a dynamics EKF is used with transition matrix factorization together with GNSS block processing, which is well described in the paper; the presented approach assumes that the intermediate frequency (IF) is available, using correlator samples at a rate of 500 Hz. GNSS (GPS+GLONASS) measurements are assumed to be available, and the modern SPKF and Cubature Kalman Filter (CKF) are compared with new versions of the CKF, called high-order CKF, based on spherical-radial cubature rules developed at the fifth order in this work. The estimation accuracy of the high-degree CKF is expected to be comparable to that of the GHKF; results of state estimation are then observed and discussed for different initialization parameters. Results show more accurate navigation state estimation and a more robust GNSS receiver when the ultra-tightly coupled approach is applied based on the High-Degree Cubature Kalman Filter.
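For reference, a minimal sketch of the standard third-degree spherical-radial cubature rule at the core of the CKF prediction step (not the fifth-degree, high-degree variant developed in the paper); the toy transition function and noise values are placeholders:

```python
# Sketch of the third-degree spherical-radial cubature rule: 2n equally
# weighted points at +/- sqrt(n) times the columns of the covariance
# square root. Toy dynamics and noise are placeholders, not the paper's model.
import numpy as np

def cubature_points(mean, cov):
    n = mean.size
    S = np.linalg.cholesky(cov)                  # covariance square root
    offsets = np.sqrt(n) * np.hstack((S, -S))    # n x 2n offsets
    return mean[:, None] + offsets               # 2n cubature points (columns)

def ckf_predict(mean, cov, f, process_noise):
    """Propagate cubature points through f and recombine (prediction step)."""
    pts = cubature_points(mean, cov)
    prop = np.apply_along_axis(f, 0, pts)        # f applied to each column
    mean_pred = prop.mean(axis=1)                # equal weights 1/(2n)
    diff = prop - mean_pred[:, None]
    cov_pred = diff @ diff.T / pts.shape[1] + process_noise
    return mean_pred, cov_pred

# Toy 2-state example with a mildly nonlinear transition
f = lambda x: np.array([x[0] + 0.1 * x[1], 0.99 * x[1] + 0.01 * np.sin(x[0])])
m, P = np.zeros(2), np.eye(2)
m_pred, P_pred = ckf_predict(m, P, f, process_noise=0.01 * np.eye(2))
print(m_pred, P_pred)
```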

Keywords: GNSS, INS, Kalman filtering, ultra tight integration

Procedia PDF Downloads 270
366 Global Winners versus Local Losers: Globalization, Identity and Tradition in Spanish Club Football

Authors: Jim O'brien

Abstract:

Contemporary global representation and consumption of La Liga across a plethora of media platforms and outlets has had significant implications for the historical, political and cultural forces which shaped the development of Spanish club football. This has established and reinforced a hierarchy of a small number of teams belonging to, or aspiring to belong to, a cluster of global elite clubs seeking to imitate the blueprint of the English Premier League in respect of corporate branding and marketing, in order to secure a global fan base through success and exposure in La Liga itself and through the Champions League. The synthesis between globalization, global sport and the status of high-profile clubs has created radical change within the folkloric iconography of Spanish football. The main focus of this paper is to critically evaluate the consequences of globalization for the rich tapestry at the core of the game’s distinctive history in Spain. The seminal debate underpinning the study considers whether the divergent aspects of globalization have acted as a malevolent force, eroding tradition, causing financial meltdown and reducing much of the fabric of club football to the status of bystanders, or have promoted a renaissance of these traditions, securing their legacies through new fans and audiences. The study draws on extensive sources on the history, politics and culture of Spanish football, in both English and Spanish. It also uses primary and archive material derived from interviews and fieldwork undertaken with scholars, media professionals and club representatives in Spain. The paper has four main themes. Firstly, it contextualizes the key historical, political and cultural forces which shaped the landscape of Spanish football from the late nineteenth century. The seminal notions of region, locality and cultural divergence are pivotal to this discourse. The study then considers the relationship between football, ethnicity and identity as a barometer of continuity and change, suggesting that tradition is being reinvented and re-framed to reflect the shifting demographic and societal patterns within the Spanish state. Following on from this, consideration is given to the paradoxical function of ‘El Clasico’ and the dominant duopoly of the FC Barcelona – Real Madrid axis in both eroding tradition in the global nexus of football’s commodification and in protecting historic political rivalries. To most global consumers of La Liga, the mega-spectacle and hyperbole of ‘El Clasico’ is the essence of Spanish football, with cultural misrepresentation and distortion catapulting the event to the global media audience. Finally, the paper examines La Liga as a sporting phenomenon in which elite clubs, cult managers and galacticos serve as commodities on the altar of mass consumption in football’s global entertainment matrix. These processes accentuate a homogeneous mosaic of cultural conformity which obscures local, regional and national identities and paradoxically fuses the global with the local to maintain the distinctive hue of La Liga, as witnessed by the extraordinary successes of Atlético Madrid and FC Eibar in recent seasons.

Keywords: Spanish football, globalization, cultural identity, tradition, folklore

Procedia PDF Downloads 293