Search results for: adaptable business models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9326

6116 The Anti-Globalization Movement, Brexit, Outsourcing and the Current State of Globalization

Authors: Alexis Naranjo

Abstract:

On the current global stage, mixed feelings against globalization have begun to take shape in the wake of events such as Brexit and the 2016 US election. Perceptions of globalization have crystallized into a resistance movement known as the 'anti-globalization movement'. This paper examines the current global stage versus leadership decisions at a time when market integration is no longer seen as a booster of economic growth. The biggest economy in the world, the United States of America, has started to face the beginnings of 'anti-globalization'; from the United Kingdom to the United States, a new strategy to help local economies has begun to emerge. A new nationalist movement focused on local economies now represents a direct threat to globalization, trade agreements, wages, and free markets. Business leaders of multinationals today face a new dilemma: how to address the perception that globalization and outsourcing destroy and take away jobs from local economies. An initial reading of the literature and data reveals that companies in Western countries like the US see many risks associated with outsourcing; however, the cost savings associated with outsourcing outweigh concerns about the firm's local reputation. Taking India as a leading supplier of IT developers, analysts, and call centers, India is an industrializing nation that has not yet secured its spot and title; it has nonetheless emerged as a powerhouse of the outsourcing industry and holds the number one spot in the world for outsourced IT services. Thanks to the globalization of economies and markets around the globe, ideas for increasing productivity at lower cost have existed for years and continue to offer new options to businesses across industries.
The economic growth of the information technology (IT) industry in India is an example of the power of globalization, which in the case of India has been tremendous, especially in the economic arena. This research paper concentrates on understanding the behavior of business leaders. First, how will multinationals' leaders face the new challenges, and what actions help them lead in turbulent times? Second, if outsourcing or withdrawing from a market is an option, what are the consequences, and how does one communicate and negotiate from the business leader's perspective? Finally, is the focus of leaders on financial results, or do they have a different goal? To answer these questions, this study draws on the most recent data available to outline why outsourcing is an option and how and why those decisions are made. The research also explores perceptions of the phenomenon of outsourcing and how globalization has contributed to its own questioning.

Keywords: anti-globalization, globalization, leadership, outsourcing

Procedia PDF Downloads 178
6115 Linguistic Summarization of Structured Patent Data

Authors: E. Y. Igde, S. Aydogan, F. E. Boran, D. Akay

Abstract:

Patent data play an increasingly important role in economic growth, innovation, technical advantage, business strategy, and even competition between countries. Analyzing patent data is crucial, since patents cover a large part of the world's technological information. In this paper, we have used the linguistic summarization technique to test the validity of hypotheses about patent data stated in the literature.
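As an illustration of what a linguistic summary is, here is a minimal sketch in the style of Yager's "Most records are S" summaries with Zadeh's fuzzy quantifier "most". The patent records, the summarizer, and the quantifier breakpoints are hypothetical illustrations, not data or definitions from the paper.

```python
# Minimal sketch of linguistic summarization: the degree of truth of a
# summary like "Most patents are highly cited" is a fuzzy quantifier
# applied to the fraction of records satisfying the summarizer.
# All data and thresholds below are hypothetical.

def most(r):
    """Fuzzy quantifier 'most': 0 below 0.3, 1 above 0.8, linear between."""
    if r <= 0.3:
        return 0.0
    if r >= 0.8:
        return 1.0
    return (r - 0.3) / 0.5

def truth_of_summary(records, summarizer):
    """Degree of truth of 'Most records satisfy the summarizer'."""
    r = sum(summarizer(x) for x in records) / len(records)
    return most(r)

# Hypothetical patent records: forward-citation counts.
citations = [12, 30, 45, 3, 50, 28, 41, 9]
highly_cited = lambda c: 1.0 if c >= 10 else 0.0  # crisp summarizer for simplicity
truth = truth_of_summary(citations, highly_cited)  # 6/8 satisfy -> most(0.75) = 0.9
```

In practice the summarizer itself would also be fuzzy (e.g. a membership function over citation counts), but the truth-evaluation step has the same shape.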

Keywords: data mining, fuzzy sets, linguistic summarization, patent data

Procedia PDF Downloads 257
6114 Numerical Simulation of Waves Interaction with a Free Floating Body by MPS Method

Authors: Guoyu Wang, Meilian Zhang, Chunhui LI, Bing Ren

Abstract:

In recent decades, a variety of floating structures have played a crucial role in ocean and marine engineering, such as ships, offshore platforms, floating breakwaters, fish farms, and floating airports. Floating structures commonly suffer loadings under waves, and the responses of structures mounted in marine environments are closely related to wave impacts. The interaction between surface waves and floating structures is one of the important issues in ship and marine structure design for increasing performance and efficiency. With the progress of computational fluid dynamics, a number of numerical models based on the NS equations in the time domain have been developed to explore this problem, such as the finite difference method or the finite volume method. These traditional numerical simulation techniques for moving bodies are grid-based and may encounter difficulties when treating a large free surface deformation and a moving boundary. In such models, moving structures in a Lagrangian formulation must be appropriately described on grids, and special treatment of the moving boundary is inevitable. Moreover, in mesh-based models, the movement of the grid near the structure, or the communication between the moving Lagrangian structure and the Eulerian meshes, increases the algorithm's complexity. These challenges can be avoided by meshless particle methods. In the present study, a moving particle semi-implicit (MPS) model is explored for the numerical simulation of fluid-structure interaction with surface flows, especially the coupling of the fluid with a moving rigid body. An equivalent momentum transfer method is proposed and derived for this coupling. The structure is discretized into a group of solid particles, which are treated as fluid particles when solving the NS equations together with the surrounding fluid particles.
Momentum conservation is ensured by the transfer from those fluid particles to the corresponding solid particles. Then, the positions of the solid particles are updated to keep the initial shape of the structure. Using the proposed method, the motions of a free-floating body in regular waves are numerically studied. The wave surface elevation and the dynamic response of the floating body are presented. Good agreement is found when the numerical results, such as the sway, heave, and roll of the floating body, are compared with experimental and other numerical data. It is demonstrated that the presented MPS model is effective for the numerical simulation of fluid-structure interaction.
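The momentum-transfer idea can be sketched in 2D as follows: solid particles first receive provisional velocities from the fluid step, then the rigid body's translational and angular velocities are recovered by conserving linear and angular momentum, and the particle velocities are reset to rigid-body motion. The positions, velocities, and masses are illustrative; the actual MPS formulation in the paper is more involved.

```python
import numpy as np

# 2D sketch of projecting provisional solid-particle velocities (from the
# fluid solve) onto a rigid-body motion, conserving linear and angular
# momentum about the center of mass. Illustrative numbers only.

def rigid_body_update(pos, vel_provisional, mass):
    """Project provisional particle velocities onto a rigid-body motion (2D)."""
    m_total = mass.sum()
    com = (mass[:, None] * pos).sum(axis=0) / m_total
    v_com = (mass[:, None] * vel_provisional).sum(axis=0) / m_total  # P / M
    r = pos - com
    # angular momentum about COM: sum of m * (rx*vy - ry*vx)
    L = (mass * (r[:, 0] * vel_provisional[:, 1] - r[:, 1] * vel_provisional[:, 0])).sum()
    I = (mass * (r ** 2).sum(axis=1)).sum()  # moment of inertia about COM
    omega = L / I
    # rigid velocity: v_com + omega x r  (in 2D: omega * (-ry, rx))
    vel_rigid = v_com + omega * np.stack([-r[:, 1], r[:, 0]], axis=1)
    return vel_rigid, v_com, omega

pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vel = np.array([[1.0, 0.0], [1.0, 0.2], [1.0, -0.2], [1.0, 0.0]])
mass = np.ones(4)
vel_rigid, v_com, omega = rigid_body_update(pos, vel, mass)
```

Resetting the particle velocities to `vel_rigid` (and advecting them with the recovered `v_com` and `omega`) is what keeps the discretized structure rigid from step to step.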

Keywords: floating body, fluid structure interaction, MPS, particle method, waves

Procedia PDF Downloads 59
6113 Chronic Hypertension, Aquaporin and Hydraulic Conductivity: A Perspective on Pathological Connections

Authors: Chirag Raval, Jimmy Toussaint, Tieuvi Nguyen, Hadi Fadaifard, George Wolberg, Steven Quarfordt, Kung-ming Jan, David S. Rumschitzki

Abstract:

Numerous studies examine aquaporins' role in osmotic water transport in various systems, but virtually none focus on aquaporins' role in hydrostatically driven water transport involving mammalian cells, save for our laboratory's recent study of aortic endothelial cells. Here we investigate aquaporin-1 expression and function in the aortic endothelium in two high-renin rat models of hypertension: the spontaneously hypertensive, genomically altered Wistar-Kyoto rat variant and Sprague-Dawley rats made hypertensive by two-kidney, one-clip Goldblatt surgery. We measured aquaporin-1 expression in aortic endothelial cells from whole rat aortas by quantitative immunohistochemistry, and function by measuring the pressure-driven hydraulic conductivities of excised rat aortas with both intact and denuded endothelia on the same vessel. We use these measurements to calculate the effective intimal hydraulic conductivity, which is a combination of endothelial and subendothelial components. We observed well-correlated enhancements in aquaporin-1 expression and function in both hypertensive rat models, as well as in aortas from normotensive rats whose expression was upregulated by 2 h of forskolin treatment. Upregulated aquaporin-1 expression and function may be a response to hypertension that critically determines conduit artery vessel wall viability and long-term susceptibility to atherosclerosis.
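A hedged sketch of the arithmetic suggested by the intact-versus-denuded measurements: if the intima sits in hydraulic series with the deeper wall layers, its effective conductivity follows from subtracting resistances (resistance = 1/Lp). The exact model used in the paper may differ, and the numbers below are illustrative only.

```python
# Series-resistance sketch: the intact wall is assumed to be the intima in
# hydraulic series with the deeper layers, and the denuded vessel measures
# the deeper layers alone. Values are illustrative, not from the paper.

def intimal_conductivity(lp_intact, lp_denuded):
    """Effective intimal Lp from intact and endothelium-denuded measurements."""
    r_intima = 1.0 / lp_intact - 1.0 / lp_denuded  # series resistances subtract
    return 1.0 / r_intima

lp_intact = 2.0e-9   # cm / (s * mmHg), illustrative
lp_denuded = 6.0e-9  # wall without endothelium conducts more
lp_intima = intimal_conductivity(lp_intact, lp_denuded)  # 3.0e-9
```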

Keywords: acute hypertension, aquaporin-1, hydraulic conductivity, hydrostatic pressure, aortic endothelial cells, transcellular flow

Procedia PDF Downloads 216
6112 Corporate Water Footprint Assessment: The Case of Tata Steel

Authors: Sujata Mukherjee, Arunavo Mukherjee

Abstract:

Water covers 70 per cent of our planet; however, freshwater is incredibly rare, and water scarcity has been listed among the highest-impact global risks. The problems related to freshwater scarcity multiply: the human population has more than doubled, and climate change is altering water cycles, leading to droughts, floods, and a rise in water pollution. Businesses, governments, and local communities are constrained by water scarcity and face growing challenges to their growth and sustainability. Water footprinting as an indicator of water use was introduced in 2002. A business water footprint measures the total water consumed to produce the goods and services the business provides; it combines the water that goes into the production and manufacturing of a product or service with the water used throughout the supply chain and during the use of the product. A case study approach was applied, describing the efforts of Tata Steel. It is based on a series of semi-structured in-depth interviews with top executives of the company, as well as observation and content analysis of internal and external documents about the company's efforts in sustainable water management. Tata Steel draws the water required for industrial use from surface water sources, primarily perennial rivers and streams, internal reservoirs, and municipal sources. The focus of the present study was to explore Tata Steel's engagement in sustainable water management, focusing on water footprint accounting as a tool to account for water use in the steel supply chain at its Jamshedpur plant. The findings enabled the researchers to conclude that no sources of water are adversely affected by the company's production of steel at Jamshedpur.

Keywords: sustainability, corporate responsibility, water management, risk management, business engagement

Procedia PDF Downloads 256
6111 Factors Affecting Entrepreneurial Behavior and Performance of Youth Entrepreneurs in Malaysia

Authors: Mohd Najib Mansor, Nur Syamilah Md. Noor, Abdul Rahim Anuar, Shazida Jan Mohd Khan, Ahmad Zubir Ibrahim, Badariah Hj Din, Abu Sufian Abu Bakar, Kalsom Kayat, Wan Nurmahfuzah Jannah Wan Mansor

Abstract:

This study focused on the behavior of youth entrepreneurs, especially entrepreneurial self-efficacy, and their performance in micro SMEs in Malaysia. Entrepreneurship development calls for support from various quarters, and above all the need exists to initiate a youth entrepreneurship culture and drive amongst the youth in society. Although backed by government and non-government organizations, micro-entrepreneurs still face challenges that greatly delay their progress and growth, and consequently their contribution towards economic advancement. Micro-entrepreneurs are confronted with unique difficulties such as uncertainty, innovation, and evolution. Entrepreneurial characteristics such as need for achievement, internal locus of control, risk-taking, and innovation have been recognized as highly associated with entrepreneurial behavior. The data in this study were obtained from the Department of Statistics, Malaysia. A random sample of 830 micro-entrepreneur respondents was drawn across 14 states. The study adopted a quantitative approach whereby a set of questionnaires was used to gather data, and multiple regression analysis was chosen as the method of analysis. The result of this study is expected to provide insight into the factors affecting the entrepreneurial behavior and performance of youth entrepreneurs in micro SMEs. The findings showed that Malaysian youth entrepreneurs lack the entrepreneurial self-efficacy needed to accomplish greater success in their business ventures. The study recommends establishing entrepreneurial schools that expose youth to entrepreneurship from an early age and developing special training focused on the creation of business networks, so that a continuous entrepreneurial culture is crafted.

Keywords: youth entrepreneurs, micro entrepreneurs, entrepreneurial self-efficacy, entrepreneurial performance

Procedia PDF Downloads 282
6110 Modeling Standpipe Pressure Using Multivariable Regression Analysis by Combining Drilling Parameters and a Herschel-Bulkley Model

Authors: Seydou Sinde

Abstract:

The aim of this paper is to formulate mathematical expressions that can be used to estimate the standpipe pressure (SPP). The developed formulas take into account the main factors that, directly or indirectly, affect the behavior of SPP values. Fluid rheology and well hydraulics are some of these essential factors. Mud plastic viscosity, yield point, flow power, consistency index, flow rate, and drillstring and annular geometries are represented by the frictional pressure (Pf), which is one of the input independent parameters and is calculated, in this paper, using the Herschel-Bulkley rheological model. Other input independent parameters include the rate of penetration (ROP), applied load or weight on the bit (WOB), bit revolutions per minute (RPM), bit torque (TRQ), and the hole inclination and direction coupled in the hole curvature or dogleg (DL). The technique of repeating parameters and the Buckingham Pi theorem are used to reduce the input independent parameters to the dimensionless revolutions per minute (RPMd), the dimensionless torque (TRQd), and the dogleg, which is already in the dimensionless form of radians. Multivariable linear and polynomial regression using PTC Mathcad Prime 4.0 is applied to determine the relationships between the dependent parameter, SPP, and the three dimensionless groups. Three models proved sufficiently satisfactory to estimate the standpipe pressure: multivariable linear regression model 1, containing three regression coefficients, for vertical wells; multivariable linear regression model 2, containing four regression coefficients, for deviated wells; and a multivariable polynomial quadratic regression model, containing six regression coefficients, for both vertical and deviated wells.
Although linear regression model 2 (with four coefficients) is relatively more complex and contains an additional term compared with linear regression model 1 (with three coefficients), the former did not add significant improvement over the latter except for some minor values. Thus, the effect of the hole curvature or dogleg is insignificant and can be omitted from the input independent parameters without significant loss of accuracy. The polynomial quadratic regression model is considered the most accurate model due to its relatively higher accuracy in most of the cases. Data from nine wells in the Middle East were used to run the developed models, with satisfactory results provided by all of them; the multivariable polynomial quadratic regression model gave the best and most accurate results. These models are useful not only to monitor and predict the values of SPP with accuracy, but also to control and check the integrity of the well hydraulics early and to take corrective actions should any unexpected problems appear, such as pipe washouts, jet plugging, excessive mud losses, fluid gains, kicks, etc.
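The regression step can be sketched as follows, assuming (purely for illustration) a quadratic in two dimensionless groups, which yields six terms: 1, x1, x2, x1^2, x2^2, x1*x2. The data are synthetic stand-ins for the field measurements, and the paper's exact term selection is not reproduced here.

```python
import numpy as np

# Sketch of a multivariable quadratic regression of SPP on two dimensionless
# groups. Synthetic, noise-free data; six regression coefficients as in the
# quadratic model described above, though the paper's exact terms may differ.

def quadratic_design(x1, x2):
    """Design matrix with the six quadratic terms 1, x1, x2, x1^2, x2^2, x1*x2."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

rng = np.random.default_rng(0)
rpm_d = rng.uniform(0.5, 2.0, 50)   # dimensionless RPM (synthetic)
trq_d = rng.uniform(0.1, 1.0, 50)   # dimensionless torque (synthetic)
true_beta = np.array([100.0, 20.0, 35.0, 5.0, -8.0, 12.0])
spp = quadratic_design(rpm_d, trq_d) @ true_beta  # synthetic SPP observations

# least-squares fit recovers the six coefficients
beta_hat, *_ = np.linalg.lstsq(quadratic_design(rpm_d, trq_d), spp, rcond=None)
```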

Keywords: standpipe, pressure, hydraulics, nondimensionalization, parameters, regression

Procedia PDF Downloads 71
6109 In-Silico Fusion of Bacillus Licheniformis Chitin Deacetylase with Chitin Binding Domains from Chitinases

Authors: Keyur Raval, Steffen Krohn, Bruno Moerschbacher

Abstract:

Chitin, the biopolymer of N-acetylglucosamine, is the most abundant biopolymer on the planet after cellulose. Industrially, chitin is isolated and purified from the shell residues of shrimps. A deacetylated derivative of chitin, i.e., chitosan, has more market value and applications owing to its solubility and overall cationic charge compared to the parent polymer. On an industrial scale, this deacetylation is performed chemically using alkalis like sodium hydroxide, a reaction that is hazardous to the environment owing to its negative impact on the marine ecosystem. A greener alternative is the enzymatic process: in nature, native chitin is converted to chitosan by chitin deacetylase (CDA). Enzymatic conversion on the industrial scale is, however, hampered by the crystallinity of chitin, since the enzymatic action requires the substrate, i.e., chitin, to be soluble, which is technically difficult and energy-consuming. In this project, we address this shortcoming of CDA by modeling a fusion protein of CDA and an auxiliary protein, the main interest being to increase the accessibility of the enzyme towards crystalline chitin. Similar fusion work with chitinases has improved their catalytic ability towards insoluble chitin. In the first step, suitable fusion partners were sought in the Protein Data Bank (PDB), where domain architectures were examined. The next step was to create models of the fused product using various in silico techniques. The models were created with MODELLER and evaluated for properties such as energy and impairment of the binding sites. A fusion PCR has been designed based on the linker sequences generated by MODELLER, and the resulting construct will be tested for activity towards insoluble chitin.

Keywords: chitin deacetylase, modeling, chitin binding domain, chitinases

Procedia PDF Downloads 234
6108 Stable Diffusion, Context-to-Motion Model to Augmenting Dexterity of Prosthetic Limbs

Authors: André Augusto Ceballos Melo

Abstract:

This work is designed to facilitate the recognition of congruent prosthetic movements: context-to-motion translations guided by images, verbal prompts, the user's nonverbal communication (such as facial expressions, gestures, and paralinguistics), scene context, and object recognition. Although the approach can also be applied to other tasks, such as walking, the focus here is on prosthetic limbs as assistive technology driven by gestures, sound codes, signs, facial and body expressions, and scene context. The context-to-motion model is a machine learning approach designed to improve the control and dexterity of prosthetic limbs. It works by using sensory input from the prosthetic limb to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. This can improve the performance of the prosthetic limb and make it easier for the user to perform a wide range of tasks. There are several key benefits to using the context-to-motion model for prosthetic limb control. First, it can improve the naturalness and smoothness of prosthetic limb movements, making them more comfortable and easier to use. Second, it can improve the accuracy and precision of prosthetic limb movements, which is particularly useful for tasks that require fine motor control. Finally, the context-to-motion model can be trained using a variety of different sensory inputs, which makes it adaptable to a wide range of prosthetic limb designs and environments. Stable diffusion is a machine learning method that can be used to improve the control and stability of movements in robotic and prosthetic systems. It uses sensory feedback to learn about the dynamics of the environment and then generates smooth, stable movements from this information. One key aspect of stable diffusion is that it is designed to be robust to noise and uncertainty in the sensory feedback.
This means that it can continue to produce stable, smooth movements even when the sensory data are noisy or unreliable. To implement stable diffusion in a robotic or prosthetic system, it is typically necessary first to collect a dataset of examples of the desired movements. This dataset can then be used to train a machine learning model to predict the appropriate control inputs for a given set of sensory observations. Once the model has been trained, it can be used to control the robotic or prosthetic system in real time: the model receives sensory input from the system and uses it to generate control signals that drive the motors or actuators responsible for moving the system. Overall, the context-to-motion model has the potential to significantly improve the dexterity and performance of prosthetic limbs, making them more useful and effective for a wide range of users. Hand gestures and body language influence communication and social interaction, offering users a way to maximize their quality of life, social interaction, and gesture communication.
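The collect-train-control loop described above can be sketched as follows. Ridge regression stands in for whatever learner the author intends (the abstract does not specify one), and the sensor and control data are synthetic.

```python
import numpy as np

# Sketch of the train-then-control loop: fit a model on (sensory observation,
# control input) pairs from demonstrations, then query it in a control loop.
# Ridge regression is a stand-in learner; data are synthetic.

def fit_ridge(X, Y, lam=1e-3):
    """Linear map W with X @ W ~= Y, regularized for noisy sensor data."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

rng = np.random.default_rng(1)
obs = rng.normal(size=(200, 4))                            # synthetic sensor readings
true_map = rng.normal(size=(4, 2))
ctrl = obs @ true_map + 0.01 * rng.normal(size=(200, 2))   # noisy demonstrated controls

W = fit_ridge(obs, ctrl)  # training phase

def control_step(sensor_reading, W):
    """Real-time step: sensory input in, actuator command out."""
    return sensor_reading @ W

cmd = control_step(obs[0], W)  # one control-loop iteration
```

The regularization term is what gives the fitted map some robustness to noisy demonstrations, echoing the robustness-to-noise property emphasized in the abstract.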

Keywords: stable diffusion, neural interface, smart prosthetic, augmenting

Procedia PDF Downloads 90
6107 Full-Face Hyaluronic Acid Implants Assisted by Artificial Intelligence-Generated Post-treatment 3D Models

Authors: Ciro Cursio, Pio Luigi Cursio, Giulia Cursio, Isabella Chiardi, Luigi Cursio

Abstract:

Introduction: Full-face aesthetic treatments often present a difficult task: since different patients possess different anatomical and tissue characteristics, there is no guarantee that the same treatment will have the same effect on multiple patients; additionally, full-face rejuvenation and beautification treatments require not only a high degree of technical skill but also the ability to choose the right product for each area and a keen artistic eye. Method: We present an artificial intelligence-based algorithm that can generate realistic post-treatment 3D models based on the patient’s requests together with the doctor’s input. These 3-dimensional predictions can be used by the practitioner for two purposes: firstly, they help ensure that the patient and the doctor are completely aligned on the expectations of the treatment; secondly, the doctor can use them as a visual guide, obtaining a natural result that would normally stem from the practitioner's artistic skills. To this end, the algorithm is able to predict injection zones, the type and quantity of hyaluronic acid, the injection depth, and the technique to use. Results: Our innovation consists in providing an objective visual representation of the patient that is helpful in the patient-doctor dialogue. The patient, based on this information, can express her desire to undergo a specific treatment or make changes to the therapeutic plan. In short, the patient becomes an active agent in the choices made before the treatment. Conclusion: We believe that this algorithm will reveal itself as a useful tool in the pre-treatment decision-making process to prevent both the patient and the doctor from making a leap into the dark.

Keywords: hyaluronic acid, fillers, full face, artificial intelligence, 3D

Procedia PDF Downloads 67
6106 Stimulation of Nerve Tissue Differentiation and Development Using Scaffold-Based Cell Culture in Bioreactors

Authors: Simon Grossemy, Peggy P. Y. Chan, Pauline M. Doran

Abstract:

Nerve tissue engineering is the main field of research aimed at finding an alternative to autografts as a treatment for nerve injuries. Scaffolds are used as a support to enhance nerve regeneration. In order to successfully design novel scaffolds and in vitro cell culture systems, a deep understanding of the factors affecting nerve regeneration processes is needed. Physical and biological parameters associated with the culture environment have been identified as potentially influential in nerve cell differentiation, including electrical stimulation, exposure to extracellular-matrix (ECM) proteins, dynamic medium conditions and co-culture with glial cells. The mechanisms involved in driving the cell to differentiation in the presence of these factors are poorly understood; the complexity of each of them raises the possibility that they may strongly influence each other. Some questions that arise in investigating nerve regeneration include: What are the best protein coatings to promote neural cell attachment? Is the scaffold design suitable for providing all the required factors combined? What is the influence of dynamic stimulation on cell viability and differentiation? In order to study these effects, scaffolds adaptable to bioreactor culture conditions were designed to allow electrical stimulation of cells exposed to ECM proteins, all within a dynamic medium environment. Gold coatings were used to make the surface of viscose rayon microfiber scaffolds (VRMS) conductive, and poly-L-lysine (PLL) and laminin (LN) surface coatings were used to mimic the ECM environment and allow the attachment of rat PC12 neural cells. The robustness of the coatings was analyzed by surface resistivity measurements, scanning electron microscope (SEM) observation and immunocytochemistry. Cell attachment to protein coatings of PLL, LN and PLL+LN was studied using DNA quantification with Hoechst. 
The double coating of PLL+LN was selected based on high levels of PC12 cell attachment and the reported advantages of laminin for neural differentiation. The underlying gold coatings were shown to be biocompatible using cell proliferation and live/dead staining assays. Coatings exhibiting stable properties over time under dynamic fluid conditions were developed; indeed, cell attachment and the conductive power of the scaffolds were maintained over 2 weeks of bioreactor operation. These scaffolds are promising research tools for understanding complex neural cell behavior. They have been used to investigate major factors in the physical culture environment that affect nerve cell viability and differentiation, including electrical stimulation, bioreactor hydrodynamic conditions, and combinations of these parameters. The cell and tissue differentiation response was evaluated using DNA quantification, immunocytochemistry, RT-qPCR and functional analyses.

Keywords: bioreactor, electrical stimulation, nerve differentiation, PC12 cells, scaffold

Procedia PDF Downloads 226
6105 Online Information Seeking: A Review of the Literature in the Health Domain

Authors: Sharifah Sumayyah Engku Alwi, Masrah Azrifah Azmi Murad

Abstract:

The development of information technology and the Internet has been transforming the healthcare industry. The Internet is continuously accessed to seek health information from a variety of sources, including search engines, health websites, and social networking sites. Providing more and better information on health may empower individuals; however, ensuring high-quality and trusted health information can pose a challenge. Moreover, there is an ever-increasing amount of information available, but it is not necessarily accurate or up to date. Thus, this paper aims to provide an insight into the models and frameworks related to consumers' online health information seeking. It begins by exploring the definitions of information behavior and information seeking to provide a better understanding of the concept of information seeking. In this study, critical factors such as performance expectancy, effort expectancy, and social influence will be studied in relation to the value of seeking health information. It also aims to analyze the effect of age, gender, and health status as moderators of the factors that influence online health information seeking, i.e., trust and information quality. A preliminary survey will be carried out among health professionals to clarify the research problems that exist in the real world and, at the same time, to produce a conceptual framework. A final survey will be distributed across five states of Malaysia to solicit feedback on the framework. Data will be analyzed using SPSS and SmartPLS 3.0. It is hoped that at the end of this study, a novel framework that can improve online health information seeking will be developed. Finally, this paper concludes with some suggestions on the models and frameworks that could improve online health information seeking.

Keywords: information behavior, information seeking, online health information, technology acceptance model, the theory of planned behavior, UTAUT

Procedia PDF Downloads 257
6104 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters

Authors: Trevor C. Brown, David J. Miron

Abstract:

Gas-solid physical adsorption methods are central to the characterization and optimization of effective surface area, pore size, and porosity for applications such as heterogeneous catalysis and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants, and Gibbs free energy depend on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Various constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to extract information on adsorbate and adsorbent properties from the isotherm data. These models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high-surface-area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites up to a monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers; these additional layers do not interact with the first layer, and their energetics are equal to those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both the Langmuir and BET models assume that the affinity of the gas for all adsorption sites is identical, so the calculated adsorbent uptake at the monolayer and the equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants.
These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption, the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement, for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resultant pressure-varying parameters are optimized by minimizing both dynamic and measurement residual squared errors. Validation of this methodology has been achieved by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K and 348 K and for nitrogen on mesoporous alumina at 77 K with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modeling provides information on the adsorbent (accessible surface area and micropore volume), adsorbate (molecular areas and volumes) and thermodynamic (Gibbs free energies) variations of the adsorption sites.

Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics

Procedia PDF Downloads 218
6103 Fine-Scale Modeling the Influencing Factors of Multi-Time Dimensions of Transit Ridership at Station Level: The Study of Guangzhou City

Authors: Dijiang Lyu, Shaoying Li, Zhangzhi Tan, Zhifeng Wu, Feng Gao

Abstract:

China is currently experiencing some of the most rapid urban rail transit expansion in the world. The purpose of this study is to finely model the factors influencing transit ridership across multiple time dimensions within transit stations’ pedestrian catchment areas (PCA) in Guangzhou, China. The study was based on multi-source spatial data, including smart card data, high-spatial-resolution images, points of interest (POIs), online real-estate data and building height data. Eight multiple linear regression models using the backward stepwise method and a Geographic Information System (GIS) were created at station level. According to the Chinese code for classification of urban land use and planning standards of development land, residential land-use was divided into three categories: first-level (e.g. villas), second-level (e.g. communities) and third-level (e.g. urban villages). The study concluded that: (1) four factors (a CBD dummy, the number of feeder bus routes, the number of entrances or exits, and the years of station operation) proved to be positively correlated with transit ridership, while the areas of green land-use and water land-use were negatively correlated instead. (2) The areas of education land-use and of second- and third-level residential land-use were found to be strongly connected to the average of morning peak boarding and evening peak alighting ridership, whereas the area of commercial land-use and the average height of buildings were significantly positively associated with the average of morning peak alighting and evening peak boarding ridership. (3) The area of second-level residential land-use was rarely correlated with ridership in the other regression models, because private car ownership is still high in Guangzhou: some residents living in communities around the stations commute by transit at peak times, but others are much more willing to drive their own cars at off-peak times.
The area of third-level residential land-use, such as urban villages, was strongly positively correlated with ridership in all models, indicating that residents of third-level residential land-use are the main passenger source of the Guangzhou Metro. (4) The diversity of land-use was found to have a significant impact on weekend passenger flow but was unrelated to weekday ridership. The findings can be useful for station planning, management and policymaking.

Keywords: fine-scale modeling, Guangzhou city, multi-time dimensions, multi-sources spatial data, transit ridership

Procedia PDF Downloads 133
6102 Family Firms Performance: Examining the Impact of Digital and Technological Capabilities using Partial Least Squares Structural Equation Modeling and Necessary Condition Analysis

Authors: Pedro Mota Veiga

Abstract:

This study comprehensively evaluates the repercussions of innovation, digital advancements, and technological capabilities on the operational performance of companies across fifteen European Union countries following the initial wave of the COVID-19 pandemic. Drawing insights from longitudinal data sourced from the 2019 World Bank business surveys and the subsequent 2020 World Bank COVID-19 follow-up business surveys, our examination involves a diverse sample of 5763 family businesses. In exploring the relationships between these variables, we adopt a nuanced approach to assess the impact of innovation and digital and technological capabilities on performance. This analysis unfolds along two distinct perspectives: one rooted in necessity and the other in sufficiency. The methodological framework employed integrates partial least squares structural equation modeling (PLS-SEM) with necessary condition analysis (NCA), providing a robust foundation for drawing meaningful conclusions. The findings of the study underscore a positive influence on the performance of family firms stemming from both technological capabilities and digital advancements. Furthermore, it is pertinent to highlight the indirect contribution of innovation to enhanced performance, operating through its impact on digital capabilities. This research contributes valuable insights to the broader understanding of how innovation, coupled with digital and technological capabilities, can serve as pivotal factors in shaping the post-COVID-19 landscape for businesses across the European Union. The intricate analysis of family businesses, in particular, adds depth to the comprehension of the dynamics at play in diverse economic contexts within the European Union.
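The NCA side of such an analysis asks whether a capability is necessary (rather than merely sufficient) for performance, typically by drawing a ceiling line over the x-y scatter and measuring the empty upper-left corner. The fragment below is a simplified step-function sketch in the spirit of the CE-FDH ceiling technique, under wholly hypothetical firm data (variable names and scores are invented); it is not the paper's estimation procedure.

```python
def nca_effect_size(points):
    """Simplified CE-FDH-style sketch: fraction of the scope that lies in the
    empty zone above a step-function ceiling drawn over the (x, y) scatter."""
    pts = sorted(points)                          # sort by x (the candidate necessary condition)
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    x_min, x_max = xs[0], xs[-1]
    y_min, y_max = min(ys), max(ys)
    scope = (x_max - x_min) * (y_max - y_min)     # bounding box of the data
    zone, running_max = 0.0, ys[0]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        running_max = max(running_max, y0)        # ceiling height up to x0
        zone += (x1 - x0) * (y_max - running_max) # empty area above the ceiling
    return zone / scope                           # effect size d in [0, 1]

# Hypothetical firm data: (digital_capability_score, performance_score)
data = [(1, 2), (2, 3), (2, 5), (3, 4), (4, 7), (5, 6), (5, 9)]
d = nca_effect_size(data)   # a non-trivial d signals an empty upper-left corner
```

A large d suggests high performance is rarely observed without high digital capability, which is the "necessity" perspective complementing the PLS-SEM sufficiency view.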

Keywords: digital capabilities, technological capabilities, family firms performance, innovation, NCA, PLS-SEM

Procedia PDF Downloads 54
6101 The Path of Cotton-To-Clothing Value Chains to Development: A Mixed Methods Exploration of the Resuscitation of the Cotton-To-Clothing Value Chain in Post

Authors: Emma Van Schie

Abstract:

The purpose of this study is to use mixed methods research to create typologies of the performance of firms in the cotton-to-clothing value chain in Zimbabwe, and to use these typologies to add to the small pool of studies on Sub-Saharan African value chains operating in the context of economic liberalisation and pursuing development. The uptake of economic liberalisation measures across Sub-Saharan Africa has led to the restructuring of many value chains. While this has resulted in some African economies positively reintegrating into global commodity chains, it has also been deeply problematic for the development impacts of the majority of others. Over and above this, these nations have been placed at a disadvantage by the fact that there is little scholarly and policy research on approaches for managing economic liberalisation and value chain development in the unique African context. As such, the central question facing these less successful cases is how they can integrate into the world economy whilst still fostering their development. This paper draws from quantitative questionnaires and qualitative interviews with 28 stakeholders in the cotton-to-clothing value chain in Zimbabwe. It examines the performance of firms in the value chain, and the local socio-economic development impacts affected by the revival of the cotton-to-clothing value chain following its collapse in the wake of Zimbabwe’s uptake of economic liberalisation measures. Firstly, the paper establishes the hitherto largely undocumented characteristics and structures of firms in the value chain in the post-economic-liberalisation era. It then derives typologies of firms as being in operation, closed down, or placed under judicial management, together with the common characteristics that each typology holds.
The key findings show how a mixture of macro and local level aspects, such as value chain governance and the management structure of a business, leads to the most successful typology that is able to add value to the chain in the context of economic liberalisation, and thus unlock its socioeconomic development potential. These typologies are used in making industry and policy recommendations on achieving this balance between the macro and the local level, as well as recommendations for further academic research for more typologies and models on the case of cotton value chains in Sub-Saharan Africa. In doing so, this study adds to the small collection of academic evidence and policy recommendations for the challenges that African nations face when trying to incorporate into global commodity chains in attempts to benefit from their associated socioeconomic development opportunities.

Keywords: cotton-to-clothing value chain, economic liberalisation, restructuring value chain, typologies of firms, value chain governance, Zimbabwe

Procedia PDF Downloads 156
6100 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples

Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges

Abstract:

Soils are at the crossing of many issues such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, and human health and well-being. They deliver many ecosystem services that are essential to life on Earth. Therefore, there is a growing demand for soil information on a national and global scale. Unfortunately, many countries do not have detailed soil maps, and, where they exist, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, they are not easy to understand and are often not properly used by end-users. Therefore, there is an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed, but depend on the main soil-forming factors: climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using exhaustive spatial products such as climatic grids, remote sensing products or vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called “ancillary covariates” that come from other available spatial products. The model is then generalized on grids where soil parameters are unknown in order to predict them, and the prediction performances are validated using various methods. With the growing demand for soil information at national and global scales and the increase in available spatial covariates, national and continental DSM initiatives are continuously increasing.
This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and databases that are used, the validation techniques, and the main scientific and other issues. Examples from several countries illustrate the variety of products delivered during the last ten years. The scientific production on this topic is continuously increasing, and new models and approaches are being developed at an incredible speed. Most digital soil mapping (DSM) products rely mainly on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTF), in which calibration data come from soil analyses performed in labs or from existing conventional maps. However, some scientific issues remain to be solved, as well as political and legal ones related, for instance, to data sharing and to different laws in different countries. Other issues relate to communication with end-users and to education, especially on the use of uncertainty. Overall, progress is very important, and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues remain, mainly due to differences in classifications or in laboratory standards between countries. However, numerous initiatives are ongoing at the EU level and also at the global level. All this progress is scientifically stimulating and promises to provide tools to improve and monitor soil quality at the country, EU and global levels.
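The calibrate-then-predict workflow described above can be illustrated with a deliberately minimal model: predict a soil property at an unsampled grid cell from its nearest calibration points in covariate space. Everything below (the covariates, the property, the values, the k-NN choice) is hypothetical and for illustration only; operational DSM initiatives use far richer models such as random forests over dozens of covariates.

```python
import math

def knn_predict(train, query, k=3):
    """Predict a soil property at an unsampled cell as the mean of the k
    calibration points nearest in covariate space (here: elevation, NDVI)."""
    dists = sorted((math.dist(covs, query), value) for covs, value in train)
    nearest = dists[:k]
    return sum(v for _, v in nearest) / k

# Hypothetical calibration points: ((elevation_m, ndvi), soil_organic_carbon_%)
train = [((120, 0.61), 2.4), ((95, 0.55), 1.9), ((300, 0.72), 3.8),
         ((280, 0.70), 3.5), ((110, 0.58), 2.1)]

# Predict at a grid cell where only the covariates are known
soc = knn_predict(train, (115, 0.60), k=3)
```

Validation would then hold out some calibration points and compare predictions against their measured values, as the review notes most national initiatives do.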

Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review

Procedia PDF Downloads 173
6099 Astragaloside IV Inhibits Type 2 Allergic Contact Dermatitis in Mice and the Mechanism Through the TLRs-NF-kB Pathway

Authors: Xiao Wei, Dandan Sheng, Xiaoyan Jiang, Lili Gui, Huizhu Wang, Xi Yu, Hailiang Liu, Min Hong

Abstract:

Objective: A mouse model of type 2 allergic contact dermatitis was utilized in this study to explore the effect of AS-IV on type 2 allergic inflammation. Methods: Mice were topically sensitized on the shaved abdominal skin with 1.5% FITC solution on days 1 and 2 and elicited on the right ear with 0.5% FITC solution on day 6. Mice were treated with either AS-IV or normal saline from day 1 to day 5 (induction phase). Auricle swelling was measured 24 h after the elicitation. Ear pathohistological examination was carried out by HE staining. IL-4, IL-13, and IL-9 levels in ear tissue were detected by ELISA. In mice treated with AS-IV at the initial stage of the induction phase, ear tissue was taken on day 3; the TSLP level of the ear tissue was detected by ELISA, and TSLP mRNA, NF-kB mRNA and TLR (TLR2, TLR3, TLR8, TLR9) mRNA were detected by PCR. Results: AS-IV given during the induction phase evidently inhibited the auricle inflammation of the models; pathohistological results indicated that it alleviated local edema and angiectasis in the model mice and reduced lymphocytic infiltration. AS-IV given during the induction phase markedly decreased IL-4, IL-13, and IL-9 levels in ear tissue. Moreover, at the initial stage of the induction phase, AS-IV significantly reduced TSLP, TSLP mRNA, NF-kB mRNA, TLR2 mRNA and TLR8 mRNA levels in ear tissue. Conclusion: Administration of AS-IV during the induction phase significantly inhibited type 2 allergic contact dermatitis in mice, and the mechanism may be related to the regulation of TSLP through the TLRs-NF-kB pathway.

Keywords: Astragaloside IV, allergic contact dermatitis, TSLP, interleukin-4, interleukin-13, interleukin-9

Procedia PDF Downloads 421
6098 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing

Authors: Ahmed Elaksher, Islam Omar

Abstract:

Accurate geometric relation algorithms are imperative in Earth and planetary satellite and aerial image processing, particularly for high-resolution images used for topographic mapping. Most of these satellites carry push-broom sensors: optical scanners equipped with linear arrays of CCDs. These sensors have been deployed on most Earth observation satellites. In addition, the LROC is equipped with two push-broom Narrow Angle Cameras (NACs) that provide 0.5 meter-scale panchromatic images over a 5 km swath of the Moon. The HiRISE carried by the MRO and the HRSC carried by MEX are examples of push-broom sensors that produce images of the surface of Mars. Sensor models developed in photogrammetry relate image space coordinates in two or more images with the 3D coordinates of ground features. Rigorous sensor models use the actual interior and exterior orientation parameters of the camera, unlike approximate models. In this research, we generate a generic push-broom sensor model to process imagery acquired through linear array cameras and investigate its performance, advantages, and disadvantages in generating topographic models for the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and softcopy systems with the developed model. We start by defining an image reference coordinate system to unify image coordinates from all three arrays. The transformation from an image coordinate system to the reference coordinate system involves a translation and three rotations. For any image point within the linear array, its image reference coordinates, the coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that all three points must lie on the same line.
The rotation angles for each CCD array at the epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by a polynomial interpolation function in time (t). The parameter (t) is the time at a certain epoch from a certain orbit position. Depending on the types of observations, coordinates and parameters may be treated as knowns or unknowns in different situations. The unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation, and raw images from the PDS. The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment model, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. It was found that the accuracy of our model is comparable to that of commercial and open-source software, the computational efficiency of the developed model is high, the model can be used in different environments with various sensors, and the implementation process is much less cost- and effort-intensive.
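For a single scan line, the collinearity condition reduces to the familiar frame-camera projection equations applied per line, with the exposure station and attitude interpolated at that line's epoch. The sketch below is a generic illustration using the common omega-phi-kappa rotation convention; the focal length and coordinates are invented, and this is not the paper's implementation:

```python
import math

def project(ground_pt, exposure_centre, omega, phi, kappa, f):
    """Collinearity condition: project a ground point into image coordinates
    for one scan line of a push-broom sensor (f = focal length, angles in rad)."""
    co, so = math.cos(omega), math.sin(omega)
    cp, sp = math.cos(phi), math.sin(phi)
    ck, sk = math.cos(kappa), math.sin(kappa)
    # Standard photogrammetric rotation matrix (omega-phi-kappa convention)
    R = [
        [cp * ck, co * sk + so * sp * ck, so * sk - co * sp * ck],
        [-cp * sk, co * ck - so * sp * sk, so * ck + co * sp * sk],
        [sp, -so * cp, co * cp],
    ]
    dX = [g - c for g, c in zip(ground_pt, exposure_centre)]
    num_x = sum(R[0][j] * dX[j] for j in range(3))
    num_y = sum(R[1][j] * dX[j] for j in range(3))
    den = sum(R[2][j] * dX[j] for j in range(3))
    return (-f * num_x / den, -f * num_y / den)

# Nadir-looking line (all angles zero): a ground point directly below the
# exposure centre must project onto the principal point (0, 0).
x, y = project((500.0, 800.0, 0.0), (500.0, 800.0, 700.0), 0.0, 0.0, 0.0, 0.1)
```

In a push-broom bundle adjustment, these equations are written per line, with the exposure centre and angles given by the polynomial functions of t described above.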

Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition

Procedia PDF Downloads 54
6097 A Sub-Conjunctiva Injection of Rosiglitazone for Anti-Fibrosis Treatment after Glaucoma Filtration Surgery

Authors: Yang Zhao, Feng Zhang, Xuanchu Duan

Abstract:

Trans-differentiation of human Tenon fibroblasts (HTFs) to myofibroblasts and fibrosis of episcleral tissue are the most common reasons for the failure of glaucoma filtration surgery, and treatment options are limited to antimetabolites, which often have side effects such as leakage of the filtering bleb, infection, hypotony, and endophthalmitis. Rosiglitazone, a thiazolidinedione, is a synthetic high-affinity ligand for PPAR-γ that has been used in the treatment of type 2 diabetes and has been found to have pleiotropic functions against inflammatory response, cell proliferation and tissue fibrosis, benefiting a variety of diseases in animal myocardium models, steatohepatitis models, etc. Here, in vitro, we cultured primary HTFs and stimulated them with TGF-β to induce a myofibrogenic phenotype, then treated the cells with rosiglitazone to assess the fibrogenic response. In vivo, we used a rabbit glaucoma model to establish post-trabeculectomy scarring. We then administered a subconjunctival injection of rosiglitazone beside the filtering bleb; later, protein, mRNA and immunofluorescence of fibrogenic markers were checked, and the condition of the filtering bleb was assessed. In vitro, we found that rosiglitazone could suppress the proliferation and migration of fibroblasts through macroautophagy via the TGF-β/Smad signaling pathway. In vivo, on postoperative day 28, the mean number of fibroblasts in the rosiglitazone injection group was the lowest, with the least collagen content and connective tissue growth factor. Rosiglitazone effectively controlled human and rabbit fibroblasts in vivo and in vitro. Its subconjunctival application may represent an effective new avenue for the prevention of scarring after glaucoma surgery.

Keywords: fibrosis, glaucoma, macroautophagy, rosiglitazone

Procedia PDF Downloads 257
6096 [Keynote Talk]: Mathematical and Numerical Modelling of the Cardiovascular System: Macroscale, Mesoscale and Microscale Applications

Authors: Aymen Laadhari

Abstract:

The cardiovascular system is centered on the heart and is characterized by a very complex structure with different physical scales in space (e.g. micrometers for erythrocytes and centimeters for organs) and time (e.g. milliseconds for human brain activity and several years for the development of some pathologies). The development and numerical implementation of mathematical models of the cardiovascular system is a tremendously challenging topic at both the theoretical and computational levels, consequently inducing growing interest over the past decade. Accurate computational investigation, in both healthy and pathological cases, of processes related to the functioning of the human cardiovascular system holds great potential for tackling several problems of clinical relevance and for improving the diagnosis of specific diseases. In this talk, we focus on the specific task of simulating three particular phenomena related to the cardiovascular system on the macroscopic, mesoscopic and microscopic scales, respectively. Namely, we develop numerical methodologies tailored for the simulation of (i) the haemodynamics (i.e., the fluid mechanics of blood) in the aorta and sinus of Valsalva interacting with highly deformable thin leaflets, (ii) the hyperelastic anisotropic behaviour of cardiomyocytes and the influence of calcium concentrations on the contraction of single cells, and (iii) the dynamics of red blood cells in the microvasculature. For each problem, we present an appropriate fully Eulerian finite element methodology. We report several numerical examples to address in detail the relevance of the mathematical models in terms of physiological meaning and to illustrate the accuracy and efficiency of the numerical methods.

Keywords: finite element method, cardiovascular system, Eulerian framework, haemodynamics, heart valve, cardiomyocyte, red blood cell

Procedia PDF Downloads 239
6095 Ecosystem Model for Environmental Applications

Authors: Cristina Schreiner, Romeo Ciobanu, Marius Pislaru

Abstract:

This paper aims to build a system based on fuzzy models that can be implemented in the assessment of ecological systems, to determine appropriate methods of action for reducing adverse effects on the environment and, implicitly, on the population. The proposed model provides a new perspective for environmental assessment, and it can be used as a practical instrument for decision-making.

Keywords: ecosystem model, environmental security, fuzzy logic, sustainability of habitable regions

Procedia PDF Downloads 405
6094 Mature Field Rejuvenation Using Hydraulic Fracturing: A Case Study of Tight Mature Oilfield with Reveal Simulator

Authors: Amir Gharavi, Mohamed Hassan, Amjad Shah

Abstract:

The main characteristics of unconventional reservoirs include low-to-ultra-low permeability and low-to-moderate porosity. As a result, hydrocarbon production from these reservoirs requires different extraction technologies than conventional resources. An unconventional reservoir must be stimulated to produce hydrocarbons at an acceptable flow rate in order to recover commercial quantities. Permeability for unconventional reservoirs is mostly below 0.1 mD, and reservoirs with permeability above 0.1 mD are generally considered conventional. The hydrocarbon held in these formations will not naturally move towards producing wells at economic rates without the aid of hydraulic fracturing, which is the key technique for accessing production from these tight reservoirs. A horizontal well with multi-stage fracking is the key technique to maximize stimulated reservoir volume and achieve commercial production. The main objective of this research paper is to investigate development options for a tight mature oilfield, including multistage hydraulic fracturing and frac spacing, by building reservoir models in the Reveal simulator to model potential development options based on sidetracking the existing vertical well. To simulate these options, an existing Petrel geological model was used to build the static parts of the reservoir models. A flowing bottom-hole pressure (FBHP) limit of 40 bar was assumed to account for pump operating limits and to maintain the reservoir pressure above the bubble point. Wells with 300 m, 600 m and 900 m lateral lengths were modelled, in conjunction with 4, 6 and 8 frac stages. Simulation results indicate that higher initial recoveries and peak oil rates are obtained with longer well lengths and with more fracs and greater spacing. For a 25-year forecast, the ultimate recovery ranged from 0.4% to 2.56% for the 300 m and 900 m laterals, respectively.
The 900 m lateral with 8 fracs at 100 m spacing gave the highest peak rate of 120 m3/day, with the 600 m and 300 m cases giving initial peak rates of 110 m3/day. Similarly, the recovery factor for the 900 m lateral with 8 fracs and 100 m spacing was the highest at 2.65% after 25 years; the corresponding values for the 300 m and 600 m laterals were 2.37% and 2.42%. The study therefore suggests that longer laterals with 8 fracs and 100 m spacing provide the optimal recovery, and this design is recommended as the basis for further study.

Keywords: unconventional resources, hydraulic fracturing

Procedia PDF Downloads 285
6093 Evidence Theory Based Emergency Multi-Attribute Group Decision-Making: Application in Facility Location Problem

Authors: Bidzina Matsaberidze

Abstract:

It is known that, in emergency situations, multi-attribute group decision-making (MAGDM) models are characterized by insufficient objective data and a lack of time to respond to the task. Evidence theory is an effective tool for describing such incomplete information in decision-making models when the expert and his knowledge are involved in the estimation of the MAGDM parameters. We consider an emergency decision-making model in which expert assessments on humanitarian aid distribution centers (HADC) are represented by q-rung orthopair fuzzy numbers, and the data structure is described within the data body theory. Based on focal probability construction and the experts’ evaluations, an objective function (a distribution centers’ selection ranking index) is constructed. Our approach for solving the constructed bicriteria partitioning problem consists of two phases. In the first phase, based on the covering matrix, we generate a matrix whose columns allow us to find all possible partitionings of the HADCs with the service centers. Some constraints are also taken into consideration while generating the matrix. In the second phase, based on this matrix and using our exact algorithm, we find the partitionings (allocations of the HADCs to the service centers) that correspond to the Pareto-optimal solutions. For an illustration of the obtained results, a numerical example is given for the facility location-selection problem.
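Evidence theory in such settings is usually operationalised through basic probability (mass) assignments over focal elements and their combination. As general background to the approach, not the paper's specific q-rung orthopair construction, Dempster's classical combination rule can be sketched as follows (the two "experts" and the candidate centres are hypothetical):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two basic mass assignments whose focal elements
    are frozensets, normalising out the conflicting (empty-intersection) mass."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Two hypothetical expert assessments over candidate distribution centres {c1, c2}
c1, c2 = frozenset({"c1"}), frozenset({"c2"})
both = c1 | c2                       # ignorance: mass on the whole frame
expert_a = {c1: 0.6, both: 0.4}
expert_b = {c1: 0.5, c2: 0.2, both: 0.3}
fused = dempster_combine(expert_a, expert_b)
```

The fused masses then feed a ranking index over the centres, analogous in spirit to the selection index constructed in the paper.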

Keywords: emergency MAGDM, q-rung orthopair fuzzy sets, evidence theory, HADC, facility location problem, multi-objective combinatorial optimization problem, Pareto-optimal solutions

Procedia PDF Downloads 79
6092 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data

Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa

Abstract:

A generalized log-logistic distribution with variable shapes of the hazard rate was introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution and leading to greater flexibility in analysing and modeling various data types. The proposed distribution has a large number of well-known lifetime special sub-models, such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties were derived. The method of maximum likelihood was adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study was carried out to assess the behavior of the estimators. The importance of this distribution lies in its ability to model both monotone (increasing and decreasing) and non-monotone (unimodal, bathtub and reversed-bathtub shape) hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set and compared to its sub-models, the Weibull, log-logistic, and Burr XII distributions, and to other three-parameter parametric survival distributions, such as the exponentiated Weibull distribution, the 3-parameter lognormal distribution, the 3-parameter gamma distribution, the 3-parameter Weibull distribution, and the 3-parameter log-logistic (also known as shifted log-logistic) distribution. The proposed distribution provided a better fit than all of the competitive distributions based on the goodness-of-fit tests, the log-likelihood, and information criterion values. Finally, Bayesian analysis and an assessment of the performance of Gibbs sampling for the data set are also carried out.
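The hazard-shape flexibility described above is easiest to see in the two-parameter log-logistic special case, whose hazard is h(t) = (β/α)(t/α)^(β−1) / (1 + (t/α)^β): it is monotonically decreasing for β ≤ 1 and unimodal for β > 1. The sketch below (parameter values chosen purely for illustration, not the paper's generalized form) evaluates both regimes:

```python
def loglogistic_hazard(t, alpha, beta):
    """Hazard of the standard two-parameter log-logistic distribution:
    h(t) = (beta/alpha) * (t/alpha)**(beta-1) / (1 + (t/alpha)**beta)."""
    u = (t / alpha) ** beta
    return (beta / alpha) * (t / alpha) ** (beta - 1) / (1 + u)

ts = [0.25 * i for i in range(1, 41)]                    # grid over (0, 10]
h_dec = [loglogistic_hazard(t, 2.0, 0.8) for t in ts]    # beta <= 1: decreasing hazard
h_uni = [loglogistic_hazard(t, 2.0, 3.0) for t in ts]    # beta > 1: unimodal hazard
```

The generalized distribution's extra parameter broadens this repertoire to include bathtub and reversed-bathtub shapes, as the abstract states.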

Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation

Procedia PDF Downloads 186
6091 Design Thinking and Requirements Engineering in Application Development: Case Studies in Brazil

Authors: V. Prodocimo, A. Malucelli, S. Reinehr

Abstract:

Organizations, driven by business digitization, have in software the main core of value generation and the main channel of communication with their clients. The software, as well as responding to momentary market needs, spans an extensive product family, ranging from mobile applications to multilateral platforms. Thus, the software specification needs to represent solutions focused on consumer problems and market needs. However, requirements engineering, whose approach is strongly linked to technology, becomes deficient and ineffective when the problem is not well defined or when looking for an innovative solution, and thus needs a complementary approach. Research has cited the combination of design thinking and requirements engineering, often positioning design thinking as a support technique for the elicitation step; however, little is known about the real benefits and challenges that this combination can bring. From the point of view of the development process, there is little empirical evidence of how design thinking interactions with requirements engineering occur. Given this scenario, this paper aims to understand how design thinking practices are applied in each of the requirements engineering stages in software projects. To elucidate these interactions, a qualitative, exploratory study was carried out through the application of the case study method in IT organizations in Brazil that develop software projects. The results indicate that design thinking has aided requirements engineering, both in projects that adopt agile methods and in those that adopt the waterfall process, bringing a complementary mindset that seeks to build the best software solution design for business problems. It was also possible to conclude that organizations choose to use design thinking not based on a specific software family (e.g.
mobile or desktop applications), but given the characteristics of the software projects, such as: vague nature of the problem, complex problems and/or need for innovative solutions.

Keywords: software engineering, requirements engineering, design thinking, innovative solutions

Procedia PDF Downloads 113
6090 A Novel Rapid Well Control Technique Modelled in Computational Fluid Dynamics Software

Authors: Michael Williams

Abstract:

The ability to control a flowing well is of the utmost importance. During the kill phase, heavy-weight kill mud is circulated around the well; while this increases bottom hole pressure, it also increases damage to the near-wellbore formation. The addition of high-density spherical objects has the potential to minimise this near-wellbore damage, increase bottom hole pressure, and reduce the operational time needed to kill the well. The time saving comes from rapidly deploying high-density spherical objects instead of building high-density drilling fluid. The research aims to model the well kill process using computational fluid dynamics (CFD) software. A model has been created as a proof of concept to analyse the flow of micron-sized spherical objects in the drilling fluid. Initial results show that this new methodology of spherical objects in drilling fluid agrees with the traditional streamlines seen in particle-free flow. Additional models demonstrate that areas of higher flow rate around the bit can increase the probability of formation washout but do not affect the flow of the micron-sized spherical objects. Interestingly, areas with dimensional changes, such as tool joints and various BHA components, do not appear at this initial stage to experience increased velocity or to create areas of turbulent flow, which suggests improved borehole stability. In conclusion, the initial models of this novel well control methodology have not demonstrated any adverse flow patterns, which suggests that the method may be viable under field conditions.
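The pressure argument behind the method can be illustrated with a back-of-the-envelope sketch: loading the mud with high-density spheres raises the effective column density, and hence the hydrostatic bottom hole pressure, without mixing a heavier fluid. This is only an illustrative hydrostatic calculation, not the paper's CFD model, and all densities, depths, and loading fractions below are assumed values.

```python
# Hydrostatic bottom hole pressure of a fluid column, and the effective
# density of drilling fluid loaded with high-density spheres.
# Illustrative only -- the paper models the full flow in CFD.

G = 9.81  # gravitational acceleration, m/s^2

def mixture_density(rho_fluid, rho_sphere, volume_fraction):
    """Effective density of a fluid carrying a volume fraction of spheres."""
    return (1 - volume_fraction) * rho_fluid + volume_fraction * rho_sphere

def bottom_hole_pressure(rho, depth_m):
    """Hydrostatic pressure (Pa) of a static column of density rho."""
    return rho * G * depth_m

base_mud = 1440.0   # kg/m^3, roughly a 12 ppg drilling mud (assumed)
spheres = 7800.0    # kg/m^3, e.g. steel spheres (assumed)
loaded = mixture_density(base_mud, spheres, volume_fraction=0.05)

depth = 3000.0  # m (assumed)
print(f"plain mud BHP:  {bottom_hole_pressure(base_mud, depth) / 1e6:.1f} MPa")
print(f"loaded mud BHP: {bottom_hole_pressure(loaded, depth) / 1e6:.1f} MPa")
```

Even a 5% loading of dense spheres adds several MPa at depth in this sketch, which is the mechanism the abstract credits with increasing bottom hole pressure faster than weighting up the mud.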

Keywords: well control, fluid mechanics, safety, environment

Procedia PDF Downloads 162
6089 Modeling Route Selection Using Real-Time Information and GPS Data

Authors: William Albeiro Alvarez, Gloria Patricia Jaramillo, Ivan Reinaldo Sarmiento

Abstract:

Understanding the behavior of individuals and the human factors that influence their choices in a complex system such as transportation is one of the most difficult aspects of route choice modeling, because various behaviors and driving modes directly or indirectly affect the choice. During the last two decades, the development of information and communications technologies has produced new data collection techniques such as GPS, geolocation with mobile phones, apps for choosing the route between origin and destination, and individual transport service applications, among others, generating interest in improving discrete choice models by incorporating these developments as well as the psychological factors that affect decision making. This paper proposes and estimates a hybrid discrete choice model that integrates route choice models and latent variables, based on route observations of a sample of public taxi drivers from the city of Medellín, Colombia, covering their behavior, personality, socioeconomic characteristics, and driving mode. The set of choice options includes the routes generated by the individual transport service applications versus the driver's own choice. The hybrid model consists of measurement equations that relate latent variables to measurement indicators and utilities to choice indicators, along with structural equations that link the observable characteristics of drivers to latent variables and explanatory variables to utilities.
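The logit kernel of such a hybrid model can be sketched in a few lines: a structural equation maps an observed driver characteristic to a latent trait, the latent trait enters the utility of one alternative, and a multinomial logit turns utilities into choice probabilities. All coefficients, the "risk aversion" latent variable, and the two-route choice set below are illustrative assumptions, not the paper's specification or estimates.

```python
import math

# Minimal sketch of a hybrid choice model's logit kernel: utilities combine
# an observed attribute (travel time) with a latent variable driven by a
# driver characteristic. Coefficients are invented for illustration.

def latent_risk_aversion(years_experience, gamma=0.15):
    """Structural equation: latent trait from an observed characteristic
    (the measurement-error term is omitted for clarity)."""
    return gamma * years_experience

def route_utilities(travel_times, latent, beta_time=-0.08, lam=-0.5):
    """Utility per route: disutility of travel time, plus an interaction in
    which risk-averse drivers penalise the (hypothetical) app-suggested
    route (index 1) relative to their habitual route (index 0)."""
    return [beta_time * t + (lam * latent if i == 1 else 0.0)
            for i, t in enumerate(travel_times)]

def logit_probabilities(utilities):
    """Multinomial logit choice probabilities."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

latent = latent_risk_aversion(years_experience=10)
probs = logit_probabilities(route_utilities([25.0, 20.0], latent))
print(probs)  # the habitual route keeps the larger share despite being slower
```

The point of the interaction term is exactly what the abstract describes: the latent trait shifts the trade-off between the app-generated route and the driver's own choice beyond what travel time alone would predict.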

Keywords: behavior choice model, human factors, hybrid model, real time data

Procedia PDF Downloads 137
6088 Integrating Knowledge Distillation of Multiple Strategies

Authors: Min Jindong, Wang Mingxia

Abstract:

With the widespread use of artificial intelligence in everyday life, computer vision, and especially deep convolutional neural network models, has developed rapidly. As real-world visual detection tasks grow more complex and recognition accuracy improves, object detection network models also become very large. Huge deep neural network models are not suited to deployment on edge devices with limited resources, and their inference latency is poor. In this paper, knowledge distillation is used to compress a huge, complex deep neural network model, comprehensively transferring the knowledge it contains to another lightweight network model. Unlike traditional knowledge distillation methods, we propose a novel knowledge distillation that incorporates multi-faceted features, called M-KD. When training and optimizing the deep neural network model for object detection, the soft-target output of the teacher network, the relationships between the teacher network's layers, and the feature attention maps of the teacher network's hidden layers are all transferred to the student network as knowledge. At the same time, we introduce an intermediate transition layer, that is, an intermediate guidance layer, between the teacher network and the student network to bridge the large gap between them. Finally, this paper adds an exploration module to the traditional teacher-student knowledge distillation model, so that the student network not only inherits the knowledge of the teacher network but also explores new knowledge and characteristics. Comprehensive experiments using different distillation parameter configurations across multiple datasets and convolutional neural network models demonstrate that our proposed network model achieves substantial improvements in both speed and accuracy.
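The soft-target component that M-KD builds on is the classic distillation loss of Hinton et al.: the student is trained to match the teacher's temperature-softened output distribution. The sketch below shows only that piece with toy logits; the layer-relation, attention-map, and exploration terms the abstract adds on top are not reproduced here.

```python
import math

# Soft-target knowledge distillation loss: KL divergence between the
# teacher's and student's temperature-softened output distributions.
# Toy logits only; the paper's M-KD adds further terms on top of this.

def softmax(logits, temperature=1.0):
    scaled = [z / temperature for z in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    s = sum(exps)
    return [e / s for e in exps]

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Soft-target loss; the T^2 factor keeps gradient magnitudes
    comparable across temperatures, as in the original formulation."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return (T ** 2) * kl_divergence(p_teacher, p_student)

teacher = [6.0, 2.0, 1.0]   # confident teacher
student = [2.5, 2.0, 1.5]   # less separated student
print(distillation_loss(student, teacher))   # positive; shrinks as the student matches
print(distillation_loss(teacher, teacher))   # zero when the distributions coincide
```

A high temperature flattens the teacher distribution so that the relative ordering of the wrong classes (the "dark knowledge") contributes to the gradient, which is why distillation transfers more than the hard labels alone.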

Keywords: object detection, knowledge distillation, convolutional network, model compression

Procedia PDF Downloads 265
6087 Evaluation of Ensemble Classifiers for Intrusion Detection

Authors: M. Govindarajan

Abstract:

One of the major developments in machine learning in the past decade is the ensemble method, which builds a highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed: a homogeneous ensemble classifier using bagging and a heterogeneous ensemble classifier using arcing, and their performance is analyzed in terms of accuracy. A classifier ensemble is designed using a radial basis function (RBF) network and a support vector machine (SVM) as base classifiers. The feasibility and benefits of the proposed approaches are demonstrated on standard intrusion detection datasets. The main originality of the proposed approach lies in its three main parts: a preprocessing phase, a classification phase, and a combining phase. A wide range of comparative experiments is conducted on standard intrusion detection datasets. The performance of the proposed homogeneous and heterogeneous ensemble classifiers is compared to that of other standard ensemble methods: the homogeneous methods include error-correcting output codes (ECOC) and Dagging, and the heterogeneous methods include majority voting and stacking. The proposed ensemble methods provide a significant improvement in accuracy over the individual classifiers; the proposed bagged RBF and SVM perform significantly better than ECOC and Dagging, and the proposed hybrid RBF-SVM performs significantly better than voting and stacking. Heterogeneous models also exhibit better results than homogeneous models on the standard intrusion detection datasets.
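The bagging-plus-majority-voting pattern the abstract describes can be sketched compactly: train each base classifier on a bootstrap resample of the data, then predict by vote. The paper's base learners are RBF networks and SVMs on intrusion detection data; to keep this sketch self-contained, a trivial one-dimensional threshold "stump" on an invented traffic feature stands in for them.

```python
import random
from collections import Counter

# Bagging with majority voting on a toy 1-D "normal vs. intrusion" feature.
# A threshold stump stands in for the paper's RBF/SVM base classifiers.

def train_stump(points, labels):
    """Fit a threshold at the midpoint between the two class means."""
    if len(set(labels)) < 2:           # degenerate bootstrap sample
        majority = labels[0]
        return lambda x: majority
    mean0 = sum(x for x, y in zip(points, labels) if y == 0) / labels.count(0)
    mean1 = sum(x for x, y in zip(points, labels) if y == 1) / labels.count(1)
    threshold = (mean0 + mean1) / 2
    return lambda x: 1 if x > threshold else 0

def bagged_ensemble(points, labels, n_models=15, seed=0):
    rng = random.Random(seed)
    n = len(points)
    models = []
    for _ in range(n_models):
        idx = [rng.randrange(n) for _ in range(n)]     # bootstrap resample
        models.append(train_stump([points[i] for i in idx],
                                  [labels[i] for i in idx]))
    def predict(x):                                    # majority vote
        votes = Counter(m(x) for m in models)
        return votes.most_common(1)[0][0]
    return predict

# invented feature, e.g. connection rate; 0 = normal, 1 = intrusion
xs = [1.0, 1.2, 0.8, 1.1, 4.0, 4.2, 3.8, 4.5]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
predict = bagged_ensemble(xs, ys)
print(predict(0.9), predict(4.1))  # → 0 1
```

Bagging stabilises high-variance base learners because each bootstrap resample perturbs the fitted threshold slightly and the vote averages those perturbations out; a heterogeneous ensemble like the paper's RBF-SVM hybrid instead combines differently biased learners under the same voting scheme.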

Keywords: data mining, ensemble, radial basis function, support vector machine, accuracy

Procedia PDF Downloads 235