Search results for: artificial intelligence investment
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3931

151 Production Factor Coefficients Transition through the Lens of State Space Model

Authors: Kanokwan Chancharoenchai

Abstract:

Economic growth can be considered an important element of a country’s development process. For a developing country like Thailand, to ensure continuous economic growth, the government usually implements various policies to stimulate the economy. These may take the form of fiscal, monetary, trade, and other policies. Given these different aspects, understanding the factors relating to economic growth could allow the government to introduce a proper plan for future economic stimulus schemes. Consequently, this issue has caught the interest of not only policymakers but also academics. This study, therefore, investigates explanatory variables for economic growth in Thailand from 2005 to 2017, a total of 52 quarters. The findings contribute to the field of economic growth and provide helpful information to policymakers. The estimation is carried out through a production function in non-linear Cobb-Douglas form. The rate of growth is indicated by the change in GDP in natural logarithmic form. The relevant factors included in the estimation cover the three traditional means of production and implicit effects such as human capital, international activity, and technological transfer from developed countries. In addition, the investigation takes internal and external instabilities into account, proxied by an unobserved inflation estimate and the real effective exchange rate (REER) of the Thai baht, respectively. The unobserved inflation series is obtained from an AR(1)-ARCH(1) model, while the unobserved REER of the Thai baht is obtained from a naive OLS-GARCH(1,1) model. According to the empirical results, the AR(|2|) equation, which includes seven significant variables, namely capital stock, labor, imports of capital goods, trade openness, REER uncertainty of the Thai baht, one-period-lagged GDP, and a dummy for the 2009 world financial crisis, presents the most suitable model.
The autoregressive model assumes constant coefficients, which could introduce bias. This is not the case for the recursive coefficient model from the state space framework, which allows the coefficients to transition over time. The state space model thus reveals the productivity or effect of each significant factor in more detail. The state coefficients are estimated based on the AR(|2|) equation, with the exception of the one-period-lagged GDP and the 2009 world financial crisis dummy. The findings shed light on the fact that these factors appear stable through time since the occurrence of the world financial crisis together with the political situation in Thailand; these two events could lower confidence in the Thai economy. Moreover, the state coefficients highlight the sluggish rate of machinery replacement and the rather low technology of capital goods imported from abroad. The Thai government should apply proactive policies, for instance via taxation and specific credit policies, to improve technological advancement. Another interesting finding is the issue of trade openness, which shows a negative transition effect along the sample period. This could be explained by the loss of price competitiveness to imported goods, especially under the widespread implementation of free trade agreements. The Thai government should carefully handle regulations and investment incentive policy by focusing on strengthening small and medium enterprises.

Keywords: autoregressive model, economic growth, state space model, Thailand

Procedia PDF Downloads 151
150 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield based on meteorological records. The prediction models used in this paper can be classified into model-driven and data-driven approaches, according to their modeling methodologies. The model-driven approaches are based on crop mechanistic modeling. They describe crop growth in interaction with the environment as dynamical systems. However, calibrating such a dynamical system is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. The data-driven approach to yield prediction, on the other hand, is free of the complex biophysical process, but it imposes strict requirements on the dataset. A second contribution of the paper is the comparison of the model-driven method with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso regression, principal components regression, and partial least squares regression) and machine learning methods (Random Forest, k-nearest neighbor, artificial neural network, and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate crop prediction capacity.
The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method for calibrating the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to underline the stresses suffered by the crop or to identify the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
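The data-driven evaluation described above can be sketched with scikit-learn. The snippet below uses a synthetic 720-record stand-in for the USDA county-yield dataset (the feature count, noise level, and resulting error values are illustrative assumptions, not the paper's data), computing RMSEP and a relative MAEP under 5-fold cross-validation.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_predict

# Synthetic stand-in for the 720 county-scale yield records
X, y = make_regression(n_samples=720, n_features=10, noise=10.0, random_state=0)
y = y - y.min() + 50.0  # shift to positive "yield" values so relative error makes sense

cv = KFold(n_splits=5, shuffle=True, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
pred = cross_val_predict(model, X, y, cv=cv)  # out-of-fold predictions

rmsep = np.sqrt(np.mean((y - pred) ** 2))            # root mean square error of prediction
maep_pct = 100.0 * np.mean(np.abs(y - pred) / y)     # MAE relative to observed yield, in %
print(f"RMSEP={rmsep:.2f}, MAEP={maep_pct:.2f}%")
```

Swapping `RandomForestRegressor` for `Ridge`, `Lasso`, `KNeighborsRegressor`, or `SVR` reproduces the comparison structure across the method families the paper lists.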

Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest

Procedia PDF Downloads 232
149 Measurement of Fatty Acid Changes in Post-Mortem Belowground Carcass (Sus-scrofa) Decomposition: A Semi-Quantitative Methodology for Determining the Post-Mortem Interval

Authors: Nada R. Abuknesha, John P. Morgan, Andrew J. Searle

Abstract:

Information regarding the post-mortem interval (PMI) in criminal investigations is vital to establish a time frame when reconstructing events. PMI is defined as the time period that has elapsed between the occurrence of death and the discovery of the corpse. Adipocere, commonly referred to as ‘grave-wax’, is formed when post-mortem adipose tissue is converted into a solid material that is heavily comprised of fatty acids. Adipocere is of interest to forensic anthropologists, as its formation slows down the decomposition process. Therefore, analysing the changes in the patterns of fatty acids during the early decomposition process may make it possible to estimate the period of burial, and hence the PMI. The current study investigated the fatty acid composition and patterns in buried pig fat tissue, in an attempt to determine whether particular patterns of fatty acid composition are associated with the duration of burial and hence may be used to estimate PMI. Adipose tissue from the abdominal region of domestic pigs (Sus scrofa) was used to model the human decomposition process. A 17 x 20 cm piece of pork belly was buried in a shallow artificial grave, and weekly samples (n=3) of the buried pig fat tissue were collected over an 11-week period. The marker fatty acids palmitic (C16:0), oleic (C18:1n-9) and linoleic (C18:2n-6) acid were extracted from the buried pig fat tissue and analysed as fatty acid methyl esters by gas chromatography. Levels of the marker fatty acids were quantified from their respective standards. The concentrations of C16:0 (69.2 mg/mL) and C18:1n-9 (44.3 mg/mL) at time zero exhibited significant fluctuations during the burial period. Levels rose (116 and 60.2 mg/mL, respectively) and then fell from the second week to reach 19.3 and 18.3 mg/mL, respectively, at week 6.
Levels showed another increase at week 9 (66.3 and 44.1 mg/mL, respectively), followed by a gradual decrease at week 10 (20.4 and 18.5 mg/mL, respectively). A sharp increase was observed in the final week (131.2 and 61.1 mg/mL, respectively). Conversely, the levels of C18:2n-6 remained more or less constant throughout the study. In addition to fluctuations in the concentrations, several new fatty acids appeared in the later weeks, while other fatty acids that were detectable in the time-zero sample were lost in the later weeks. There are several probable opportunities to utilise fatty acid analysis as a basic technique for approximating PMI: the quantification of marker fatty acids and the detection of selected fatty acids that either disappear or appear during the burial period. This pilot study indicates that this may be a potential semi-quantitative methodology for determining the PMI. Ideally, the analysis of particular fatty acid patterns in the early stages of decomposition could be an additional tool alongside the techniques already available for estimating the PMI of a corpse.
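Quantifying a marker fatty acid against external standards, as described above, reduces to inverting a linear calibration curve of GC peak area against known concentration. The standard concentrations and peak areas below are made-up stand-ins, not the study's measurements.

```python
import numpy as np

# Hypothetical external calibration for one marker fatty acid (e.g. the
# palmitic acid methyl ester): standard concentrations (mg/mL) vs GC peak
# areas (arbitrary units). All numbers are illustrative.
std_conc = np.array([10.0, 25.0, 50.0, 75.0, 100.0])
std_area = np.array([1.02e5, 2.55e5, 5.08e5, 7.49e5, 1.01e6])

slope, intercept = np.polyfit(std_conc, std_area, 1)  # linear calibration curve

def quantify(peak_area):
    """Back-calculate concentration (mg/mL) from a sample's peak area."""
    return (peak_area - intercept) / slope

print(round(quantify(7.0e5), 1))  # concentration of a hypothetical sample
```

In practice the calibration would be checked for linearity (R²) and re-run with each batch; the sketch shows only the back-calculation step.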

Keywords: adipocere, fatty acids, gas chromatography, post-mortem interval

Procedia PDF Downloads 132
148 A Quadratic Model to Early Predict the Blastocyst Stage with a Time Lapse Incubator

Authors: Cecile Edel, Sandrine Giscard D'Estaing, Elsa Labrune, Jacqueline Lornage, Mehdi Benchaib

Abstract:

Introduction: The use of incubators equipped with time-lapse technology in Assisted Reproductive Technology (ART) allows continuous surveillance of embryos. With morphokinetic parameters, algorithms are available to predict the potential outcome of an embryo. However, the proposed time-lapse algorithms do not take missing data into account, so some embryos cannot be classified. The aim of this work is to construct a predictive model that works even in the case of missing data. Materials and methods: Patients: A retrospective study was performed in the reproductive biology laboratory of the hospital ‘Femme Mère Enfant’ (Lyon, France) between 1 May 2013 and 30 April 2015. Embryos (n=557) obtained from couples (n=108) were cultured in a time-lapse incubator (Embryoscope®, Vitrolife, Goteborg, Sweden). Time-lapse incubator: The morphokinetic parameters obtained during the first three days of embryo life were used to build the predictive model. Predictive model: A quadratic regression was performed between the number of cells and time: N = a·T² + b·T + c, where N is the number of cells at time T (in hours). The regression coefficients were calculated with Excel software (Microsoft, Redmond, WA, USA); a program in Visual Basic for Applications (VBA) (Microsoft) was written for this purpose. The quadratic equation was used to derive a value that predicts blastocyst formation: the synthetize value. The area under the curve (AUC) obtained from the ROC curve was used to assess the performance of the regression coefficients and the synthetize value. A cut-off value was calculated for each regression coefficient and for the synthetize value so as to obtain two groups for which the difference in blastocyst formation rate across the cut-off was maximal. The data were analyzed with SPSS (IBM, Chicago, IL, USA). Results: Among the 557 embryos, 79.7% reached the blastocyst stage.
The synthetize value corresponds to the value calculated at a time value of 99 hours, for which the highest AUC was obtained. The AUC was 0.648 (p < 0.001) for regression coefficient ‘a’, 0.363 (p < 0.001) for regression coefficient ‘b’, 0.633 (p < 0.001) for regression coefficient ‘c’, and 0.659 (p < 0.001) for the synthetize value. The results are presented as follows: blastocyst formation rate below the cut-off value versus blastocyst formation rate above the cut-off value. For regression coefficient ‘a’ the optimal cut-off value was -1.14×10⁻³ (61.3% versus 84.3%, p < 0.001); it was 0.26 for regression coefficient ‘b’ (83.9% versus 63.1%, p < 0.001), -4.4 for regression coefficient ‘c’ (62.2% versus 83.1%, p < 0.001), and 8.89 for the synthetize value (58.6% versus 85.0%, p < 0.001). Conclusion: This quadratic regression makes it possible to predict the outcome of an embryo even in the case of missing data. The three regression coefficients and the synthetize value could represent the identity card of an embryo. The ‘a’ regression coefficient represents the acceleration of cell division, and the ‘b’ regression coefficient the speed of cell division. We hypothesize that the ‘c’ regression coefficient could represent the intrinsic potential of an embryo, which could depend on the oocyte from which the embryo originates. These hypotheses should be confirmed by studies analyzing the relationship between the regression coefficients and ART parameters.
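The modeling pipeline above (a quadratic fit per embryo, a synthetize value evaluated at T = 99 h, and an AUC from the ROC curve) can be sketched as follows. The cohort is synthetic and all growth-rate parameters are illustrative assumptions; only the structure mirrors the described method.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def fit_embryo(times_h, n_cells):
    """Fit N = a*T^2 + b*T + c to one embryo's first three days of cell counts."""
    a, b, c = np.polyfit(times_h, n_cells, 2)
    return a, b, c

# Synthetic cohort: "good" embryos divide faster on average (assumed rates)
n_embryos = 200
reached_blastocyst = rng.random(n_embryos) < 0.8  # ~80%, as in the study
synthetize = []
for good in reached_blastocyst:
    t = np.linspace(0, 72, 10)  # hours, covering the first three days
    rate = 0.12 if good else 0.08
    counts = 1 + rate * t + 0.001 * t**2 + rng.normal(scale=0.5, size=t.size)
    a, b, c = fit_embryo(t, counts)
    synthetize.append(a * 99**2 + b * 99 + c)  # value of the fitted curve at T = 99 h

auc = roc_auc_score(reached_blastocyst, synthetize)
print(f"AUC = {auc:.3f}")
```

Because the polynomial fit tolerates unevenly spaced or missing observation times, the same per-embryo fit goes through even when some time points are absent, which is the property the abstract emphasizes.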

Keywords: ART procedure, blastocyst formation, time-lapse incubator, quadratic model

Procedia PDF Downloads 308
147 Leveraging Power BI for Advanced Geotechnical Data Analysis and Visualization in Mining Projects

Authors: Elaheh Talebi, Fariba Yavari, Lucy Philip, Lesley Town

Abstract:

The mining industry generates vast amounts of data, necessitating robust data management systems and advanced analytics tools to achieve better decision-making in developing mining production and maintaining safety. This paper highlights the advantages of Power BI, a powerful business intelligence tool, over traditional Excel-based approaches for effectively managing and harnessing mining data. Power BI enables professionals to connect and integrate multiple data sources, ensuring real-time access to up-to-date information. Its interactive visualizations and dashboards offer an intuitive interface for exploring and analyzing geotechnical data. Advanced analytics is a collection of data analysis techniques used to improve decision-making. Leveraging some of the most complex techniques in data science, advanced analytics is used for everything from detecting data errors and ensuring data accuracy to directing the development of future project phases. However, while Power BI is a robust tool, specific visualizations required by geotechnical engineers may face limitations. This paper studies the use of Python or R programming within the Power BI dashboard to enable advanced analytics, additional functionality, and customized visualizations. The dashboard provides comprehensive tools for analyzing and visualizing key geotechnical data metrics, including spatial representation on maps, field and lab test results, and subsurface rock and soil characteristics. Advanced visualizations such as borehole logs and stereonets were implemented using Python programming within the Power BI dashboard, enhancing the understanding and communication of geotechnical information. Moreover, the dashboard's flexibility allows for the incorporation of additional data and visualizations based on the project scope and available data, such as pit design, rockfall analyses, rock mass characterization, and drone data.
This further enhances the dashboard's usefulness in future projects, including the operation, development, closure, and rehabilitation phases, and helps minimize the need for multiple software programs within a project. This geotechnical dashboard in Power BI serves as a user-friendly solution for analyzing, visualizing, and communicating both new and historical geotechnical data, aiding informed decision-making and efficient project management throughout the various project stages. Its ability to generate dynamic reports and share them with clients in a collaborative manner further enhances decision-making processes and facilitates effective communication within geotechnical projects in the mining industry.
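A Python visual inside a Power BI dashboard, as described above, is a script that receives the selected fields as a pandas DataFrame named `dataset` and draws with matplotlib. The sketch below mocks that DataFrame so it stays runnable outside Power BI; the column names (`depth`, `soil_type`) and the simple borehole-log rendering are illustrative assumptions, not the paper's implementation.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend here; Power BI handles rendering itself
import matplotlib.pyplot as plt
import pandas as pd

# Power BI injects `dataset` into the Python visual; mocked here to stay runnable
dataset = pd.DataFrame({
    "depth": [0, 2, 5, 9, 14],                              # interval tops/base, metres
    "soil_type": ["fill", "clay", "silt", "sand", "rock"],  # hypothetical units
})

fig, ax = plt.subplots(figsize=(2.5, 6))
colors = {"fill": "0.7", "clay": "tab:brown", "silt": "tab:olive",
          "sand": "gold", "rock": "0.4"}
d = dataset.sort_values("depth").reset_index(drop=True)
for i in range(len(d) - 1):
    top, base = d.loc[i, "depth"], d.loc[i + 1, "depth"]
    # Each stratigraphic interval becomes one stacked bar segment
    ax.bar(0, base - top, bottom=top, color=colors[d.loc[i, "soil_type"]], width=1)
    ax.text(0.6, (top + base) / 2, d.loc[i, "soil_type"], va="center")
ax.invert_yaxis()  # depth increases downward, as on a borehole log
ax.set_ylabel("Depth (m)")
ax.set_xticks([])
plt.show()
```

Inside Power BI, dropping the mock `dataset` assignment and binding the two fields in the visual's field well would yield the same log per selected borehole.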

Keywords: geotechnical data analysis, Power BI, visualization, decision-making, mining industry

Procedia PDF Downloads 92
146 Effects of the Age, Education, and Mental Illness Experience on Depressive Disorder Stigmatization

Authors: Soowon Park, Min-Ji Kim, Jun-Young Lee

Abstract:

Motivation: The stigma of mental illness has been studied in many disciplines, including social psychology, counseling psychology, sociology, psychiatry, public health care, and related areas, because individuals labeled as ‘mentally ill’ are often deprived of their rights and their life opportunities. To understand what deepens the stigma of mental illness, it is important to understand its influencing factors. Problem statement: Depression is a common disorder in adults, but the incidence of help-seeking is low. Researchers have believed that this poor help-seeking behavior is related to the stigma of mental illness, which results from low mental health literacy. However, it is uncertain whether increasing mental health literacy decreases mental health stigmatization. Furthermore, even though decreasing stigmatization is important, the stigma of mental illness remains a stable and long-lasting phenomenon. Thus, factors other than knowledge about mental disorders have the power to maintain the stigma. Investigating the factors that facilitate the stigma of psychiatric disease could help lower social stigmatization. Approach: Face-to-face interviews were conducted with a multi-cluster sample. A total of 700 Korean participants (38% male), ranging in age from 18 to 78 (M(SD)age = 48.5(15.7)), answered demographic questions, the Korean version of Link’s Perceived Devaluation and Discrimination (PDD) scale for the assessment of social stigmatization against depression, and the Korean version of the WHO Composite International Diagnostic Interview for the assessment of mental disorders. Multiple regression was conducted to find the factors predicting social stigmatization against depression. Age, sex, years of education, income, living location, and experience of mental illness were used as predictors. Results: The predictors accounted for 14% of the variance in the stigma of depressive disorders (F(6, 693) = 20.27, p < .001).
Among them, only age, years of education, and experience of mental illness significantly predicted social stigmatization against depression. The standardized regression coefficient of age had a negative association with stigmatization (β = -.20, p < .001), whereas years of education (β = .20, p < .001) and experience of mental illness (β = .08, p < .05) positively predicted depression stigmatization. Conclusions: The present study clearly demonstrates the association between personal factors and depressive disorder stigmatization. Younger age, more education, and self-stigma appeared to increase stigmatization. Young, highly educated, and mentally ill people tend to reject patients with depressive disorder as friends, teachers, or babysitters; they also tend to think that those patients have lower intelligence and abilities. These results suggest the possibility that people from high social classes, or highly educated people, who have the power to make decisions, help maintain the social stigma against patients with mental illness. Raising awareness that people from high social classes hold more stigmatizing attitudes toward depressive disorders will help decrease biased attitudes against mentally ill patients.

Keywords: depressive disorder stigmatization, age, education, self-stigma

Procedia PDF Downloads 406
145 Evaluating Daylight Performance in an Office Environment in Malaysia, Using Venetian Blind System: Case Study

Authors: Fatemeh Deldarabdolmaleki, Mohamad Fakri Zaky Bin Ja'afar

Abstract:

Having a daylit space together with a view results in a pleasant and productive environment for office employees. A daylit space is a space that utilizes daylight as a basic source of illumination to fulfill users’ visual demands while minimizing electric energy consumption. Malaysian weather is hot and humid all year round because of the country’s location in the equatorial belt. However, because most of the commercial buildings in Malaysia are air-conditioned, huge glass windows are normally installed in order to keep the physical and visual relation between inside and outside. As a result of the climatic situation and this trend, an ordinary office has large heat gains, glare, and discomfort for occupants. Balancing occupant comfort and energy conservation in a tropical climate is a real challenge. This study concentrates on evaluating a venetian blind system using per-pixel analysis tools based on the cut-out metrics suggested in the literature. The workplace area in a private office room was selected as a case study. An eight-day measurement experiment was conducted to investigate the effect of different venetian blind angles in an office area under daylight conditions in Serdang, Malaysia. The study goal was to explore the daylight comfort of a commercially available venetian blind system, its daylight sufficiency and excess (8:00 AM to 5:00 PM), as well as glare. Recently developed software for analyzing high dynamic range images (HDRI captured by a CCD camera), such as the Radiance-based Evalglare and hdrscope, helps investigate luminance-based metrics. The main key factors are illuminance and luminance levels, mean and maximum luminance, daylight glare probability (DGP), and the luminance ratio of the selected mask regions. The findings show that in most cases the morning session needs artificial lighting in order to achieve daylight comfort. However, in some conditions (e.g. 10° and 40° slat angles) the workplane illuminance level exceeds the maximum of 2000 lx in the second half of the day. Generally, a rising trend is observed in mean window luminance, and the most unpleasant cases occur after 2 P.M. Considering the luminance criteria rating, the uncomfortable conditions occur in the afternoon session. Surprisingly, even in the no-blind condition, extreme window/task luminance ratios are not common. Regarding daylight glare probability, no DGP value higher than 0.35 occurred in this experiment.
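The sufficiency/excess and glare thresholds used above (a 2000 lx workplane maximum and a DGP limit of 0.35) can be wrapped in a small classification helper. The 500 lx lower bound and the sample readings are illustrative assumptions, not values from the study.

```python
def classify_hour(workplane_lux, dgp):
    """Label one measurement hour for daylight sufficiency and glare.

    Thresholds: 2000 lx upper bound and DGP 0.35 follow the study; the
    500 lx lower bound is an assumed office target, not from the paper.
    """
    if workplane_lux < 500:
        level = "needs artificial lighting"
    elif workplane_lux <= 2000:
        level = "daylight sufficient"
    else:
        level = "daylight excess"
    glare = "glare risk" if dgp > 0.35 else "imperceptible glare"
    return level, glare

# Hypothetical hourly readings: (hour, workplane lux, DGP)
hours = [(8, 320, 0.12), (11, 900, 0.22), (14, 2350, 0.31), (16, 1500, 0.25)]
for hour, lux, dgp in hours:
    print(hour, *classify_hour(lux, dgp))
```

Running such a rule over the full 8:00 AM to 5:00 PM series per blind angle reproduces the kind of sufficiency/excess tally the abstract reports.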

Keywords: daylighting, energy simulation, office environment, Venetian blind

Procedia PDF Downloads 260
144 Heat Vulnerability Index (HVI) Mapping in Extreme Heat Days Coupled with Air Pollution Using Principal Component Analysis (PCA) Technique: A Case Study of Amiens, France

Authors: Aiman Mazhar Qureshi, Ahmed Rachid

Abstract:

Extreme heat events are an emerging human environmental health concern in dense urban areas due to anthropogenic activities. High spatial and temporal resolution heat maps are important for urban heat adaptation and mitigation, helping to indicate the hotspots that require the attention of city planners. The Heat Vulnerability Index (HVI) is an important approach used by decision-makers and urban planners to identify heat-vulnerable communities and areas that require heat stress mitigation strategies. Amiens is a medium-sized French city where the average temperature has increased by 1°C since the year 2000. Extreme heat events were recorded in the month of July in each of the last three consecutive years, 2018, 2019, and 2020. Poor air quality, especially ground-level ozone, has been observed mainly during the same hot period. In this study, we evaluated the HVI in Amiens during the extreme heat days recorded in those three years (2018, 2019, and 2020). The Principal Component Analysis (PCA) technique is used for fine-scale vulnerability mapping. The main data considered for developing the HVI model are (a) socio-economic and demographic data; (b) air pollution; (c) land use and cover; (d) elderly heat illness; (e) social vulnerability; and (f) remote sensing data (land surface temperature (LST), mean elevation, NDVI, and NDWI). The output maps identified the hot zones through comprehensive GIS analysis. The resulting map shows that high HVI exists in three typical areas: (1) where the population density is quite high and the vegetation cover is small, (2) artificial surfaces (built-up areas), and (3) industrial zones that release thermal energy and ground-level ozone, while areas with low HVI are located in natural landscapes such as rivers and grasslands. The study also illustrates system theory with a causal diagram, developed after the data analysis, in which anthropogenic activities and air pollution appear in correspondence with extreme heat events in the city.
Our suggested index can be a useful tool to guide urban planners, municipalities, decision-makers, and public health professionals in targeting areas at high risk of extreme heat and air pollution for future intervention, adaptation, and mitigation measures.
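One common way to combine standardized indicators into an HVI with PCA, as described above, is to weight each component's scores by its explained variance and rescale the sum for mapping. The indicator matrix below is random stand-in data, and the three-component choice is an illustrative assumption, not the paper's configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Hypothetical indicator matrix: one row per census block, one column per
# vulnerability indicator (population density, elderly share, ozone, LST,
# NDVI, ...). Random numbers stand in for real data.
n_blocks, n_indicators = 300, 8
X = rng.normal(size=(n_blocks, n_indicators))

Xz = StandardScaler().fit_transform(X)  # PCA needs comparable scales
pca = PCA(n_components=3).fit(Xz)
scores = pca.transform(Xz)

# Weight each component score by its explained variance ratio, sum, then
# rescale to 0-1 so the index can be binned and mapped in GIS
w = pca.explained_variance_ratio_
hvi = scores @ w
hvi = (hvi - hvi.min()) / (hvi.max() - hvi.min())
print(hvi.shape, hvi.min(), hvi.max())
```

In a GIS workflow, the resulting 0-1 vector would be joined back to block geometries and classed into vulnerability quantiles to produce the hotspot map.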

Keywords: heat vulnerability index, heat mapping, heat health-illness, remote sensing, urban heat mitigation

Procedia PDF Downloads 151
143 Social Marketing – An Integrated and Comprehensive Nutrition Communication Strategy to Improve the Iron Nutriture among Preschool Children

Authors: Manjula Kola, K. Chandralekha

Abstract:

Anemia is one of the world’s most widespread health problems, and its prevalence in South Asia is among the highest in the world. Iron deficiency anemia accounts for almost 85 percent of all types of anemia in India and affects more than half of the total population. Women of childbearing age, particularly pregnant women, infants, preschool children, and adolescents are at greatest risk of developing iron deficiency anemia. In India, 74 percent of children between 6 and 35 months of age are anemic. Children between 1 and 6 years in major cities show a high prevalence rate of 64.8 percent. Iron deficiency anemia is not only a public health problem but also a development problem. Its prevention and reduction must be viewed as an investment in human capital that will enhance development and reduce poverty. Ending this hidden hunger in the form of iron deficiency is the most important achievable international health goal. Eliminating the underlying problem is essential to the sustained elimination of iron deficiency anemia. Intervention programmes toward sustained elimination need to be broadly based so that the interventions become accepted community practices. Hence, intervention strategies need to go well beyond traditional health and nutrition systems and be based upon empowering people and communities so that they become capable of arranging for, and sustaining, an adequate intake of iron-rich foods independent of external support. Such strategies must necessarily be multisectoral and integrate interventions with social communication, evaluation, and surveillance. The main objective of the study was to design a community-based nutrition intervention using the theoretical framework of social marketing to sustain improvement of iron nutriture among preschool children. In order to carry out the study, eight rural communities in Chittoor district of Andhra Pradesh, India, were selected.
Formative research was carried out for situational analysis, and baseline data were generated on demographic and socioeconomic status; dietary intakes; the knowledge, attitudes, and practices of the mothers of preschool children; and the clinical and hemoglobin status of the target group. Based on the formative research results, the research area was divided into four groups: experimental areas I, II, and III and a control area. A community-based, integrated, and comprehensive social marketing intervention was designed based on various theories and models of nutrition education and communication. In experimental area I, a nutrition intervention using social marketing together with weekly iron-folic acid supplementation was given to improve the iron nutriture of preschool children. In experimental area II, social marketing alone was implemented, and in experimental area III, iron supplementation alone was given. No intervention was given in the control area. The impact evaluation revealed that, among the different interventions tested, the integrated social marketing intervention produced the best outcomes. The overall observations of the study indicate that social marketing is an integrated and functional strategy for nutrition communication to prevent and control iron deficiency. Various theoretical frameworks and models for nutrition communication facilitate the design of culturally appropriate interventions, which achieved improvements in knowledge, attitudes, and practices, thereby resulting in a successful impact on the nutritional status of the target groups.

Keywords: anemia, iron deficiency, social marketing, theoretical framework

Procedia PDF Downloads 406
142 Unpacking the Spatial Outcomes of Public Transportation in a Developing Country Context: The Case of Johannesburg

Authors: Adedayo B. Adegbaju, Carel B. Schoeman, Ilse M. Schoeman

Abstract:

The unique urban context that emanated from the apartheid history of South Africa informed the transport landscape of the City of Johannesburg. Apartheid's divisive spatial planning and land use management policies promoted sprawl and separated workers from job opportunities. This was further exacerbated by poor funding of public transport and road designs that encouraged the use of private cars. However, the democratization of the country in 1994 and the hosting of the 2010 FIFA World Cup provided a new impetus to the city's public transport-oriented urban planning. At the same time, the state's new approach to policy formulation, which treats the provision of public transport as one of the tools to end years of marginalization and inequality, soon began to be reflected in the planning decisions of other spheres of government. The Rea Vaya BRT and the Gautrain were implemented by the municipal and provincial governments, respectively, to demonstrate strong political will and commitment to the new policy direction. While the Gautrain was implemented to facilitate elite movement within Gauteng and to crowd in investment and economic growth around station nodes, the BRT was provided for previously marginalized public transport users as a sustainable alternative to the dominant minibus taxi. The aim of this research is to evaluate the spatial impacts of the Gautrain and the Rea Vaya BRT on the City of Johannesburg and to inform future outcomes by determining the existing potentials. Using the case study approach, with a focus on BRT and fast rail in a metropolitan context, the triangulation research method, which combines various data collection methods, was used to determine the research outcomes. Interviews, questionnaires, field observation, and databases such as REX, Quantec, StatsSA, the GCRO observatory, the national and provincial household travel surveys, and the quality of life surveys provided the basis for data collection.
The research concludes that the Gautrain has demonstrated that viable alternatives to the private car can be provided, as reflected in satisfactory feedback from users; some of its station nodes (Sandton, Rosebank) have shown promise for transit-oriented development, one of the project's key objectives. The other stations have been unable to stimulate growth for reasons such as the non-implementation of their urban design frameworks and the lack of the public sector investment required to attract private investors. The Rea Vaya BRT continues to be expanded in spite of both its inability to induce modal change and its low ridership figures. The research identifies factors such as the low peak-to-base ratio, pricing, and the city's disjointed urban fabric as some of the reasons for its below-average performance. Drawing from these highlights and limitations, the study recommends that public transport provision should be institutionally integrated across and within spheres of government. Similarly, harmonization of the funding structure and a better understanding of users' needs and travel patterns, underlined by continuity of policy direction and objectives, will promote optimal outcomes.

Keywords: bus rapid transit, Gautrain, Rea Vaya, sustainable transport, spatial and transport planning, transit oriented development

Procedia PDF Downloads 115
141 Light-Controlled Gene Expression in Yeast

Authors: Peter M. Kusen, Georg Wandrey, Christopher Probst, Dietrich Kohlheyer, Jochen Buchs, Jorg Pietruszkau

Abstract:

Light as a stimulus provides the capability to develop regulation techniques for customizable gene expression. A great advantage is the extremely flexible and accurate dosing that can be performed in a non-invasive and sterile manner, even for high-throughput technologies. Therefore, light regulation in a multiwell microbioreactor system was realized, providing the opportunity to control gene expression with outstanding complexity. A light-regulated gene expression system in Saccharomyces cerevisiae was designed applying the strategy of caged compounds. These compounds are regulator molecules carrying photo-labile protecting groups, which render them biologically inactive until they are reactivated by irradiation under certain light conditions. The “caging” of a repressor molecule which is consumed after deprotection was essential to create a flexible expression system. Thereby, gene expression could be temporally repressed by irradiation and the subsequent release of the active repressor molecule. Afterwards, the repressor molecule is consumed by the yeast cells, leading to reactivation of gene expression. A yeast strain harboring a construct with the corresponding repressible promoter in combination with a fluorescent marker protein was applied in a Photo-BioLector platform, which allows individual irradiation as well as online fluorescence and growth detection. This device was used to precisely control the repression duration by adjusting the amount of released repressor via different irradiation times. With the presented screening platform, the regulation of complex expression procedures was achieved by combining several repression/derepression intervals. In particular, a stepwise increase of temporally constant expression levels was demonstrated, which could be used to study concentration-dependent effects on cell functions. 
Linear expression rates with variable slopes could also be shown, representing a possible solution for challenging protein productions in which excessive production rates lead to misfolding or intoxication. Finally, the very flexible regulation enabled accurate control over the expression induction, although a repressible promoter was used. Summing up, the continuous online regulation of gene expression has the potential to synchronize gene expression levels to optimize metabolic flux, artificial enzyme cascades, growth rates for co-cultivations, and many other applications that depend on complex expression regulation. The developed light-regulated expression platform represents an innovative screening approach to find optimization potential for production processes.
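The repression/derepression logic described above lends itself to a toy simulation. The sketch below is not the authors' model; the pulse length, rate constants, and repression term are all illustrative assumptions chosen only to reproduce the qualitative stepwise behavior (light uncages a repressor, cells consume it, expression resumes):

```python
import numpy as np

def simulate_expression(pulses, t_end=100.0, dt=0.01,
                        k_release=5.0, k_consume=0.1, k_expr=1.0):
    """Toy dynamics: each light pulse (one time-unit long) uncages a
    repressor R, which the cells then consume; reporter G accumulates at
    a rate that drops while R is high. All constants are illustrative
    assumptions, not values from the study."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    R = np.zeros(n)   # active (uncaged) repressor
    G = np.zeros(n)   # accumulated reporter protein
    for i in range(1, n):
        light = any(t0 <= t[i] < t0 + 1.0 for t0 in pulses)
        dR = (k_release if light else 0.0) - k_consume * R[i - 1]
        dG = k_expr / (1.0 + R[i - 1] ** 2)   # repression of expression
        R[i] = max(R[i - 1] + dR * dt, 0.0)
        G[i] = G[i - 1] + dG * dt
    return t, R, G
```

Two pulses (e.g. at t = 20 and t = 60) yield the stepwise profile the abstract describes: expression slows sharply after each pulse and resumes as the repressor is consumed.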

Keywords: caged-compounds, gene expression regulation, optogenetics, photo-labile protecting group

Procedia PDF Downloads 329
140 Challenges, Practices, and Opportunities of Knowledge Management in Industrial Research Institutes: Lessons Learned from Flanders Make

Authors: Zhenmin Tao, Jasper De Smet, Koen Laurijssen, Jeroen Stuyts, Sonja Sioncke

Abstract:

Today, the quality of knowledge management (KM) has become one of the underpinning factors in the success of an organization, as it determines the effectiveness of capitalizing on the organization’s knowledge. Overall, KM in an organization consists of five aspects: (knowledge) creation, validation, presentation, distribution, and application. Among others, KM in research institutes is considered a cornerstone, as their activities cover all five aspects. Furthermore, KM in a research institute facilitates the steering committee to envision the future roadmap, identify knowledge gaps, and make decisions on future research directions. Likewise, KM is even more challenging in industrial research institutes. From a technical perspective, technology advancement in the past decades calls for combinations of breadth and depth in expertise that pose challenges in talent acquisition and, therefore, knowledge creation. From a regulatory perspective, the strict intellectual property protection from industry collaborators and/or the contractual agreements made by possible funding authorities form extra barriers to knowledge validation, presentation, and distribution. From a management perspective, seamless KM activities are only guaranteed by inter-disciplinary talents that combine technical background knowledge, management skills, and leadership, let alone international vision. From a financial perspective, the long feedback period of new knowledge, together with the massive upfront investment costs and low reusability of the fixed assets, leads to a low RORC (return on research capital) that jeopardizes KM practice. In this study, we aim to address the challenges, practices, and opportunities of KM in Flanders Make – a leading European research institute specializing in the manufacturing industry. In particular, the analyses encompass an internal KM project which involves functionalities ranging from management to technical domain experts. 
This wide range of functionalities provides comprehensive empirical evidence on the challenges and practices w.r.t. the abovementioned KM aspects. Then, we ground our analysis in the critical dimensions of KM: individuals, socio-organizational processes, and technology. The analyses have three steps: First, we lay the foundation and define the environment of this study by briefing the KM roles played by different functionalities in Flanders Make. Second, we zoom in on the CoreLab MotionS, where the KM project is located. In this step, given the technical domains covered by MotionS products, the challenges in KM are addressed w.r.t. the five KM aspects and three critical dimensions. Third, by detailing the objectives, practices, results, and limitations of the MotionS KM project, we justify the practices and opportunities derived in the execution of KM w.r.t. the challenges addressed in the second step. The results of this study are twofold: First, a KM framework that consolidates past knowledge is developed. A library based on this framework can, therefore, 1) give an overview of past research output, 2) accelerate ongoing research activities, and 3) envision future research projects. Second, the challenges in KM on both the individual level (actions) and the socio-organizational level (e.g., interactions between individuals) are identified. By doing so, suggestions and guidelines are provided for KM in the context of industrial research institutes. To this end, the results of this study are reflected against the findings in existing literature.

Keywords: technical knowledge management framework, industrial research institutes, individual knowledge management, socio-organizational knowledge management

Procedia PDF Downloads 116
139 Comparison between Conventional Bacterial and Algal-Bacterial Aerobic Granular Sludge Systems in the Treatment of Saline Wastewater

Authors: Philip Semaha, Zhongfang Lei, Ziwen Zhao, Sen Liu, Zhenya Zhang, Kazuya Shimizu

Abstract:

The increasing generation of saline wastewater through various industrial activities is becoming a global concern for activated sludge (AS) based biological treatment, which is widely applied in wastewater treatment plants (WWTPs). As for the AS process, an increase in wastewater salinity has a negative impact on its overall performance. The advent of conventional aerobic granular sludge (AGS), or bacterial AGS, biotechnology has gained much attention because of its superior performance. The development of algal-bacterial AGS could enhance nutrients removal, potentially reduce aeration cost through symbiotic algal-bacterial activity, and thus also reduce overall treatment cost. Nonetheless, the potential of salt stress to decrease biomass growth, microbial activity, and nutrient removal exists. Up to the present, little information is available on saline wastewater treatment by algal-bacterial AGS. To the authors’ best knowledge, a comparison of the two AGS systems has not been done to evaluate nutrients removal capacity in the context of salinity increase. This study sought to determine the impact of salinity on the algal-bacterial AGS system in comparison to the bacterial AGS one, contributing to the application of AGS technology in the real world of saline wastewater treatment. In this study, the salt concentrations tested were 0 g/L, 1 g/L, 5 g/L, 10 g/L, and 15 g/L of NaCl under 24-hr artificial illuminance of approximately 97.2 µmol m⁻² s⁻¹; mature bacterial and algal-bacterial AGS were used for the operation of two identical sequencing batch reactors (SBRs) with a working volume of 0.9 L each, respectively. The results showed that the salinity increase caused no apparent change in the color of the bacterial AGS, while the color of the algal-bacterial AGS progressively changed from green to dark green. 
A consequent increase in granule diameter and fluffiness was observed in the bacterial AGS reactor with the increase of salinity, in contrast to a decrease in algal-bacterial AGS diameter. Nitrite accumulation rose from 1.0 mg/L and 0.4 mg/L at 1 g/L NaCl in the bacterial and algal-bacterial AGS systems, respectively, to 9.8 mg/L in both systems as the NaCl concentration varied from 5 g/L to 15 g/L. Almost no ammonia nitrogen was detected in the effluent except at 10 g/L NaCl, where it averaged 4.2 mg/L and 2.4 mg/L in the bacterial and algal-bacterial AGS systems, respectively. Nutrients removal in the algal-bacterial system was relatively higher than in the bacterial AGS in terms of nitrogen and phosphorus removal. Nonetheless, the nutrient removal rate was almost 50% or lower. The results show that algal-bacterial AGS is more adaptable to salinity increase and could be more suitable for saline wastewater treatment. Optimization of operation conditions for the algal-bacterial AGS system would be important to ensure its stably high efficiency in practice.

Keywords: algal-bacterial aerobic granular sludge, bacterial aerobic granular sludge, nutrients removal, saline wastewater, sequencing batch reactor

Procedia PDF Downloads 148
138 Religious Capital and Entrepreneurial Behavior in Small Businesses: The Importance of Entrepreneurial Creativity

Authors: Waleed Omri

Abstract:

With the growth of the small business sector in emerging markets, developing a better understanding of what drives 'day-to-day' entrepreneurial activities has become an important issue for academicians and practitioners. Innovation, as an entrepreneurial behavior, revolves around individuals who creatively engage in new organizational efforts. In a similar vein, the innovation behaviors and processes at the organizational member level are central to any corporate entrepreneurship strategy. Despite the broadly acknowledged importance of entrepreneurship and innovation at the individual level in the establishment of successful ventures, the literature lacks evidence on how entrepreneurs can effectively harness their skills and knowledge in the workplace. The existing literature illustrates that religion can impact the day-to-day work behavior of entrepreneurs, managers, and employees. Religious beliefs and practices could affect daily entrepreneurial activities by fostering mental abilities and traits such as creativity, intelligence, and self-efficacy. In the present study, we define religious capital as a set of personal and intangible resources, skills, and competencies that emanate from an individual’s religious values, beliefs, practices, and experiences and may be used to increase the quality of economic activities. Religious beliefs and practices give individuals religious satisfaction, which can lead them to perform better in the workplace. In addition, religious ethics and practices have been linked to various positive employee outcomes in terms of organizational change, job satisfaction, and entrepreneurial intensity. As investigations of their consequences beyond direct task performance are still scarce, we explore whether religious capital plays a role in entrepreneurs’ innovative behavior. 
In sum, this study explores the determinants of individual entrepreneurial behavior by investigating the relationship between religious capital and entrepreneurs’ innovative behavior in the context of small businesses. To further explain and clarify the religious capital-innovative behavior link, the present study proposes a model to examine the mediating role of entrepreneurial creativity. We use both Islamic work ethics (IWE) and Islamic religious practices (IRP) to measure Islamic religious capital. We use structural equation modeling with a robust maximum likelihood estimation to analyze data gathered from 289 Tunisian small businesses and to explore the relationships among the above-described variables. In line with the theory of planned behavior, only religious work ethics are found to increase the innovative behavior of small businesses’ owner-managers. Our findings also clearly demonstrate that the connection between religious capital-related variables and innovative behavior is better understood if the influence of entrepreneurial creativity, as a mediating variable of the aforementioned relationship, is taken into account. By incorporating both religious capital and entrepreneurial creativity into the innovative behavior analysis, this study provides several important practical implications for promoting the innovation process in small businesses.
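The mediation logic (religious capital, via creativity, to innovative behavior) can be illustrated with a minimal regression-based decomposition. This is a hedged sketch on synthetic data, not the study's SEM analysis; the variable names and path coefficients below are invented purely for demonstration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 289                                   # matches the reported sample size
# Synthetic stand-ins: work ethics X -> creativity M -> innovation Y.
# The 0.6 / 0.5 / 0.1 path strengths are arbitrary illustrative choices.
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(scale=0.8, size=n)            # a-path
Y = 0.5 * M + 0.1 * X + rng.normal(scale=0.8, size=n)  # b-path + direct c'

def ols(y, *cols):
    """Return OLS slopes (intercept dropped) of y on the given columns."""
    design = np.column_stack([np.ones(n), *cols])
    return np.linalg.lstsq(design, y, rcond=None)[0][1:]

(c,) = ols(Y, X)             # total effect of X on Y
(a,) = ols(M, X)             # X -> M
b, c_prime = ols(Y, M, X)    # M -> Y controlling X, and direct effect
indirect = a * b             # the creativity-mediated share
```

In OLS the total effect decomposes exactly into the direct effect plus the indirect effect (c = c' + a·b), which is the intuition behind testing entrepreneurial creativity as a mediator of the religious capital-innovation link.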

Keywords: entrepreneurial behavior, small business, religion, creativity

Procedia PDF Downloads 245
137 Intelligent Indoor Localization Using WLAN Fingerprinting

Authors: Gideon C. Joseph

Abstract:

The ability to localize mobile devices is quite important, as some applications may require location information of these devices to operate or to deliver better services to the users. Although there are several ways of acquiring location data of mobile devices, the WLAN fingerprinting approach has been considered in this work. This approach uses the Received Signal Strength Indicator (RSSI) measurement as a function of the position of the mobile device. RSSI is a quantitative measure of the radio frequency power carried by a signal. RSSI may be used to determine RF link quality and is very useful in dense traffic scenarios where interference is of major concern, for example, indoor environments. This research aims to design a system that can predict the location of a mobile device when supplied with the mobile’s RSSIs. The developed system takes as input the RSSIs relating to the mobile device and outputs parameters that describe the location of the device, such as the longitude, latitude, floor, and building. The relationship between the Received Signal Strengths (RSSs) of mobile devices and their corresponding locations is to be modelled; hence, subsequent locations of mobile devices can be predicted using the developed model. Describing mathematical relationships between the RSSI measurements and the localization parameters is one option for modelling the problem, but the complexity of such an approach makes it unattractive. In contrast, we propose an intelligent system that can learn the mapping of such RSSI measurements to the localization parameters to be predicted. The system is capable of upgrading its performance as more experiential knowledge is acquired. 
The most appealing consideration in using such a system for this task is that complicated mathematical analysis and theoretical frameworks are not needed; the intelligent system on its own learns the underlying relationship in the supplied data (RSSI levels) that corresponds to the localization parameters. The localization parameters to be predicted fall into two different tasks: longitude and latitude of mobile devices are real-valued (a regression problem), while the floor and building of the mobile devices are integer-valued or categorical (a classification problem). This research work presents artificial neural network-based intelligent systems to model the relationship between the RSSI predictors and the mobile device localization parameters. The designed systems were trained and validated on the collected WLAN fingerprint database. The trained networks were then tested with another supplied database to obtain the performance of the trained systems, in terms of the Mean Absolute Error (MAE) for the regression task and error rates for the classification task.
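As a rough illustration of the two task types, the sketch below trains small feed-forward networks on a synthetic fingerprint database. The path-loss formula, floor-plate dimensions, network sizes, and use of scikit-learn are assumptions for demonstration only, not the authors' setup or data:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic fingerprint database: 200 samples, RSSI from 5 access points
# on a 100 m x 60 m floor plate. RSSI decays with log-distance to each AP
# plus noise -- a crude path-loss stand-in, not real survey data.
aps = rng.uniform([0.0, 0.0], [100.0, 60.0], size=(5, 2))
pos = rng.uniform([0.0, 0.0], [100.0, 60.0], size=(200, 2))
dist = np.linalg.norm(pos[:, None, :] - aps[None, :, :], axis=2)
rssi = -40.0 - 20.0 * np.log10(dist + 1.0) + rng.normal(0.0, 1.0, dist.shape)
building = (pos[:, 0] > 50.0).astype(int)   # invented categorical label

# One net for the regression task (x, y) and one for the classification
# task (building), mirroring the two task types in the text.
reg = make_pipeline(StandardScaler(), MLPRegressor(
    hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0))
clf = make_pipeline(StandardScaler(), MLPClassifier(
    hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
reg.fit(rssi[:150], pos[:150])
clf.fit(rssi[:150], building[:150])

mae = np.abs(reg.predict(rssi[150:]) - pos[150:]).mean()
acc = clf.score(rssi[150:], building[150:])
```

On the 50 held-out samples the regressor is scored by MAE and the classifier by accuracy, mirroring the evaluation described above.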

Keywords: indoor localization, WLAN fingerprinting, neural networks, classification, regression

Procedia PDF Downloads 349
136 Industrial Production of the Saudi Future Dwelling: A Saudi Volumetric Solution for Single Family Homes, Leveraging Industry 4.0 with Scalable Automation, Hybrid Structural Insulated Panels Technology and Local Materials

Authors: Bandar Alkahlan

Abstract:

The King Abdulaziz City for Science and Technology (KACST) created the Saudi Future Dwelling (SFD) initiative to identify, localize and commercialize a scalable home manufacturing technology suited to deployment across the Kingdom of Saudi Arabia (KSA). This paper outlines the journey, the creation of the international project delivery team, the product design, the selection of the process technologies, and the outcomes. A target was set to remove 85% of the construction and finishing processes from the building site as these activities could be more efficiently completed in a factory environment. Therefore, integral to the SFD initiative is the successful industrialization of the home building process using appropriate technologies, automation, robotics, and manufacturing logistics. The technologies proposed for the SFD housing system are designed to be energy efficient, economical, fit for purpose from a Saudi cultural perspective, and will minimize the use of concrete, relying mainly on locally available Saudi natural materials derived from the local resource industries. To this end, the building structure is comprised of a hybrid system of structural insulated panels (SIP), combined with a light gauge steel framework manufactured in a large format panel system. The paper traces the investigative process and steps completed by the project team during the selection process. As part of the SFD Project, a pathway was mapped out to include a proof-of-concept prototype housing module and the set-up and commissioning of a lab-factory complete with all production machinery and equipment necessary to simulate a full-scale production environment. The prototype housing module was used to validate and inform current and future product design as well as manufacturing process decisions. A description of the prototype design and manufacture is outlined along with valuable learning derived from the build and how these results were used to enhance the SFD project. 
The industrial engineering concepts and lab-factory detailed design and layout are described in the paper, along with the shop floor I.T. management strategy. Special attention was paid to showcasing all technologies within the lab-factory as part of the engagement strategy with private investors to leverage the SFD project with large-scale factories throughout the Kingdom. A detailed analysis is included of the process surrounding the design, specification, and procurement of the manufacturing machinery, equipment, and logistical manipulators required to produce the SFD housing modules. The manufacturing machinery comprised a combination of standardized and bespoke equipment from a wide range of international suppliers. The paper describes the selection process, pre-ordering trials and studies, and, in some cases, the requirement for additional research and development by the equipment suppliers in order to achieve the SFD objectives. A set of conclusions is drawn describing the results achieved thus far, along with a list of recommended ongoing operational tests, enhancements, research, and development aimed at achieving full-scale engagement with private sector investment and roll-out of the SFD project across the Kingdom.

Keywords: automation, dwelling, manufacturing, product design

Procedia PDF Downloads 122
135 Improving the Utility of Social Media in Pharmacovigilance: A Mixed Methods Study

Authors: Amber Dhoot, Tarush Gupta, Andrea Gurr, William Jenkins, Sandro Pietrunti, Alexis Tang

Abstract:

Background: The COVID-19 pandemic has driven pharmacovigilance towards a new paradigm. Nowadays, more people than ever before are recognising and reporting adverse reactions from medications, treatments, and vaccines. In the modern era, with over 3.8 billion users, social media has become the most accessible medium for people to voice their opinions and so provides an opportunity to engage with more patient-centric and accessible pharmacovigilance. However, the pharmaceutical industry has been slow to incorporate social media into its modern pharmacovigilance strategy. This project aims to make social media a more effective tool in pharmacovigilance, and so reduce drug costs, improve drug safety and improve patient outcomes. This will be achieved by firstly uncovering and categorising the barriers facing the widespread adoption of social media in pharmacovigilance. Following this, the potential opportunities of social media will be explored. We will then propose realistic, practical recommendations to make social media a more effective tool for pharmacovigilance. Methodology: A comprehensive systematic literature review was conducted to produce a categorised summary of these barriers. This was followed by conducting 11 semi-structured interviews with pharmacovigilance experts to confirm the literature review findings whilst also exploring the unpublished and real-life challenges faced by those in the pharmaceutical industry. Finally, a survey of the general public (n = 112) ascertained public knowledge, perception, and opinion regarding the use of their social media data for pharmacovigilance purposes. This project stands out by offering perspectives from the public and pharmaceutical industry that fill the research gaps identified in the literature review. Results: Our results gave rise to several key analysis points. 
Firstly, inadequacies of current Natural Language Processing algorithms hinder effective pharmacovigilance data extraction from social media, and where data extraction is possible, there are significant questions over its quality. Social media also contains a variety of biases towards common drugs, mild adverse drug reactions, and the younger generation. Additionally, outdated regulations for social media pharmacovigilance do not align with new, modern General Data Protection Regulations (GDPR), creating ethical ambiguity about data privacy and level of access. This leads to an underlying mindset of avoidance within the pharmaceutical industry, as firms are disincentivised by the legal, financial, and reputational risks associated with breaking ambiguous regulations. Conclusion: Our project uncovered several barriers that prevent effective pharmacovigilance on social media. As such, social media should be used to complement traditional sources of pharmacovigilance rather than as a sole source of pharmacovigilance data. However, this project adds further value by proposing five practical recommendations that improve the effectiveness of social media pharmacovigilance. These include: prioritising health-orientated social media; improving technical capabilities through investment and strategic partnerships; setting clear regulatory guidelines using multi-stakeholder processes; creating an adverse drug reaction reporting interface inbuilt into social media platforms; and, finally, developing educational campaigns to raise awareness of the use of social media in pharmacovigilance. Implementation of these recommendations would speed up the efficient, ethical, and systematic adoption of social media in pharmacovigilance.
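The extraction difficulty can be made concrete with a deliberately naive keyword matcher. The tiny lexicons and function below are invented for illustration (a production system would use curated terminologies and trained NLP models) and exhibit exactly the failure modes attributed above to current approaches, such as no negation handling and no slang coverage:

```python
import re

# Tiny illustrative lexicons -- invented for this sketch, not a real
# drug or reaction terminology.
DRUGS = {"ibuprofen", "metformin"}
REACTIONS = {"nausea", "headache", "rash"}

def naive_adr_mentions(post: str):
    """Flag (drug, reaction) pairs co-occurring in one post.
    Deliberately naive: no negation handling, no slang, no misspellings."""
    words = set(re.findall(r"[a-z]+", post.lower()))
    return sorted((d, r) for d in DRUGS & words for r in REACTIONS & words)

print(naive_adr_mentions("Started metformin last week, awful nausea since"))
# The matcher also fires on the negated report below -- a false positive
# of the kind that undermines data quality:
print(naive_adr_mentions("No nausea at all on metformin, feeling great"))
```

Co-occurrence alone cannot distinguish a genuine adverse-reaction report from its negation, which is one concrete reason social media data should complement rather than replace traditional pharmacovigilance sources.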

Keywords: adverse drug reaction, drug safety, pharmacovigilance, social media

Procedia PDF Downloads 83
134 Redefining Intellectual Humility in Indian Context: An Experimental Investigation

Authors: Jayashree Gajjam

Abstract:

Intellectual humility (IH) is defined as a virtuous mean between intellectual arrogance and intellectual self-diffidence by the ‘Doxastic Account of IH’ studied, researched, and developed by western scholars no earlier than 2015 at the University of Edinburgh. Ancient Indian philosophical texts, the Upaniṣads, written in the Sanskrit language during the later Vedic period (circa 600-300 BCE), have long addressed the virtue of being humble in several stories and narratives. The current research paper questions and revisits these character traits in an Indian context following an experimental method. Based on the subjective reports of more than 400 Indian teenagers and adults, it argues that while a few traits of IH (such as trustworthiness, respectfulness, intelligence, politeness, etc.) are panhuman and pancultural, a few are not. Some attributes of IH (such as proper pride, open-mindedness, awareness of one’s own strength, etc.) may be taken for arrogance by the Indian population, while other qualities of intellectual diffidence, such as agreeableness and surrendering, can be regarded as characteristics of IH. The paper then gives the reasoning for this discrepancy, which can be traced back to the ancient Indian (Upaniṣadic) teachings that are still prevalent in many Indian families and still anchor their views on IH. The name Upaniṣad itself means ‘sitting down near’ (the Guru, to gain the Supreme knowledge of the Self and the Universe and to set ignorance to rest), which is equivalent to three traits among the BIG SEVEN characterized as IH by western scholars, viz. ‘being a good listener’, ‘curious to learn’, and ‘respect for others’ opinions’. The story of Satyakama Jabala (Chandogya Upaniṣad 4.4-8), who seeks the truth for several years even from the bull, the fire, the swan, and the waterfowl, suggests nothing but the ‘need for cognition’ or ‘desire for knowledge’. 
Nachiketa (Kaṭha Upaniṣad), a boy with a pure mind and heart, follows his father’s words and offers himself to Yama (the God of Death), where, after waiting for Yama for three days and nights, he seeks the knowledge of the mysteries of life and death. Although the main aim of these Upaniṣadic stories is to impart the knowledge of life and death and of the Supreme reality, which can be identified with traits such as ‘curious to learn’, one cannot deny that they have a lot more to offer than mere information about true knowledge, e.g., ‘politeness’, ‘being a good listener’, ‘awareness of one’s own limitations’, etc. The possible future scope of this research includes (1) finding other socio-cultural factors that affect ideas on IH, such as age, gender, caste, type of education, highest qualification, place of residence, and source of income, which may be predominant in current Indian society despite the great teachings of the Upaniṣads, and (2) devising different measures to impart IH to Indian children, teenagers, and younger adults for a harmonious future. The current experimental research can be considered a first step towards these goals.

Keywords: ethics and virtue epistemology, Indian philosophy, intellectual humility, upaniṣadic texts in ancient India

Procedia PDF Downloads 93
133 Effects of Temperature and the Use of Bacteriocins on Cross-Contamination from Animal Source Food Processing: A Mathematical Model

Authors: Benjamin Castillo, Luis Pastenes, Fernando Cerdova

Abstract:

The contamination of food by microbial agents is a common problem in the industry, especially in the elaboration of animal source products. Incorrect manipulation of the machinery or of the raw materials can cause a decrease in production or an epidemiological outbreak due to intoxication. In order to improve food product quality, different methods have been used to reduce or, at least, slow down the growth of pathogens, especially spoilage, infectious, or toxigenic bacteria. These methods are usually carried out under low temperatures and short processing times (abiotic agents), along with the application of antibacterial substances such as bacteriocins (biotic agents), in a controlled and efficient way that fulfills the purpose of bacterial control without damaging the final product. Therefore, the objective of the present study is to design a secondary mathematical model that allows the prediction of the impact of both the biotic and abiotic factors associated with animal source food processing. In order to accomplish this objective, the authors propose a three-dimensional differential equation model whose components are: bacterial growth; release, production, and artificial incorporation of bacteriocins; and changes in the pH level of the medium. These three dimensions are constantly influenced by the temperature of the medium. Secondly, this model is adapted to an idealized situation of cross-contamination in animal source food processing, with the study agents being both the animal product and the contact surface. Thirdly, the stochastic simulations and the parametric sensitivity analysis are compared with reference data. The main result obtained from the analysis and simulations of the mathematical model was the finding that, although bacterial growth can be stopped at lower temperatures, even lower ones are needed to eradicate it. 
However, this can be not only expensive but also counterproductive in terms of the quality of the raw materials; on the other hand, higher temperatures accelerate bacterial growth. In other respects, bacteriocins are an effective alternative in the short and medium terms. Moreover, a low pH level is an indicator of bacterial growth, since many spoilage bacteria are lactic acid bacteria. Lastly, processing times are a secondary agent of concern when the rest of the aforementioned agents are under control. Our main conclusion is that adapting a mathematical model to the context of an industrial process can generate new tools that predict bacterial contamination, the impact of bacterial inhibition, and processing method times. In addition, the proposed mathematical model is a logistic tool of broad application, which can be replicated for non-meat food products, other pathogens, or even contamination by cross-contact of allergenic foods.
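The three-dimensional structure described in the abstract can be sketched numerically. The functional forms and every constant below are illustrative assumptions, not the authors' fitted model; the sketch only reproduces the qualitative conclusions (cold storage halts growth, a bacteriocin dose suppresses the population, growth acidifies the medium):

```python
def simulate(T_celsius, b_dose=0.0, t_end=48.0, dt=0.01):
    """Toy 3-D system in the spirit of the abstract: bacteria N,
    bacteriocin B, and medium pH P, all driven by temperature T.
    Every functional form and constant is an illustrative assumption."""
    mu_max, K, kill = 0.8, 1e9, 4.0                  # invented rates
    mu = mu_max * max(T_celsius - 4.0, 0.0) / 26.0   # crude T dependence
    N, B, P = 1e3, b_dose, 6.5                       # initial state
    for _ in range(int(t_end / dt)):                 # explicit Euler steps
        growth = mu * N * (1.0 - N / K)              # logistic growth
        dN = growth - kill * B * N                   # bacteriocin kill term
        dB = 0.0005 * N / K - 0.01 * B               # release minus decay
        dP = -1e-10 * growth                         # acidification
        N = max(N + dN * dt, 0.0)
        B = max(B + dB * dt, 0.0)
        P = P + dP * dt
    return N, B, P
```

Running the sketch at refrigeration temperature, at an abusive temperature, and at an abusive temperature with an initial bacteriocin dose reproduces the qualitative trade-offs discussed above.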

Keywords: bacteriocins, cross-contamination, mathematical model, temperature

Procedia PDF Downloads 145
132 The Strategic Importance of Technology in the International Production: Beyond the Global Value Chains Approach

Authors: Marcelo Pereira Introini

Abstract:

The global value chains (GVC) approach contributes to a better understanding of the organization of international production amid globalization’s second unbundling from the 1970s on. Mainly due to the tools that help to understand the importance of critical competences, technological capabilities, and the functions performed by each player, GVC research flourished in recent years, rooted in discussing the possibilities of integration and repositioning along regional and global value chains. In this context, part of the literature endorsed a more optimistic view that engaging in fragmented production networks could represent learning opportunities for developing countries’ firms, since the relationship with transnational corporations could allow them to build skills and competences. Increasing recognition that GVCs are based on asymmetric power relations, though, provided another view of the benefits, costs, and development possibilities. Since leading companies tend to restrict the replication of their technologies and capabilities by their suppliers, alternative strategies beyond functional specialization, seen as a way to integrate value chains, began to be broadly highlighted. This paper organizes a coherent narrative about the shortcomings of the GVC analytical framework, while recognizing its multidimensional contributions and recent developments. We adopt two different and complementary perspectives to explore the idea of integration in international production. On one hand, we emphasize obstacles beyond production components, analyzing the role played by intangible assets and intellectual property regimes. On the other hand, we consider the importance of domestic production and innovation systems for technological development. 
In order to provide a deeper understanding of the restrictions on the technological learning of developing countries’ firms, we first build on the notion of intellectual monopoly to analyze how flagship companies can prevent subordinated firms from improving their positions in fragmented production networks. Based on intellectual property protection regimes, we discuss the increasing asymmetries between these players and the decreasing access of some of them to strategic intangible assets. Second, we discuss the role of productive-technological ecosystems and of interactive and systemic technological development processes, as concepts of the Innovation Systems approach. Supporting the idea that endogenous advantages are not only important for the international competitiveness of developing countries’ firms, but that the building of these advantages can itself be a source of technological learning, we focus on local efforts as a crucial element that cannot be replaced by technology imported from abroad. Finally, the paper contributes to the discussion of technological development as a two-dimensional dynamic. If GVC analysis tends to adopt a company-based perspective, stressing the learning opportunities associated with GVC integration, the historical involvement of national states brings up the debate about technology as a central aspect of interstate disputes. In this sense, technology is seen as part of military modernization before also being used in civil contexts, which presupposes its role in national security and productive autonomy strategies. From this outlook, it is important to consider technology as an asset that, incorporated in sophisticated machinery, can be the target of state policies beyond the protection provided by intellectual property regimes, such as export controls and inward-investment restrictions.

Keywords: global value chains, innovation systems, intellectual monopoly, technological development

Procedia PDF Downloads 82
131 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids

Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje

Abstract:

The Progressing Cavity Pump (PCP) is a type of positive displacement pump that is gaining importance as artificial lift equipment in the heavy oil field. The most commonly used PCP is the single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed experimentally and with a Computational Fluid Dynamic (CFD) approach using the DCAB031 model installed in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate, with a flow control valve installed at the outlet of the pump. The flow rate handled was measured with a FLOMEC-OM025 oval gear flowmeter. For each flow rate considered, the pump’s rotational speed and power input were controlled using an Invertek Optidrive E3 frequency driver. Once steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide-range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model was implemented in Star-CCM+ using an overset mesh that captures the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations provide detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations agree well with the experimental data, with a Mean Squared Error (MSE) under 21%, and the Grid Convergence Index (GCI) was calculated to validate the mesh, yielding a value of 2.5%. Three different rotational speeds were evaluated (200, 300, 400 rpm), showing a directly proportional relationship between the rotational speed of the rotor and the calculated flow rate. 
The maximum production rates at the different speeds were 3.8 GPM, 4.3 GPM, and 6.1 GPM for water, and 1.8 GPM, 2.5 GPM, and 3.8 GPM for the oils tested, respectively. Likewise, an inversely proportional relationship between the viscosity of the fluid and pump performance was observed: the viscous oils showed the lowest pressure increase and the lowest volumetric flow pumped, with a degradation of around 30% of the pressure rise between performance curves. Finally, the Productivity Index (PI) remained approximately constant across the different speeds evaluated; however, it decreased between fluids as viscosity increased.
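For readers unfamiliar with the mesh-validation step mentioned above, the Grid Convergence Index is typically computed with Roache's formula from solutions on two grids. The sketch below is a minimal illustration with hypothetical pressure-rise values (the abstract does not report the underlying grid solutions), not the authors' actual computation.

```python
def gci(f_fine, f_coarse, r, p, fs=1.25):
    """Grid Convergence Index (Roache): estimated discretization
    uncertainty of the fine-grid solution, as a fraction.
    f_fine/f_coarse: solutions on the fine/coarse grids,
    r: grid refinement ratio, p: observed order of accuracy,
    fs: safety factor (1.25 is customary for three-grid studies)."""
    e = abs((f_coarse - f_fine) / f_fine)  # relative difference
    return fs * e / (r**p - 1)

# Hypothetical pressure-rise values on two grids (illustrative only,
# not the paper's data): refinement ratio 2, second-order scheme.
print(round(gci(100.0, 106.0, r=2, p=2) * 100, 2))  # 2.5 (percent)
```

A GCI of a few percent, as reported in the abstract, indicates that the solution is only weakly sensitive to further mesh refinement.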

Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise

Procedia PDF Downloads 128
130 The Healthcare Costs of BMI-Defined Obesity among Adults Who Have Undergone a Medical Procedure in Alberta, Canada

Authors: Sonia Butalia, Huong Luu, Alexis Guigue, Karen J. B. Martins, Khanh Vu, Scott W. Klarenbach

Abstract:

Obesity is associated with significant personal impacts on health and imposes a substantial economic burden on payers due to increased healthcare use. A contemporary estimate of the healthcare costs associated with obesity at the population level is lacking. This evidence may provide further rationale for weight management strategies. Methods: Adults who underwent a medical procedure between 2012 and 2019 in Alberta, Canada were categorized into the investigational cohort (had body mass index [BMI]-defined class 2 or 3 obesity based on a procedure-associated code) and the control cohort (did not have the BMI procedure-associated code); those who had bariatric surgery were excluded. Characteristics were presented and healthcare costs ($CDN) were determined over a 1-year observation period (2019/2020). Logistic regression and a generalized linear model with log link and gamma distribution were used to assess total healthcare costs (comprising hospitalizations, emergency department visits, ambulatory care visits, physician visits, and outpatient prescription drugs); potential confounders included age, sex, region of residence, and whether the medical procedure was performed within 6 months before the observation period in the partial adjustment, plus the type of procedure performed, socioeconomic status, Charlson Comorbidity Index (CCI), and seven obesity-related health conditions in the full adjustment. Cost ratios and estimated cost differences with 95% confidence intervals (CI) were reported; incremental cost differences within the adjusted models represent referent cases. 
Results: The investigational cohort (n=220,190) was older (mean age: 53 standard deviation [SD]±17 vs 50 SD±17 years), had more females (71% vs 57%), lived in rural areas to a greater extent (20% vs 14%), experienced a higher overall burden of disease (CCI: 0.6 SD±1.3 vs 0.3 SD±0.9), and was less socioeconomically well-off (material/social deprivation was lower [14%/14%] in the most well-off quintile vs 20%/19%) compared with controls (n=1,955,548). Unadjusted total healthcare costs were estimated to be 1.77 times (95% CI: 1.76, 1.78) higher in the investigational versus control cohort; each healthcare resource contributed to the higher cost ratio. After adjusting for potential confounders, the total healthcare cost ratio decreased but remained higher in the investigational versus control cohort (partial adjustment: 1.57 [95% CI: 1.57, 1.58]; full adjustment: 1.21 [95% CI: 1.20, 1.21]); each healthcare resource contributed to the higher cost ratio. Among urban-dwelling 50-year-old females who previously had non-operative procedures, no procedures performed within 6 months before the observation period, a social deprivation index score of 3, a CCI score of 0.32, and no history of select obesity-related health conditions, the predicted cost difference between those living with and without obesity was $386 (95% CI: $376, $397). Conclusions: If these findings hold for the Canadian population, one would expect an estimated additional $3.0 billion per year in healthcare costs nationally related to BMI-defined obesity (based on an adult obesity rate of 26% and an estimated annual incremental cost of $386 [21%]); incremental costs are higher when obesity-related health conditions are not adjusted for. Results of this study provide additional rationale for investment in interventions that are effective in preventing and treating obesity and its complications.
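The national extrapolation in the conclusion can be reproduced with simple arithmetic. The sketch below assumes an adult population of roughly 30 million, a figure consistent with Canada but not stated in the abstract:

```python
# Back-of-envelope reproduction of the abstract's national estimate.
adult_population = 30_000_000   # assumed Canadian adult population (not in abstract)
obesity_rate = 0.26             # BMI-defined adult obesity rate (from abstract)
incremental_cost = 386          # $CDN per person per year, referent case (from abstract)

national_cost = adult_population * obesity_rate * incremental_cost
print(f"${national_cost / 1e9:.1f} billion per year")  # $3.0 billion per year
```

This matches the abstract's $3.0 billion figure, confirming that the national estimate is simply the per-person incremental cost scaled by the number of adults living with obesity.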

Keywords: administrative data, body mass index-defined obesity, healthcare cost, real world evidence

Procedia PDF Downloads 109
129 A Demonstration of How to Employ and Interpret Binary IRT Models Using the New IRT Procedure in SAS 9.4

Authors: Ryan A. Black, Stacey A. McCaffrey

Abstract:

Over the past few decades, great strides have been made toward improving the science of measuring psychological constructs. Item Response Theory (IRT) has been the foundation upon which statistical models have been derived to increase both precision and accuracy in psychological measurement. These models are now widely used to develop and refine tests intended to measure an individual's level of academic achievement, aptitude, and intelligence. Recently, the field of clinical psychology has adopted IRT models to measure psychopathological phenomena such as depression, anxiety, and addiction. Because advances in IRT measurement models are being made so rapidly across various fields, it has become quite challenging for psychologists and other behavioral scientists to keep abreast of the most recent developments, much less learn how to employ them and decide which models are the most appropriate for their line of work. At the same time, IRT measurement models vary greatly in complexity in several interrelated ways, including but not limited to the number of item-specific parameters estimated in a given model, the function that links the expected response and the predictor, response option formats, and dimensionality. As a result, inferior methods (i.e., Classical Test Theory methods) continue to be employed to measure psychological constructs, despite evidence showing that IRT methods yield more precise and accurate measurement. To increase the use of IRT methods, this study endeavors to provide a comprehensive overview of binary IRT models, that is, measurement models employed on test data consisting of binary response options (e.g., correct/incorrect, true/false, agree/disagree). Specifically, this study covers binary IRT models from the most basic, the 1-parameter logistic (1-PL) model dating back more than 50 years, up to the most recent and complex 4-parameter logistic (4-PL) model. 
Binary IRT models will be defined mathematically, and the interpretation of each parameter will be provided. Next, all four binary IRT models will be applied to two sets of data: (1) simulated data of N=500,000 subjects who responded to four dichotomous items, and (2) a pilot analysis of real-world data collected from a sample of approximately 770 subjects who responded to four self-report dichotomous items pertaining to the emotional consequences of alcohol use. The real-world data were based on responses to items administered to subjects as part of a scale-development study (NIDA Grant No. R44 DA023322). IRT analyses conducted on both the simulated data and the real-world pilot data will provide a clear demonstration of how to construct, evaluate, and compare binary IRT measurement models. All analyses will be performed using the new IRT procedure in SAS 9.4. SAS code to generate the simulated data and analyses will be available upon request to allow for replication of results.
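As context for the family of models the demonstration covers, the 4-PL item response function (of which the 1-PL is a special case) can be sketched in a few lines. This is the generic textbook formulation, not the SAS PROC IRT code used in the study, and the parameter values below are illustrative only.

```python
import math

def four_pl(theta, a, b, c, d):
    """4-parameter logistic IRT model: probability of endorsing/answering
    an item correctly given latent trait level theta.
    a: discrimination, b: difficulty, c: lower asymptote (guessing),
    d: upper asymptote (slipping)."""
    return c + (d - c) / (1 + math.exp(-a * (theta - b)))

def one_pl(theta, b):
    """1-PL (Rasch-type) model: the 4-PL with a=1, c=0, d=1."""
    return four_pl(theta, a=1.0, b=b, c=0.0, d=1.0)

# At theta == b the response probability is the midpoint of the asymptotes.
print(round(four_pl(0.0, a=1.5, b=0.0, c=0.2, d=0.95), 3))  # 0.575
```

The 2-PL and 3-PL models fall out of the same function by fixing d=1, or d=1 and c=0, respectively, which is why the four models can be compared as nested alternatives.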

Keywords: instrument development, item response theory, latent trait theory, psychometrics

Procedia PDF Downloads 358
128 Unpacking the Rise of Social Entrepreneurship over Sustainable Entrepreneurship among Sri Lankan Exporters in SMEs Sector: A Case Study in Sri Lanka

Authors: Amarasinghe Shashikala, Pramudika Hansini, Fernando Tajan, Rathnayake Piyumi

Abstract:

This study investigates the prominence of the social entrepreneurship (SE) model over the sustainable entrepreneurship model among Sri Lankan exporters in the small and medium enterprise (SME) sector. The primary objective of this study is to explore how the unique socio-economic contextual nuances of the country influence this behavior. The study employs a multiple-case study approach, collecting data from thirteen SEs in the SME sector. The findings reveal a significant alignment between SE and the lifestyle of the people in Sri Lanka, attributed largely to the country's deep-rooted religious setting and cultural norms. A crucial factor driving the prominence of SE is the predominantly labor-intensive nature of production processes among exporters in the SME sector. These processes inherently lend themselves to SE, providing employment opportunities and fostering community engagement. Further, SE initiatives resonate substantially with community-centric practices, making them more appealing and accessible to the local populace. In contrast, the findings highlight a dilemma between cost-effectiveness and sustainable entrepreneurship. Transitioning to sustainable export products and production processes is demanded by foreign buyers and acknowledged as essential for environmental stewardship, but it often requires capital-intensive overhauls. This investment inevitably raises the overall cost of the export product, making it less competitive in the global market. Interestingly, the study notes a disparity between the international demand for sustainable products and the willingness of buyers to pay a premium for them. Despite the growing global preference for eco-friendly options, the findings suggest that the additional costs associated with sustainable entrepreneurship are not adequately reflected in the purchasing behavior of international buyers. 
The abundance of natural resources coupled with a minimal occurrence of natural catastrophes renders exporters less environmentally sensitive. The absence of robust policy support for environmental preservation exacerbates this inclination. Consequently, exporters exhibit a diminished motivation to incorporate environmental sustainability into their business decisions. Instead, attention is redirected towards factors such as the local population's minimum standards of living, prevalent social issues, governmental corruption and inefficiency, and rural poverty. These elements impel exporters to prioritize social well-being when making business decisions. Notably, the emphasis on social impact, rather than environmental impact, appears to be a generational trend, perpetuating a focus on societal aspects in the realm of business. In conclusion, the manifestation of entrepreneurial behavior within developing nations is notably contingent upon contextual nuances. This investigation contributes to a deeper understanding of the dynamics shaping the prevalence of SE over sustainable entrepreneurship among Sri Lankan exporters in the SME sector. The insights generated have implications for policymakers, industry stakeholders, and academics seeking to navigate the delicate balance between socio-cultural values, economic feasibility, and environmental sustainability in the pursuit of responsible business practices within the export sector.

Keywords: small and medium enterprises, social entrepreneurship, Sri Lanka, sustainable entrepreneurship

Procedia PDF Downloads 77
127 Improving Working Memory in School Children through Chess Training

Authors: Veena Easvaradoss, Ebenezer Joseph, Sumathi Chandrasekaran, Sweta Jain, Aparna Anna Mathai, Senta Christy

Abstract:

Working memory refers to a cognitive processing space where information is received, managed, transformed, and briefly stored. It is an operational process of transforming information for the execution of cognitive tasks in different and new ways. Many classroom activities require children to remember information and mentally manipulate it. While the impact of chess training on intelligence and academic performance has been unequivocally established, its impact on working memory needs to be studied. This study, funded by the Cognitive Science Research Initiative, Department of Science & Technology, Government of India, analyzed the effect of one year of chess training on the working memory of children. A pretest–posttest control group design was used, with 52 children in the experimental group and 50 children in the control group. The sample was selected from children studying in school (grades 3 to 9) and included both genders. The experimental group underwent weekly chess training for one year, while the control group was involved in extracurricular activities. Working memory was measured by two subtests of WISC-IV INDIA. The Digit Span subtest involves recalling a list of numbers of increasing length presented orally in forward and in reverse order, and the Letter–Number Sequencing subtest involves rearranging jumbled letters and numbers presented orally according to a given rule. Both tasks require the child to receive and briefly store information, manipulate it, and present it in a changed format. The children were trained using the Winning Moves curriculum and audio-visual learning methods, with hands-on chess training; they recorded their games using score sheets and analyzed their mistakes, thereby increasing their meta-analytical abilities. They were also trained in opening theory, checkmating techniques, end-game theory, and tactical principles. Pre-test equivalence of means was established. 
Analysis revealed that the experimental group had significant gains in working memory compared to the control group. The present study clearly establishes a link between chess training and working memory. The transfer of chess training to the improvement of working memory could be attributed to the fact that while playing chess, children evaluate positions, visualize new positions in their minds, analyze the pros and cons of each move, and choose moves based on the information stored in their minds. If working memory's capacity could be expanded or made to function more efficiently, it could result in the improvement of executive functions as well as the scholastic performance of the child.

Keywords: chess training, cognitive development, executive functions, school children, working memory

Procedia PDF Downloads 265
126 Insecticidal Activity of Bacillus thuringiensis Strain AH-2 against Hemiptera Insect Pests: Aphis gossypii, and Lepidoptera Insect Pests: Plutella xylostella and Hyphantria cunea

Authors: Ajuna B. Henry

Abstract:

In recent decades, climate change has increased the demand for biological pesticides; more Bt strains are being discovered worldwide, some containing novel insecticidal genes while others have been modified through molecular approaches for increased yield, toxicity, and a wider host range. In this study, B. thuringiensis strain AH-2 (Bt-2) was isolated from soil and tested for insecticidal activity against Aphis gossypii (Hemiptera: Aphididae) and the Lepidoptera insect pests fall webworm (Hyphantria cunea) and diamondback moth (Plutella xylostella). A commercial strain, B. thuringiensis subsp. kurstaki (Btk), and a chemical pesticide, imidacloprid (for Hemiptera) or chlorantraniliprole (for Lepidoptera), were used as positive controls, and the same media without bacterial inoculum served as a negative control. For aphicidal activity, Bt-2 caused mortality rates of 70.2%, 78.1%, and 88.4% in third instar nymphs of A. gossypii (3N) at 10%, 25%, and 50% culture concentrations, respectively. Moreover, Bt-2 was effectively produced in a cost-effective medium (PB) supplemented with either glucose (PBG) or sucrose (PBS) and maintained high aphicidal efficacy, with 3N mortality rates of 85.9%, 82.9%, and 82.2% in TSB, PBG, and PBS media, respectively, at 50% culture concentration. Bt-2 also suppressed adult fecundity by 98.3%, compared with only 65.8% suppression by Btk at similar concentrations, but was slightly less effective than the chemical treatment, which caused 100% suppression. Partial purification of the 60–80% (NH4)2SO4 fraction of Bt-2 aphicidal proteins on an anion exchange (DEAE-FF) column revealed a 105 kDa aphicidal protein with LC50 = 55.0 ng/µℓ. For the Lepidoptera pests, the chemical pesticide, Bt-2, and Btk cultures caused mortality of 86.7%, 60%, and 60% in 3rd instar larvae of P. xylostella, and 96.7%, 80.0%, and 93.3% in 6th instar larvae of H. cunea, after 72 h of exposure. 
When the entomopathogenic strains were cultured in the cost-effective PBG or PBS media, the insecticidal activity of all strains was not significantly different from that obtained with the commercial medium (TSB). Bt-2 caused mortality rates of 60.0%, 63.3%, and 50.0% against P. xylostella larvae and 76.7%, 83.3%, and 73.3% against H. cunea when grown in TSB, PBG, and PBS media, respectively. Bt-2 grown in the cost-effective PBG medium caused dose-dependent toxicity of 26.7%, 40.0%, and 63.3% against P. xylostella and 46.7%, 53.3%, and 76.7% against H. cunea at 10%, 25%, and 50% culture concentrations, respectively. The partially purified Bt-2 insecticidal protein fractions F1, F2, F3, and F4 (extracted at different ratios of organic solvent) caused low toxicity (50.0%, 40.0%, 36.7%, and 30.0%) against P. xylostella and relatively high toxicity (56.7%, 76.7%, 66.7%, and 63.3%) against H. cunea at 100 µg/g of artificial diet. SDS-PAGE analysis revealed that a 128 kDa protein is associated with the toxicity of Bt-2. Our results demonstrate medium and strong larvicidal activity of Bt-2 against P. xylostella and H. cunea, respectively. Moreover, Bt-2 can be produced in the cost-effective PBG medium, which makes it an effective alternative biocontrol strategy for reducing chemical pesticide application.
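To illustrate how a dose-mortality series like the one above translates into a lethal-concentration estimate, the sketch below linearly interpolates the concentration giving 50% mortality from the reported P. xylostella data. The study itself would use a formal probit or logit analysis for its LC50 values, so this interpolation is illustrative only.

```python
def lc50_linear(doses, mortality):
    """Estimate the dose giving 50% mortality by linear interpolation
    between the two observations that bracket 50%.
    doses: culture concentrations (%); mortality: observed % killed."""
    pairs = list(zip(doses, mortality))
    for (d0, m0), (d1, m1) in zip(pairs, pairs[1:]):
        if m0 <= 50.0 <= m1:
            return d0 + (50.0 - m0) * (d1 - d0) / (m1 - m0)
    raise ValueError("50% mortality is not bracketed by the data")

# Bt-2 vs P. xylostella larvae (from the abstract): 26.7/40.0/63.3%
# mortality at 10/25/50% culture concentration.
print(round(lc50_linear([10, 25, 50], [26.7, 40.0, 63.3]), 1))  # 35.7
```

On these numbers, roughly a 36% culture concentration would be expected to kill half the larvae, which is consistent with the reported dose-dependent trend.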

Keywords: biocontrol, insect pests, larvae/nymph mortality, cost-effective media, aphis gossypii, plutella xylostella, hyphantria cunea, bacillus thuringiensis

Procedia PDF Downloads 20
125 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning

Authors: Hossein Havaeji, Tony Wong, Thien-My Dao

Abstract:

1. Research Problems and Research Objectives: A Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) is a system that uses BT to improve SCS transparency, security, durability, and process integrity, since SCS data is not always visible, available, or trusted. The costs of operating BT in the SCS are a common problem in several organizations. These costs must be estimated because they can impact existing cost control strategies. To account for system and deployment costs, a central hurdle must be overcome: the costs of developing and running BT in an SCS are not yet clear in most cases. Many industries aiming to use BT pay special attention to BT installation cost, which has a direct impact on the total costs of the SCS. Predicting BT installation cost in an SCS may help managers decide whether BT offers an economic advantage. The purpose of this research is to identify the main BT installation cost components in an SCS needed for deeper cost analysis. We then identify and categorize the main groups of cost components in more detail in order to use them in the prediction process. The second objective is to determine a suitable Supervised Learning technique to predict the costs of developing and running BT in an SCS in a particular case study. The last aim is to investigate how the cost of running BT contributes to the total cost of the SCS. 2. Work Performed: Applied successfully in various fields, Supervised Learning is a method of framing the data, preprocessing it, and training the chosen model. It is a learning approach aimed at predicting an outcome measurement from a set of previously unseen input data. The following steps are conducted to pursue the objectives of our subject. The first step is a literature review to identify the different cost components of BT installation in an SCS. 
Based on the literature review, we choose Supervised Learning methods that are suitable for BT installation cost prediction in an SCS. According to the literature review, Supervised Learning algorithms that provide a powerful tool to classify BT installation components and predict BT installation cost include the Support Vector Regression (SVR) algorithm, the Back Propagation (BP) neural network, and the Artificial Neural Network (ANN). Choosing a case study to feed data into the models is the third step. Finally, we will propose the model with the best predictive performance to find the minimum BT installation costs in an SCS. 3. Expected Results and Conclusion: This study aims to propose a cost prediction of BT installation in an SCS with the help of Supervised Learning algorithms. We will first select a case study in the field of BT-enabled SCS and then use Supervised Learning algorithms to predict BT installation cost in the SCS. We will then identify the best predictive performance for developing and running BT in an SCS. Finally, the paper will be presented at the conference.
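As a minimal sketch of the proposed prediction step, the snippet below fits a one-feature regression on hypothetical cost data. It stands in for the SVR, BP, and ANN models named in the abstract so as to stay dependency-free; the feature (network size), the training pairs, and the closed-form least-squares fit are all our assumptions for illustration.

```python
def fit_line(xs, ys):
    """Closed-form simple linear regression: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical training pairs: (number of supply-chain network nodes,
# observed BT installation cost in $k). Invented for illustration.
nodes = [5, 10, 20, 40]
cost = [60, 95, 180, 330]

slope, intercept = fit_line(nodes, cost)
predicted = slope * 30 + intercept  # predicted cost for a 30-node system
print(round(predicted, 1))
```

In the actual study, the same train-then-predict workflow would be carried out with SVR or a neural network on real case-study data, and the models would be compared on predictive performance.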

Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning

Procedia PDF Downloads 122
124 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method

Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek

Abstract:

Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., Kolmogorov scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions to approximate the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases on the order of billions of solution points. Running big simulations requires a considerable amount of RAM. Therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information of the domain for communication, must be done sequentially on a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It extracts from the mesh file the minimum amount of information required for the DSEM code to start in parallel and stores it in text files (pre-files). It packs integer-type information in a stream binary format into pre-files that are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O, for Lustre, so that each MPI rank acquires its information from the file in parallel. 
In the case of GPFS, on each computational node a single MPI rank reads data from the file, which is generated specifically for that computational node, and sends it to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node and signals do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory’s Mira (GPFS), the National Center for Supercomputing Applications’ Blue Waters (Lustre), the San Diego Supercomputer Center’s Comet (Lustre), and UIC’s Extreme (Lustre). The tests showed that one file per node is best suited for GPFS, while parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, for the calculation of the solution in every time step. For this, the code can use either its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact, together with the discontinuous nature of the method, makes the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed scalable and efficient parallel performance of the code.
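The pre-file idea, packing integer metadata in a portable stream-binary layout so any machine can read it quickly, can be illustrated in a few lines. The record layout below (a count header followed by 32-bit element IDs, fixed little-endian) is our assumption for illustration, not the actual DSEM pre-file format.

```python
import struct

def write_prefile(path, element_ids):
    """Write a toy 'pre-file': a 32-bit count header followed by the
    element IDs as packed little-endian 32-bit integers. Fixing the
    byte order ("<") keeps the file portable between machines."""
    with open(path, "wb") as f:
        f.write(struct.pack("<i", len(element_ids)))
        f.write(struct.pack(f"<{len(element_ids)}i", *element_ids))

def read_prefile(path):
    """Read the toy pre-file back: count header, then that many ints."""
    with open(path, "rb") as f:
        (n,) = struct.unpack("<i", f.read(4))
        return list(struct.unpack(f"<{n}i", f.read(4 * n)))

write_prefile("demo.pre", [42, 7, 1001])
print(read_prefile("demo.pre"))  # [42, 7, 1001]
```

Because every record has a fixed, known size, a reader can seek directly to its own portion of the file, which is what makes this kind of layout friendly to parallel MPI I/O.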

Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow

Procedia PDF Downloads 133
123 A Case of Prosthetic Vascular-Graft Infection Due to Mycobacterium fortuitum

Authors: Takaaki Nemoto

Abstract:

Case presentation: A 69-year-old Japanese man presented with a low-grade fever and fatigue that had persisted for one month. The patient had an aortic dissection of the aortic arch 13 years prior, an abdominal aortic aneurysm seven years prior, and an aortic dissection of the distal aortic arch one year prior, all of which were treated with artificial blood-vessel replacement surgery. Laboratory tests revealed an inflammatory response (CRP 7.61 mg/dl), high serum creatinine (Cr 1.4 mg/dL), and elevated transaminases (AST 47 IU/L, ALT 45 IU/L). The patient was admitted to our hospital on suspicion of prosthetic vascular graft infection. Following further workup of the inflammatory response, enhanced chest computed tomography (CT) and non-enhanced chest diffusion-weighted MRI (DWI) were performed. The patient was diagnosed with a pulmonary fistula and a prosthetic vascular graft infection of the distal aortic arch. After admission, the patient was administered ceftriaxone and vancomycin for 10 days, but his fever and inflammatory response did not improve. On day 13 of hospitalization, lung fistula repair surgery and an omental filling operation were performed, and meropenem and vancomycin were administered. The fever and inflammatory response continued, and we therefore took repeated blood cultures. M. fortuitum was detected in a blood culture on day 16 of hospitalization. As a result, we changed the treatment regimen to amikacin (400 mg/day), meropenem (2 g/day), and cefmetazole (4 g/day), and the fever and inflammatory response began to decrease gradually. A susceptibility test for Mycobacterium fortuitum showed a low MIC for fluoroquinolone antibacterial agents. The clinical course was good, and the patient was discharged after a total of 8 weeks of intravenous drug administration. 
At discharge, we changed the treatment regimen to levofloxacin (500 mg/day) and clarithromycin (800 mg/day), prescribing these two drugs as lifelong suppressive therapy. Discussion: There are few cases of prosthetic vascular graft infection caused by mycobacteria, and a standard therapy remains to be established. For prosthetic vascular graft infections, it is ideal to provide surgical and medical treatment in parallel, but in this case surgical treatment was difficult and a conservative treatment was therefore chosen. We attempted to increase the treatment success rate for this refractory disease by conducting a susceptibility test for mycobacteria and treating with different combinations of antimicrobial agents, which was ultimately effective. With our treatment approach, a good clinical course was obtained and continues at present. Conclusion: Although prosthetic vascular graft infection caused by mycobacteria is a refractory infectious disease, it may be curable with appropriate antibiotics based on susceptibility testing in addition to surgical treatment.

Keywords: prosthetic vascular graft infection, lung fistula, Mycobacterium fortuitum, conservative treatment

Procedia PDF Downloads 157
122 How Restorative Justice Can Inform and Assist the Provision of Effective Remedies to Hate Crime, Case Study: The Christchurch Terrorist Attack

Authors: Daniel O. Kleinsman

Abstract:

The 2019 terrorist attack on two masjidain in Christchurch, New Zealand, was a shocking demonstration of the harm that can be caused by hate crime. As legal and governmental responses to the attack struggle to provide effective remedies to its victims, restorative justice has emerged as a tool that can assist, in terms of both meeting victims’ needs and discharging the obligations of the state under the International Covenant on Civil and Political Rights (ICCPR), arts 2(3), 26, 27. Restorative justice is a model that emphasizes the repair of harm caused or revealed by unjust behavior. It also prioritises the facilitation of dialogue, the restoration of equitable relationships, and the prevention of future harm. Returning to the case study, in the remarks of the sentencing judge, the terrorist’s actions were described as a hate crime of vicious malevolence that the Court was required to decisively reject, as anathema to the values of acceptance, tolerance and mutual respect upon which New Zealand’s inclusive society is based and which the country strives to maintain. This was one of the reasons for which the terrorist received a life sentence with no possibility of parole. However, in the report of the Royal Commission of Inquiry into the Attack, it was found that victims felt the attack occurred within the context of widespread racism, discrimination and Islamophobia, where hostile behaviors, including hate-based threats and attacks, were rarely recorded, analysed or acted on. It was also found that the Government had inappropriately concentrated intelligence resources on the risk of ‘Islamist’ terrorism and had failed to adequately respond to concerns raised about threats against the Muslim community. 
In this light, the remarks of the sentencing judge can be seen to reflect a criminal justice system that, in the absence of other remedies, denies systemic accountability and renders hate crime an isolated incident rather than an expression of more widespread discrimination and hate to be holistically addressed. One of the recommendations of the Royal Commission was to explore with victims the desirability and design of restorative justice processes. This presents an opportunity for victims to meet with state representatives and pursue effective remedies (ICCPR art 2(3)), not only for the harm caused by the terrorist but also for the harm revealed by a system that has exposed the minority Muslim community in New Zealand to hate in all its forms, including but not limited to violent extremism. In this sense, restorative justice can also assist the state in discharging its wider obligations to protect all persons from discrimination (art 26) and to allow ethnic and religious minorities to enjoy their own culture and to profess and practise their own religion (art 27). It can also help give effect to the law and its purpose as a remedy to hate crime, as expressed in this case study by the sentencing judge.

Keywords: hate crime, restorative justice, minorities, victims' rights

Procedia PDF Downloads 111